Company: AMD
Location: Vancouver, BC, Canada
Career Level: Mid-Senior Level
Industries: Technology, Software, IT, Electronics

Description

WHAT YOU DO AT AMD CHANGES EVERYTHING 

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.  Together, we advance your career.  

THE ROLE:

Triton is a widely adopted language and compiler for high-performance GPU kernels, powering major AI frameworks such as PyTorch, vLLM, and SGLang. As AI workloads increasingly rely on Triton-based kernels, first-class Triton support is strategically critical to AMD's AI software roadmap. AMD GPUs are an official Triton backend, and delivering industry-leading Triton performance on AMD Instinct accelerators is a top priority for AMD. The performance and usability of Triton directly impact the competitiveness of AMD hardware in large-scale AI training and inference.

In this role you will author state-of-the-art, performant Triton/Gluon kernels for the ML operations powering the latest AI models. You will collaborate with research, compiler, and hardware architecture teams to co-design high-performance solutions and analyze bottlenecks, making AMD GPUs the best-in-class platform for Triton-powered AI workloads.

THE PERSON:

The ideal candidate has deep expertise in SIMT programming, parallel algorithms, GPU architecture, and performance engineering. You are comfortable working across the full stack to drive end-to-end model performance, from vLLM/SGLang down to ISA-level performance tuning, and can perform rigorous quantitative analysis to drive measurable improvements. You thrive in highly technical environments, enjoy solving complex performance problems, and are excited to collaborate across model deployment, compiler, runtime, and hardware teams. Most importantly, you are curious, hands-on, and willing to learn and work across boundaries.

KEY RESPONSIBILITIES:
  • Design, research, implement, and rigorously optimize high-performance matmul, attention (flash, paged, grouped-query), MoE, and fully fused transformer kernels using Triton, targeting large-scale LLM and multimodal workloads
  • Own and productionize critical Triton/Gluon kernels within vLLM and SGLang (e.g., paged attention, extend attention, MoE, quantized kernels), ensuring correctness, scalability, and peak throughput
  • Partner closely with compiler engineers to develop and maintain the Triton AMD backend across ROCm and the LLVM AMDGPU stack, targeting CDNA and next-generation architectures
  • Drive deep kernel-level optimizations across the AMD memory hierarchy (LDS, L2, HBM), wavefront execution (wave32/wave64), vectorization, MFMA utilization, occupancy tuning, and instruction scheduling to maximize hardware efficiency
  • Perform rigorous profiling- and microbenchmarking-led optimization on AMD Instinct GPUs using hardware counters and tracing tools; root-cause bottlenecks in memory bandwidth, latency hiding, synchronization, and register pressure
  • Debug and resolve performance and correctness issues end-to-end across PyTorch, vLLM/SGLang runtimes, Triton IR/MLIR, the ROCm runtime, and the LLVM AMDGPU backend
  • Contribute to open-source Triton, LLVM, and ROCm ecosystems
PREFERRED EXPERIENCE:
  • 3+ years of experience in GPU kernel development, compiler backends, or performance engineering focused on AI/ML workloads
  • Strong hands-on expertise with Triton, including writing custom matmul, attention, and fused transformer kernels and understanding Triton IR lowering to GPU backends
  • Deep understanding of modern GPU architectures (wavefront execution, memory hierarchy, scheduling, occupancy)
  • Meaningful contributions to open-source projects such as Triton, Torch, vLLM, SGLang, IREE, MLIR, LLVM, or ROCm, with a strong collaborative and upstream-first engineering mindset
PREFERRED ACADEMIC CREDENTIALS:

Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience.

This role is not eligible for visa sponsorship.

 


#LI-G11

#LI-HYBRID

Benefits offered are described in: AMD benefits at a glance.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

 

AMD may use Artificial Intelligence to help screen, assess, or select applicants for this position. AMD's "Responsible AI Policy" is available here.

 

This posting is for an existing vacancy.