GPU Performance Software Development Engineer

Advanced Micro Devices, Inc.
$192,000.00 to $288,000.00 per year
San Jose, California, United States
2100 Logic Drive
Jan 20, 2026


WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

Mission

Wave is a high-performance GPU programming language and compiler built for modern machine-learning workloads. It combines a Python-embedded DSL with an MLIR-based compiler stack to let engineers write kernels that are both expressive and fast.
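The Python-embedded-DSL pattern can be sketched as follows. This is a generic illustration only, not Wave's actual API; every name in it (`kernel`, `KERNELS`, `vec_add`) is hypothetical:

```python
# Generic sketch of a Python-embedded kernel DSL (hypothetical names,
# not Wave's real API). A decorator registers kernel functions so a
# compiler could later trace them and lower to MLIR; here we only run
# the reference Python semantics.

KERNELS = {}

def kernel(fn):
    """Register fn as a kernel; a real DSL would capture its trace."""
    KERNELS[fn.__name__] = fn
    return fn

@kernel
def vec_add(a, b):
    """Reference semantics: elementwise add of two Python lists."""
    return [x + y for x, y in zip(a, b)]

print(vec_add([1.0, 2.0], [3.0, 4.0]))  # → [4.0, 6.0]
```

In a real system the decorator would not execute the function directly but record its operations as an IR, which is what makes kernels both expressive to write and available to an optimizing compiler.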

Your mission will be to own the end-to-end performance of Wave's GPU kernels. You will design, implement, and continuously optimize hand-tuned kernels (GEMM, Attention, MoE, decoding) while shaping compiler and MLIR infrastructure to extract peak performance on modern accelerators. You will take responsibility for kernel performance, diagnose bottlenecks down to the instruction and scheduling level, and work across kernel code, compiler passes, and hardware models to close performance gaps against vendor libraries.

Core Responsibilities
* Own kernel performance for Wave
  * Optimize critical kernels (GEMM, Attention, MoE, decoding) to be competitive with or exceed vendor libraries.
  * Profile, analyze, and eliminate bottlenecks across memory, registers, instruction scheduling, and wave/warp execution.
* Low-level GPU optimization
  * Write and tune kernels using HIP / CUDA / inline assembly / intrinsics (e.g., MFMA / MMA).
  * Optimize LDS/shared-memory usage, register allocation, instruction scheduling, occupancy, and wave/warp utilization.
  * Reason about hardware details such as waves/warps, WGP/SM behavior, pipelines, cache hierarchies, and memory systems.
* Compiler & MLIR integration
  * Extend and optimize MLIR dialects and lowering pipelines relevant to GPU code generation.
  * Bridge high-level representations (FX / Python DSL) to low-level MLIR and ISA-aware transformations.
  * Implement compiler passes for tiling, vectorization, prefetching, pipelining, and layout transformations.
* Performance modeling & tooling
  * Build mental and empirical performance models to guide kernel design.
  * Use profiling tools (e.g., rocprof, Nsight, custom counters) and disassembly to validate hypotheses.
  * Create internal benchmarks, microkernels, and performance regression tests.
* Architecture bring-up
  * Lead kernel and compiler optimization for new GPU architectures.
  * Adapt kernels and compiler strategies to evolving hardware capabilities.
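The tiling transformation listed above can be illustrated in plain Python. This is a sketch of the loop structure such a compiler pass would produce (real passes rewrite MLIR, not Python, and `TILE` here is an arbitrary illustrative value, not a tuned tile size):

```python
# Sketch of loop tiling: the matmul loop nest is split into tile loops
# and intra-tile loops so that a TILE x TILE block of data stays hot in
# fast memory (LDS/shared memory on a GPU, cache on a CPU).

TILE = 2  # tiny tile for illustration; real tiles are sized to LDS capacity

def matmul_tiled(a, b, n):
    """C = A @ B on n x n nested lists, with a tiled loop nest."""
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, TILE):              # tile loops
        for jj in range(0, n, TILE):
            for kk in range(0, n, TILE):
                for i in range(ii, min(ii + TILE, n)):   # intra-tile loops
                    for k in range(kk, min(kk + TILE, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + TILE, n)):
                            c[i][j] += aik * b[k][j]
    return c
```

The `min(..., n)` guards handle sizes that are not multiples of the tile, which is the same boundary-handling problem a codegen pipeline faces when emitting peeled or predicated epilogue loops.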

Required Qualifications
* Deep GPU performance expertise
  * Proven experience optimizing GPU kernels at the instruction and memory-system level.
  * Strong understanding of GPU execution models (waves/warps, occupancy, latency hiding).
* Low-level programming
  * Proficiency in C++ and GPU programming (HIP or CUDA).
  * Experience with GPU intrinsics, inline PTX / GCN assembly, or equivalent low-level code.
* Compiler experience
  * Hands-on experience with compilers, preferably MLIR.
  * Familiarity with compiler IRs, lowering pipelines, and performance-critical transformations.
* Performance analysis
  * Ability to read disassembly, analyze performance counters, and reason from first principles.
  * Track record of closing performance gaps against strong baselines.
* Master's degree in Computer Science or a related field
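The first-principles reasoning called for above can start from something as simple as a roofline estimate. The sketch below uses made-up placeholder peak numbers, not any particular GPU's specifications:

```python
# Minimal roofline-style model: attainable runtime is bounded by the
# slower of the compute-bound and memory-bound limits. Peak numbers
# below are placeholders for illustration, not real hardware specs.

def roofline_time_s(flops, bytes_moved, peak_flops, peak_bw):
    """Lower bound on runtime: max of compute-bound and memory-bound times."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Square FP32 GEMM: 2*N^3 FLOPs, roughly 3*N^2*4 bytes if each matrix
# crosses the memory bus once (i.e., assuming ideal on-chip reuse).
N = 4096
flops = 2 * N**3
bytes_moved = 3 * N * N * 4

PEAK_FLOPS = 100e12   # placeholder: 100 TFLOP/s
PEAK_BW = 2e12        # placeholder: 2 TB/s

t = roofline_time_s(flops, bytes_moved, PEAK_FLOPS, PEAK_BW)
intensity = flops / bytes_moved  # FLOPs per byte
print(f"arithmetic intensity ~ {intensity:.0f} FLOP/B, time ~ {t * 1e3:.2f} ms")
```

Comparing a measured kernel time against this bound tells you whether the next optimization should target data movement or instruction throughput.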

Strongly Preferred
* Experience with AMD GPUs (ROCm, CDNA, MI-series) or NVIDIA GPUs (Ampere/Hopper/Blackwell).
* Experience designing or maintaining a DSL, compiler backend, or GPU codegen pipeline.
* Background in linear algebra kernels, attention mechanisms, or ML workloads.
* Comfort working across Python frontends, MLIR, and backend codegen.
* PhD in Computer Science or a related field


#LI-G11

#LI-HYBRID

Benefits offered are described in AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.

This posting is for an existing vacancy.
