Company: AMD
Location: San Jose, CA
Career Level: Director
Industries: Technology, Software, IT, Electronics

Description
WHAT YOU DO AT AMD CHANGES EVERYTHING 

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.  Together, we advance your career.  



THE ROLE:

 

AMD is looking for a Principal Engineer to serve as a hands-on technical team lead driving the performance and scalability of frontier AI workloads on AMD GPUs, including large language models, mixture-of-experts architectures, and diffusion models. You will lead a team of engineers, define the long-term technical vision, make critical architecture decisions, and tackle the hardest performance challenges across the stack, from GPU kernels to serving frameworks and distributed systems.

 

 

THE PERSON:

 

The ideal candidate is a deep technical expert with a track record of solving the industry's hardest problems at the intersection of GPU architecture, AI systems, and high-performance software. You understand the full stack, from hardware micro-architecture to model architecture, inference paradigms, and system-level design. You lead through technical depth, influence, and example, staying hands-on while setting direction for your team. If you want to shape how the world runs AI on AMD hardware, this role is for you.

 

 

KEY RESPONSIBILITIES:

 

  • Lead a small team of engineers: set technical direction, prioritize work, and ensure delivery while remaining deeply hands-on
  • Define and drive the long-term technical strategy for AI workload performance on AMD GPUs
  • Own the most complex cross-stack performance challenges, from kernel optimization to framework-level architecture decisions
  • Lead the design and implementation of novel GPU kernels, compiler optimizations, and framework features
  • Establish performance methodology and roofline analysis practices that set the standard for the team
  • Influence upstream roadmaps in major open-source AI frameworks (e.g., vLLM, SGLang, PyTorch)
  • Drive architecture decisions for emerging inference paradigms (e.g., prefill-decode disaggregation, speculative decoding, distributed serving)
  • Identify and close fundamental performance gaps between AMD and competitor platforms
  • Serve as a technical authority across the organization, advising leadership on technical direction and feasibility
  • Mentor engineers and raise the technical bar across the broader engineering organization
  • Represent AMD externally through publications, conference talks, and open-source contributions

 

PREFERRED EXPERIENCE:

  • 10+ years of software development experience in GPU computing, HPC, or AI systems
  • Deep understanding of GPU micro-architecture, memory hierarchy, instruction scheduling, and performance tradeoffs
  • Deep understanding of end-to-end AI systems: model architectures, inference paradigms, and system/rack-level design
  • Understanding of multi-GPU communication: scale-up (NVLink, xGMI, Infinity Fabric) and scale-out (RDMA, RCCL/NCCL) topologies and performance characteristics
  • Experience designing and optimizing across the full stack: from low-level GPU kernels to frameworks and distributed serving systems
  • Strong background in performance engineering, including profiling, roofline analysis, and bottleneck diagnosis at scale
  • Experience with one or more of: HIP, CUDA, OpenCL, Triton/Gluon, CUTLASS, CK
  • Experience with GPU compiler toolchains (e.g., LLVM) and intermediate representations (e.g., MLIR, LLVM IR, Triton IR) is a plus
  • Hands-on experience contributing to or architecting major open-source AI frameworks (e.g., vLLM, SGLang, xDiT, Megatron-LM, PyTorch)
  • Strong proficiency in C++ (C++17 or later) and Python
  • Experience leading small technical teams while remaining a hands-on contributor
  • Track record of influencing technical direction across teams and organizations
  • Strong Linux systems knowledge
  • Excellent written and verbal English communication skills
  • Published research or significant open-source contributions in GPU computing, HPC, or AI systems is a plus

 

ACADEMIC CREDENTIALS:

  • Master's or PhD in Computer Science, Computer Engineering, Electrical Engineering, or equivalent. PhD strongly preferred.

 

 

LOCATION:

  • San Jose, CA preferred

 

#LI-TC1

 

#LI-HYBRID

 

Benefits offered are described in AMD benefits at a glance.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

 

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position.  AMD's “Responsible AI Policy” is available here.

 

This posting is for an existing vacancy.

