Company: AMD
Location: Beijing, China
Career Level: Mid-Senior Level
Industries: Technology, Software, IT, Electronics

Description

WHAT YOU DO AT AMD CHANGES EVERYTHING 

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.



The Role: 

The TrainingAtScale team at AMD is looking for a Training Optimization Engineer to help build and optimize our large-scale training infrastructure on AMD GPUs. 

This is an engineering-focused role centered on improving the performance, stability, and scalability of distributed training systems. You will work closely with internal model and platform teams to advance training infrastructure across pre-training, post-training, reinforcement learning (RL), and world model training frameworks, pushing the boundaries of generative AI model development.

 

Key Responsibilities: 

  • Participate in the development and maintenance of AMD's internal training framework, covering pre-training, post-training, and reinforcement learning (RL) pipelines. 
  • Optimize distributed training pipelines and parallelism strategies (Data Parallelism, Tensor Parallelism, Pipeline Parallelism, ZeRO, etc.). 
  • Improve communication scheduling and kernel overlap to reduce training latency and maximize GPU utilization. 
  • Tune the performance of core operators using HIP/CUDA and low-level profiling tools. 
  • Integrate and adapt open-source training frameworks such as Megatron-LM, TorchTitan, and DeepSpeed. 
  • Support internal model training workloads with performance, reliability, and scalability improvements. 
  • Collaborate across teams to investigate and resolve system-level bottlenecks in large-scale training. 

 

Preferred Qualifications: 

  • Solid engineering background and familiarity with end-to-end deep learning training workflows. 
  • Hands-on experience with training framework internals (e.g., Megatron-LM, TorchTitan, DeepSpeed, Verl, Slime, TransformerEngine). 
  • Strong debugging and performance analysis skills (profiling, tracing, etc.). 
  • Understanding of distributed training techniques such as data parallelism, tensor parallelism, pipeline parallelism, and ZeRO optimization. 
  • Excellent communication and cross-functional collaboration skills. 

 

Bonus Points: 

  • Experience with large-scale model training (e.g., LLMs, MoE, Diffusion, Wan, WorldModel). 
  • Hands-on experience with CUDA or HIP kernel development. 
  • Familiarity with communication libraries such as NCCL/RCCL and techniques like kernel overlap. 
  • Prior involvement in high-performance ML infrastructure projects, especially in pre-training and reinforcement learning (RL) framework development. 

 

ACADEMIC CREDENTIALS: 

  • Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent.




Benefits offered are described: AMD benefits at a glance.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

 

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position.  AMD's “Responsible AI Policy” is available here.

 

This posting is for an existing vacancy.

