Company: AMD
Location: Shanghai, Shanghai, China
Career Level: Entry Level
Industries: Technology, Software, IT, Electronics

Description



WHAT YOU DO AT AMD CHANGES EVERYTHING 

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.  Together, we advance your career.  



THE ROLE:  

As a core member of the team, you will play a pivotal role in developing and optimizing deep learning frameworks for AMD GPUs. Your expertise will be critical in improving GPU kernels, deep learning models, and training/inference performance across multi-GPU and multi-node systems. You will engage with both internal GPU library teams and open-source maintainers to ensure seamless integration of optimizations, applying cutting-edge compiler technologies and sound engineering principles to drive continuous improvement. 

THE PERSON:  

We are looking for a skilled engineer with strong technical and analytical expertise in C++ development within Linux environments. The ideal candidate will thrive in both collaborative team settings and independent work, with the ability to define goals, manage development efforts, and deliver high-quality solutions. Strong problem-solving skills, a proactive approach, and a keen understanding of software engineering best practices are essential. 

KEY RESPONSIBILITIES:  

  • Deep Learning & LLM Framework Optimization: Optimize major DL/LLM frameworks (TensorFlow, PyTorch, vLLM, SGLang) for AMD GPUs and contribute improvements upstream. 
  • GPU Kernel & Operator Optimization: Develop and tune GPU kernels and performance-critical operators to maximize throughput and minimize latency. 
  • Model & Architecture Optimization: Adapt and optimize LLM architectures (e.g., Llama, Qwen, DeepSeek) and apply advanced techniques like FlashAttention, PagedAttention, and quantization. 
  • End-to-End Performance Engineering: Perform comprehensive profiling to identify bottlenecks and implement system, memory, and communication optimizations across multi-GPU and multi-node setups. 
  • Compiler & Pipeline Acceleration: Leverage advanced compiler technologies and graph compilers to enhance the full deep learning and inference pipeline. 
  • Research & Advanced Techniques: Prototype and integrate emerging optimization methods such as speculative decoding and weight-only quantization into production systems. 
  • Cross-Team & Open-Source Collaboration: Collaborate with internal GPU library teams and open-source maintainers to align improvements and ensure seamless upstream integration. 
  • Software Engineering Excellence: Apply robust engineering practices to deliver maintainable, reliable, and production-quality performance optimizations. 
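
For context on the kernel and operator work above, the sketch below shows a minimal Triton elementwise kernel in Python, following the standard Triton vector-add tutorial pattern. It is illustrative only (the kernel name and block size are arbitrary); production kernels for AMD GPUs involve far more tuning of tiling, vectorization, and memory access patterns, and may be written in HIP, assembly, or Composable Kernel instead.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one contiguous block of elements.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements          # guard the tail of the array
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = out.numel()
        grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out

Tuning such a kernel typically means benchmarking different block sizes and fusing adjacent operators so intermediate results never leave registers or on-chip memory.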

MANDATORY EXPERIENCE:  

  • Inference Frameworks, Model Architectures & Optimization Expertise: Deep practical experience with vLLM or SGLang, mastery of modern LLMs (e.g., DeepSeek, Qwen), strong theoretical grounding in Transformer/Attention/MoE/KV Cache, and hands-on application of advanced inference optimizations such as FlashAttention, PagedAttention, continuous batching, and quantization (INT8/INT4/GPTQ/AWQ); a minimal weight-only quantization sketch follows this list. 
  • End-to-End LLM Performance Engineering: Demonstrated ability to profile, diagnose, and optimize compute, memory, and communication bottlenecks across multi-GPU and multi-node environments. 
  • High-Performance Computing: Experience running and optimizing large-scale workloads on heterogeneous clusters with a focus on efficiency, reliability, and scalability. 
  • Deep Learning Framework Integration: Proven ability to integrate optimized GPU kernels into TensorFlow/PyTorch to accelerate large-scale training and inference with strong scaling and throughput. 
  • Software Engineering Excellence & Community Contribution: Strong Python/C++ coding skills, effective debugging and testing practices, proven ability to deliver maintainable performance-critical software, and a track record of open-source contributions with strong self-motivation. 
  • GPU Kernel Development & Optimization is a plus: Hands-on experience designing and tuning high-performance GPU kernels for AMD GPUs using HIP, CUDA, ASM, and tools like CK, CUTLASS, and Triton, with strong knowledge of GCN/RDNA architectures. 
  • Compiler & System-Level Optimization is a plus: Foundational knowledge of LLVM, ROCm, and compiler-driven techniques for improving kernel and system performance. 

ACADEMIC & PREFERRED QUALIFICATIONS:  

  • Bachelor's and/or Master's Degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field. 
  • Low-Level Development Skills: Experience with CUDA C++ programming for writing and debugging high-performance GPU kernels; or practical experience using Triton to develop and optimize deep learning operators. 
  • Compiler Knowledge: Understanding or practical experience with compiler technologies like TVM or MLIR is a significant advantage. 
  • Distributed Systems Experience: Hands-on experience with distributed inference for large-scale models (e.g., Tensor Parallel, Pipeline Parallel). 
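
To make the tensor-parallel idea above concrete, the single-process Python sketch below shards a linear layer's weight along the output dimension and shows that concatenating the per-shard results reproduces the full matmul. It is illustrative only: real tensor-parallel inference keeps each shard resident on its own GPU and replaces the final concatenation with a collective (e.g., all-gather) over torch.distributed with RCCL/NCCL.

    import torch

    def column_parallel_linear(x: torch.Tensor, w: torch.Tensor, n_shards: int = 2) -> torch.Tensor:
        # Split output features across "ranks"; here a plain loop stands in for parallel GPUs.
        shards = torch.chunk(w, n_shards, dim=0)
        partials = [x @ s.t() for s in shards]   # each rank computes its slice of the output
        return torch.cat(partials, dim=-1)       # stands in for an all-gather across ranks

    if __name__ == "__main__":
        torch.manual_seed(0)
        x = torch.randn(4, 64)
        w = torch.randn(128, 64)
        print(torch.allclose(x @ w.t(), column_parallel_linear(x, w), atol=1e-5))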

 

#LI-EH1



Benefits offered are described in AMD benefits at a glance.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

