
Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.
AMD together we advance_
(If you have already applied via BOSS直聘, please do not submit a duplicate application here.)
Responsibilities:
Participate in research and implementation of inference optimization for large language models (LLMs) or multimodal foundation models;
Investigate and apply state-of-the-art model compression and optimization techniques, including quantization, pruning, KV cache optimization, operator fusion, and tensor parallelism;
Analyze and optimize existing inference frameworks (e.g., TensorRT, FasterTransformer, vLLM, DeepSpeed), and contribute to system-level performance improvements;
Support the team in technical validation, benchmarking, and documentation of research outcomes.
Qualifications:
Master's or PhD student in Computer Science, Artificial Intelligence, Electrical Engineering, or a related field;
Solid understanding of deep learning fundamentals and Transformer architectures;
Proficient in at least one mainstream deep learning framework (e.g., PyTorch or TensorFlow), with strong coding skills;
Publications in top-tier AI conferences (e.g., NeurIPS, ICLR, CVPR, ACL, AAAI) are highly preferred;
Prior experience in one or more of the following is a strong plus:
Large model inference acceleration (e.g., quantization, compiler-based optimization, distributed inference)
Model compression techniques (e.g., distillation, structured pruning, sparsity)
High-performance computing (e.g., CUDA programming, kernel-level tensor optimizations)
Familiarity with mainstream LLMs (e.g., LLaMA, GPT, Mistral, DeepSeek)
Open-source contributions to projects such as HuggingFace, vLLM, or OpenXLA
Experience in hardware-aware AI optimization (e.g., NPU/GPU backend tuning, compiler stack development)
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.