Company: AMD
Location: San Jose, CA
Career Level: Director
Industries: Technology, Software, IT, Electronics

Description



WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. 

AMD together we advance_



 

THE ROLE: 

We seek a Principal Machine Learning Performance Engineer to focus on ML training performance optimization, profiling, bottleneck analysis, and optimal mapping of workloads to GPUs.
If you are passionate about performance optimization, getting the best out of the hardware, and shaping the future of AI performance, then this role is for you.

 

THE PERSON:

As a Principal Machine Learning Training Performance Engineer, you will take a leadership role in analyzing and enhancing the performance of cutting-edge ML training models on our GPU hardware. You will mentor junior engineers, lead complex projects, and collaborate with cross-functional teams to develop innovative solutions for performance bottlenecks. Your deep expertise in software optimization, GPU programming, hardware architecture, and deep learning training algorithms will be instrumental in driving performance improvements and shaping our product roadmap.

 

KEY RESPONSIBILITIES: 

• Benchmark, analyze, and optimize training performance of key machine learning models on single and multi-GPU systems, setting the standard for best practices in the team.
• Design and implement advanced GPU kernels and algorithms for tensor operations like matrix multiplication and convolutions used in high-performance ML training libraries and frameworks.
• Collaborate with machine learning researchers, hardware architects, and software engineers to influence the co-design of our ML hardware and software stack.
• Provide technical leadership and mentorship to engineering teams, fostering a culture of excellence in performance optimization and software development.
• Drive strategic initiatives to enhance ML training performance, scalability, and efficiency across the organization.
• Contribute to the development of internal and open-source tools and methodologies for performance analysis and optimization.

 

PREFERRED EXPERIENCE: 

• Strong experience in high-performance computing, software optimization, and GPU programming.
• Thorough understanding of deep learning concepts, training algorithms, and model architectures such as CNNs, RNNs, Transformers, and GANs.
• Framework proficiency: extensive experience with deep learning frameworks such as TensorFlow and PyTorch, as well as training stacks such as DeepSpeed and Megatron-LM, including customizing and optimizing their performance.
• Familiarity with distributed training techniques, scalability challenges, and solutions.
• Deep understanding of CPU and GPU architectures, memory hierarchies, and low-level optimization techniques specific to ML training workloads.
• Strong proficiency in major programming languages such as C++ and Python, and experience developing high-performance computing applications.
• Expertise in GPU programming using HIP, CUDA, or OpenCL for ML training applications.
• Proven ability to lead technical teams, manage complex projects, and deliver results.
• Excellent written and verbal communication skills, with the ability to present complex technical information clearly to various stakeholders.

 

ACADEMIC CREDENTIALS: 

• A PhD, or a Master's degree plus equivalent experience, in computer science, electrical engineering, or a related field.

 

LOCATION:

• San Jose, CA or Bellevue, WA preferred. Other US locations may be considered.

 

#LI-MV1

#LI-HYBRID

#LI-REMOTE



At AMD, your base pay is one part of your total rewards package.  Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for incentives based upon your role such as either an annual bonus or sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD's Employee Stock Purchase Plan. You'll also be eligible for competitive benefits described in more detail here.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

