AMD Job - 48880841 | CareerArc
Company: AMD
Location: San Jose, CA
Career Level: Mid-Senior Level
Industries: Technology, Software, IT, Electronics

Description



WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. 

AMD together we advance_



THE ROLE:

We are looking for a dynamic, energetic candidate to join our growing team in the AI Group. In this role, the individual will be responsible for optimizing the neural processor compiler; developing tools and methodologies to optimize and realize full system performance for AI workloads; architecting and defining kernel dataflow; defining block-level and system-level performance of the Neural Processing Unit (NPU); NPU network performance modeling; and performance bottleneck analysis on pre- and post-silicon platforms.

 

THE PERSON:

You will be tasked with analyzing AI workloads, identifying system-level performance bottlenecks, and finding ways to achieve the best performance and power efficiency.

 

KEY RESPONSIBILITIES:

  • Work with cross-functional teams to optimize various parts of the SW stack – AI compiler, AI frameworks, device drivers, and firmware.
  • Perform block- and system-level performance analysis for the VLIW-based AI Engine processor architecture.
  • Bring up emerging ML models based on CNNs and transformers, and characterize their performance.
  • Develop and validate VLIW-based processor systems on both pre-silicon and post-silicon platforms across different use-case applications.
  • Develop application-specific, reusable kernel code for AI Engine processors.
  • Bring up and debug on pre- and post-silicon platforms.
  • Debug failures on pre- and post-silicon platforms using trace interfaces and waveform viewers.
  • Solve challenging technical problems in complex SoC-based systems that integrate robust algorithms and features.
  • Lead discussions in the AI Engines Technical Solutions Team forum.
  • Be involved in all aspects of integrated product development, including design, prototyping, implementation, testing, and product demonstration.
  • Provide feedback on architecture, use cases, IP design, tools, and documentation.
  • Create reference models using MATLAB/Python libraries and verify kernel functionality.

 

PREFERRED EXPERIENCE:

  • Solid knowledge of AI and ML concepts and techniques. Practical experience applying these concepts to solve real-world problems in the context of research or work experience.
  • Understanding of how different compute, memory, and communication configurations, as well as hardware and software implementation choices, affect AI acceleration performance.
  • Experience developing and optimizing code for VLIW processors, and analyzing code for high-performance CONV, GEMM, and non-linear operators.
  • Deep understanding of AI frameworks, preferably ONNX.
  • Experience with AI/ML inference stacks such as ONNX Runtime.
  • Proficiency in pre- and post-silicon performance analysis of ML models for edge- and cloud-based platforms.
  • Proficiency in C++-based kernel development for distributed processors.
  • Excellent C/C++ coding skills.
  • Experience in processor performance and memory performance characterization.
  • Experience with system debug tools such as Lauterbach, gdb, and Valgrind is a plus.
  • Experience with TensorFlow, PyTorch, or Keras is a plus.
  • Experience with static and dynamic power characterization is a plus.
  • Familiarity with VLIW SIMD vector processor architectures.

 

ACADEMIC CREDENTIALS:

  • BS or MS with relevant industry experience, or
  • PhD in Electrical Engineering or Computer Engineering

 

Location:

San Jose, CA

 

#LI-RF1 

 

#LI-HYBRID



At AMD, your base pay is one part of your total rewards package.  Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for incentives based upon your role such as either an annual bonus or sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD's Employee Stock Purchase Plan. You'll also be eligible for competitive benefits described in more detail here.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

