Description
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
This role focuses on the performance analysis and optimization of production-grade AI services, particularly within the AMD Inference Microservice (AIM) ecosystem. You will be part of a diverse and ambitious team responsible for ensuring reliable performance of AI microservices across diverse hardware configurations. You will work with state-of-the-art AI tooling and models on cutting-edge AI infrastructure. This role requires both a deep understanding of LLMs and hands-on knowledge of AI tooling such as inference servers.
KEY RESPONSIBILITIES:
LLM and AI Performance:
- Measure, analyze, and optimize LLM and AI service performance across metrics such as latency and throughput for various training and inference use cases (a minimal measurement sketch follows this list).
- Design and implement methodologies for measuring model performance and automating optimization strategies to identify optimal configurations.
- Stay on top of current advances in AI, models, APIs, and open-source ecosystems, and translate them into scalable solutions.
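For illustration only, here is a minimal sketch of the kind of measurement this involves, assuming an OpenAI-compatible completions endpoint (as exposed by servers like vLLM or SGLang). The endpoint URL and model name are placeholders, not part of this posting:

```python
"""Minimal latency/throughput probe for an OpenAI-compatible endpoint."""
import time
import requests

ENDPOINT = "http://localhost:8000/v1/completions"   # assumed local server, vLLM's default port
MODEL = "meta-llama/Llama-3.1-8B-Instruct"          # hypothetical model id; replace as needed

def measure_once(prompt: str, max_tokens: int = 128) -> dict:
    """Send one completion request and report end-to-end latency and token throughput."""
    payload = {"model": MODEL, "prompt": prompt, "max_tokens": max_tokens}
    start = time.perf_counter()
    resp = requests.post(ENDPOINT, json=payload, timeout=120)
    resp.raise_for_status()
    elapsed = time.perf_counter() - start
    # OpenAI-style responses include a usage block with generated-token counts.
    completion_tokens = resp.json()["usage"]["completion_tokens"]
    return {"latency_s": elapsed, "throughput_tok_s": completion_tokens / elapsed}

if __name__ == "__main__":
    for _ in range(5):
        print(measure_once("Explain KV caching in one paragraph."))
```

A production harness would add warmup runs, concurrent request streams, and percentile statistics; this sketch shows only the core measurement loop.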
LLM and AI Tooling:
- Design and develop tooling to measure and analyze the performance of AI model deployments and the effects of different configurations and infrastructure, on both standalone machines and Kubernetes clusters.
- Develop and maintain tooling for interacting with different ecosystem functions to improve developer and user experience.
- Develop and maintain internal tooling to support LLM and AI performance tuning at scale (a configuration-sweep sketch follows this list).
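As a hedged illustration of what tuning tooling at scale can look like, the sketch below sweeps two serving knobs and records results to CSV. Both the configuration axes and `run_benchmark` are invented for the example; a real implementation would deploy the service with each configuration and drive load against it:

```python
"""Sketch of a configuration-sweep harness for deployment benchmarking."""
import csv
import itertools

# Illustrative configuration axes; a real sweep would be driven by a config file.
TENSOR_PARALLEL = [1, 2, 4]
MAX_NUM_SEQS = [64, 128, 256]

def run_benchmark(tp: int, max_seqs: int) -> dict:
    """Placeholder: launch the deployment with these settings, drive load,
    and return measured metrics. Dummy zeros stand in for real measurements."""
    return {"tp": tp, "max_num_seqs": max_seqs,
            "p50_latency_s": 0.0, "throughput_tok_s": 0.0}

def main() -> None:
    fieldnames = ["tp", "max_num_seqs", "p50_latency_s", "throughput_tok_s"]
    with open("sweep_results.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        # Exhaustive grid over the two axes; larger spaces would call for
        # a search strategy rather than a full cross-product.
        for tp, max_seqs in itertools.product(TENSOR_PARALLEL, MAX_NUM_SEQS):
            writer.writerow(run_benchmark(tp, max_seqs))

if __name__ == "__main__":
    main()
```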
EXPERIENCE & KEY QUALITIES:
- Seasoned in deploying LLMs and other AI model types in production using frameworks like vLLM, SGLang, or similar tooling.
- Deep knowledge of LLM serving and performance metric evaluation.
- Comfortable with Python software development and Bash scripting.
- Experience with Docker, Kubernetes, and Helm.
- Desire and ability to continuously learn in a fast-changing environment.
- Initiative, pragmatic problem solving, and strong collaboration skills.
- Bachelor's or master's degree in computer science, computer engineering, electrical engineering, or an equivalent field.
NICE TO HAVE:
- Experience with multi-objective hyperparameter optimization (see the sketch after this list).
- Knowledge of GPU architecture, kernel development, and debugging (C/C++/CUDA).
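For context on the multi-objective point, here is a minimal sketch using Optuna, chosen purely for illustration since the posting names no library. It trades latency against throughput over two invented serving knobs and reports the Pareto-optimal trials:

```python
"""Illustrative multi-objective tuning sketch using Optuna (an assumption)."""
import optuna

def objective(trial: optuna.Trial) -> tuple[float, float]:
    # Hypothetical serving knobs; a real objective would deploy and benchmark.
    batch = trial.suggest_int("max_batch_size", 8, 256, log=True)
    chunk = trial.suggest_int("prefill_chunk", 256, 4096, log=True)
    # Dummy analytic stand-ins for measured latency and throughput.
    latency = 0.01 * batch + 1000.0 / chunk
    throughput = batch * chunk / (batch + chunk)
    return latency, throughput

# Two objectives: minimize latency, maximize throughput.
study = optuna.create_study(directions=["minimize", "maximize"])
study.optimize(objective, n_trials=50)
for t in study.best_trials:  # Pareto front of non-dominated trials
    print(t.params, t.values)
```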
LOCATIONS:
Finland or Sweden
#LI-MH3
#LI-HYBRID
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's “Responsible AI Policy” is available here.
This posting is for an existing vacancy.