WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
AMD's AI software stack is moving fast — and keeping pace means shipping complete, validated GPU stack releases to customers as quickly as the software can evolve. Getting there requires validating not just ROCm, but the full recipe: firmware, kernel driver, and ROCm together, across multiple GPU products, with confidence that what ships to customers actually works as a coherent system.
We're building the CI infrastructure that makes that possible — automated pre-submit validation, nightly integration builds across the full stack, and a Last Known Good (LKG) manifest that gives every engineer a trusted baseline to build from. The goal is a release pipeline where validated, customer-ready GPU stack recipes can be produced on demand rather than assembled manually.
You'd own that CI system end-to-end: provisioning the runners, integrating the hardware test pipeline, coordinating with IT on real infrastructure constraints, and shipping the automation that enables the whole thing. It's high-ownership work with direct impact on AMD's ability to move fast for customers.
What You'll Do:
- Get nightly CI running, fast. The first priority is standing up nightly integration builds for the unified GPU stack. You'll own that end-to-end: the pipeline, the scheduling, the result reporting, and the Last Known Good (LKG) manifest promotion that gives every engineer a trusted baseline.
- Solve the runner provisioning problem — standard cloud runners can't build firmware. You'll work directly with IT to provision GitHub Actions self-hosted runners that handle the real constraints: NFS mounts for host-side tools, code-signing pipelines, network access, and permissions that firmware builds require. This is the kind of infrastructure work that requires both technical depth and the ability to get things done across organizational boundaries.
- Build toward AWS-aligned infrastructure — the broader GPU stack CI is moving toward AWS-hosted runners. You'll make sure UnderTheRock's infrastructure is consistent with that direction from the start, rather than creating something that has to be rebuilt later.
- Own the CI, not just contribute to it — nobody else on the team is currently focused on CI. You'll be setting the direction, making the tooling choices, and shipping the automation that everything else depends on.
Required:
- 8+ years of software engineering or infrastructure engineering experience
- Strong coding ability — you'll be writing automation, not clicking through UIs
- Deep knowledge of CI/CD pipeline design and GitHub Actions (or comparable platform)
- Experience provisioning and maintaining self-hosted runners or build infrastructure at scale
- Comfort navigating complex infrastructure environments — network permissions, NFS mounts, firewall rules, signing pipelines
- Strong problem-solving and communication skills across engineering and IT stakeholders
Strong Plus:
- Fluency with agentic AI workflows (Cursor, Claude, Copilot, etc.) as a force multiplier for engineering throughput
- Experience setting up CI infrastructure on AWS (EC2-based runners, IAM, networking)
- Familiarity with firmware signing pipelines and firmware release processes — understanding how signing fits into a CI workflow is a meaningful advantage given the constraints of this environment
- Familiarity with firmware or kernel build environments and their infrastructure constraints
- Experience integrating CI systems with hardware-in-the-loop testing
What This Role Offers:
- Sharpen your agentic AI engineering skills — standing up CI infrastructure across a complex, multi-layer stack means writing a lot of automation fast: runner provisioning scripts, workflow templates, integration glue, monitoring. This is exactly the kind of work where engineers who pair well with AI coding agents move dramatically faster than those who don't. You'll be using these tools daily in a production-grade, high-stakes environment — and building a depth of experience in AI-assisted engineering that's hard to get anywhere else.
- High ownership — you're building the foundation, not maintaining someone else's system
- Customer-facing impact — the CI pipeline you ship is what enables AMD to produce validated, complete GPU stack releases faster for customers
- Technically interesting constraints — firmware CI has real infrastructure challenges (signing, NFS, hardware-in-the-loop) that generic CI work doesn't
- Open-source aligned — this work follows the same shift-left, trunk-health principles driving AMD's ROCm open-source direction
#LI-G11
#LI-HYBRID
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's “Responsible AI Policy” is available here.
This posting is for an existing vacancy.