Samsung Electronics and AMD just inked a big strategic deal to ramp up their work together on next-level AI memory tech. It’s all aimed at powering tomorrow’s accelerators and server chips as the demand for AI infrastructure keeps surging.

Announced on March 18, 2026, at Samsung's high-tech Pyeongtaek plant in South Korea, the deal positions Samsung as the go-to supplier of its new HBM4 high-bandwidth memory for AMD's Instinct MI455X AI GPUs.
The agreement also covers custom DDR5 DRAM tailored for AMD's sixth-gen EPYC processors, codenamed "Venice," as well as the Helios rack-scale systems.

Deeper Ties in the AI Supply Chain
AMD CEO Dr. Lisa Su and Samsung’s Vice Chairman Young Hyun Jun were both there for the signing, and it really drives home how they’re teaming up to build complete AI computing setups, from the silicon all the way to full racks.
Su put it well: “Samsung’s edge in HBM4, advanced DRAM, foundry tech, and packaging makes them a crucial partner on our AI path.” She stressed how vital tight collaboration is to make AI actually deliver in the real world.
Jun chimed in too, praising Samsung's "one-of-a-kind end-to-end strengths" to meet AMD's growing demands in memory design and more. This isn't starting from scratch; it's building on their past work together, and it helps AMD spread out its suppliers to handle massive AI demand from big players like Meta and OpenAI.

Foundry Partnership on the Horizon
One of the most exciting parts is that they’re kicking off talks about Samsung acting as a foundry for AMD’s future chips.
That would cut some dependence on TSMC and make AMD's manufacturing tougher to disrupt. With the AI chip battle heating up, Samsung's Pyeongtaek site, with its state-of-the-art process nodes, could shake things up big time.

The New AI Chip Race
This comes at a perfect time. HBM4 is primed to speed up AI training and inference on AMD’s Instinct GPUs, making them more efficient. Team it with those EPYC “Venice” CPUs, and you’ve got the recipe for packed, energy-smart data centers through the Helios platform.
For everyone in semiconductors, this screams faster innovation in memory to keep feeding the generative AI boom. It could put real pressure on competitors like Nvidia and unlock even more ways for AMD and Samsung to team up on packaging and full systems.