Sandisk and SK hynix have announced a landmark partnership that could reshape the future of memory for artificial intelligence. The companies have signed a memorandum of understanding aimed at standardizing High Bandwidth Flash (HBF), a forward-looking NAND flash-based technology built to mimic the high-speed performance of conventional DRAM-based HBM stacks—while delivering as much as 8 to 16 times the capacity at comparable cost.
The partnership centers on creating a new class of memory designed to meet the escalating demands of AI inference, especially as models grow larger and increasingly push up against the limits of existing hardware.
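To put those limits in rough numbers (the model sizes and precisions below are illustrative examples, not figures from the announcement), the weights of a 405-billion-parameter model stored at 16-bit precision already exceed the roughly 192GB of HBM found on today's largest accelerators, before accounting for activations or KV caches:

```python
# Back-of-the-envelope weight footprint for large models (illustrative only;
# excludes activations and KV cache). bytes_per_param: 2 for FP16/BF16, 1 for FP8.
def weight_footprint_gb(params_billion: float, bytes_per_param: int) -> float:
    return params_billion * bytes_per_param  # billions of params x bytes/param = GB

for params, bpp in [(70, 2), (405, 2), (1000, 1)]:
    print(f"{params}B params @ {bpp} B/param -> ~{weight_footprint_gb(params, bpp):.0f} GB of weights")
```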
Unlike DRAM, which loses data when power is cut and demands constant energy to operate, NAND flash is non-volatile. This persistent storage approach helps curb the power and cooling requirements that are straining data centers and edge deployments, a key concern as AI adoption spreads.
Sandisk’s initial HBF prototypes leverage its proprietary BiCS NAND stacking and CBA wafer bonding technologies, showcased recently at the Flash Memory Summit 2025—where the company picked up the “Most Innovative Technology” award.
The intent is to establish HBF as a cross-industry standard rather than a closed, proprietary solution. To that end, Sandisk has convened a Technical Advisory Board comprising figures from both inside and outside the company, including RISC pioneer David Patterson and veteran GPU architect Raja Koduri, to help shape the ecosystem and guide technical strategy.
What sets HBF apart is its ability to offer DRAM-like bandwidth with flash-grade capacity. In practical terms, this means future AI accelerators could support several terabytes of memory—without the escalating financial and thermal costs associated with current high-bandwidth DRAM.
For comparison, next-generation GPUs that pair HBM with HBF could scale total memory from 192GB (a typical upper end for HBM today) to roughly 3TB, with an all-HBF configuration reaching around 4TB, dramatically broadening the capabilities of AI inference hardware.
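To make that scaling concrete, here is a minimal back-of-the-envelope sketch; the per-stack capacities (24GB for an HBM3e stack, 512GB for a hypothetical HBF stack) and the 2+6 mixed split are assumptions chosen for illustration, not confirmed specifications:

```python
# Illustrative capacity math for an accelerator with 8 memory stacks.
# Per-stack figures and the mixed split are assumptions, not vendor specs.
STACKS = 8
HBM_GB_PER_STACK = 24    # assumed HBM3e stack capacity
HBF_GB_PER_STACK = 512   # assumed HBF stack capacity

hbm_only = STACKS * HBM_GB_PER_STACK                      # 192 GB
mixed    = 2 * HBM_GB_PER_STACK + 6 * HBF_GB_PER_STACK    # 3120 GB (~3 TB)
hbf_only = STACKS * HBF_GB_PER_STACK                      # 4096 GB (~4 TB)

print(f"All-HBM: {hbm_only} GB | mixed 2 HBM + 6 HBF: {mixed} GB | all-HBF: {hbf_only} GB")
```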
Industry adoption will be gradual: the first HBF modules are expected to sample in the second half of 2026, with the first AI inference devices using the technology anticipated in early 2027.
The collaboration is especially significant as cutting-edge AI chipmakers like Nvidia and hyperscalers seek more affordable, lower-energy solutions amid skyrocketing demand—and as competitors like Samsung also invest in hybrid memory and flash-backed storage for upcoming hardware generations.
The Sandisk and SK hynix effort is more than a technical milestone; it signals that the memory ecosystem is evolving toward heterogeneous stacks that blend the best of DRAM, flash, and, before long, next-generation persistent memories.