Samsung just threw down the gauntlet in the AI memory wars. At NVIDIA GTC 2026, the company unveiled its next-generation HBM4E memory, delivering 16Gbps per pin, while announcing that it's already mass-producing sixth-generation HBM4 for NVIDIA's Vera Rubin platform. The move positions Samsung as the only player offering end-to-end AI solutions, from memory to advanced packaging, and directly challenges rivals in a red-hot AI infrastructure market where every nanosecond matters.
Samsung is making its boldest play yet for AI infrastructure supremacy. The Korean giant used NVIDIA GTC 2026 in San Jose to showcase not one but two generations of high-bandwidth memory, and to reveal a partnership with NVIDIA that spans everything from massive data centers to the smartphone in your pocket.
The headline grabber is HBM4E, Samsung's next-generation memory that pushes bandwidth to 16 gigabits per second per pin and delivers 4.0 terabytes per second of total bandwidth. That's the kind of throughput AI accelerators need to stream training data without memory bottlenecks. According to Samsung's announcement, this marks the first public display of a technology that could define the next wave of AI accelerators.
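That 4.0TB/s figure squares with the per-pin speed if you assume the 2,048-bit interface the JEDEC HBM4 standard defines per stack; Samsung's announcement doesn't spell out the bus width, so treat this as a back-of-the-envelope sketch rather than a confirmed spec:

```python
# Back-of-the-envelope check of the HBM4E bandwidth claim.
# ASSUMPTION: 2,048 data pins per stack, per the JEDEC HBM4 standard;
# Samsung's announcement doesn't state the bus width.

PINS_PER_STACK = 2048      # data pins (bits) per HBM4E stack, assumed
PIN_SPEED_GBPS = 16        # Samsung's quoted per-pin speed, Gbit/s

total_gbps = PINS_PER_STACK * PIN_SPEED_GBPS  # 32,768 Gbit/s per stack
total_tbps = total_gbps / 8 / 1000            # bits -> bytes -> TB/s

print(f"{total_tbps:.3f} TB/s per stack")     # -> 4.096 TB/s
```

The arithmetic lands at 4.096TB/s, which presumably gets rounded down to the 4.0TB/s headline figure.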
But here's what caught the industry off guard: Samsung is already shipping production volumes of HBM4, the sixth-generation memory designed specifically for NVIDIA's Vera Rubin platform. While competitors scramble to validate their designs, Samsung is manufacturing HBM4 at scale with consistent 11.7Gbps speeds, nearly 50% faster than the 8Gbps industry baseline. The company says it can even push that to 13Gbps for customers willing to pay a premium.
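Those percentages check out. A quick sketch using only the speeds Samsung quoted (the variable names here are just illustrative):

```python
# Sanity-check the "nearly 50% faster" claim using Samsung's quoted speeds.
BASELINE_GBPS = 8.0   # industry-baseline per-pin speed cited above
VOLUME_GBPS = 11.7    # Samsung's standard production speed
PREMIUM_GBPS = 13.0   # the higher-speed bin for premium customers

print(f"{VOLUME_GBPS / BASELINE_GBPS - 1:.1%} faster")   # -> 46.2% faster
print(f"{PREMIUM_GBPS / BASELINE_GBPS - 1:.1%} faster")  # -> 62.5% faster
```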
The secret sauce? Samsung is leveraging its most advanced 1c DRAM process, a sixth-generation 10-nanometer-class technology that squeezes more performance out of every silicon wafer while maintaining stable yields. That manufacturing edge matters when you're stacking memory dies 12 layers high and trying to keep everything cool enough to function.