Samsung has announced the latest version of its HBM2 (High Bandwidth Memory) with a sharp increase in capacity and overall performance. It’s likely a response, at least in part, to the advent of GDDR6 — the gap between the two memory standards has shrunk significantly.
Back when HBM debuted, the gap between it and GDDR5 was significant. HBM-equipped GPUs offered far more memory bandwidth at lower power consumption. But unlike typical RAM cycles, in which a new technology debuts on a limited number of cards and then waterfalls into the larger market, HBM and HBM2 have both remained at the very top of the stack. AMD used HBM2 for its Vega 56, Vega 64, and Radeon VII consumer cards, but Nvidia has opted to rely on GDDR6, which offers its own density improvements.
According to Samsung, Flashbolt — that’s the latest HBM version — will open a gap once again between HBM and GDDR. The new HBM2E standard supports signal rates of up to 3.2Gbps per pin, a substantial increase over last year’s Aquabolt and its 2.4Gbps signaling rate. Additionally, capacity has increased to 16Gb per die, double the density of the previous generation. In total, Samsung can now offer 410GB/s of memory bandwidth and 16GB of capacity per stack. A Radeon VII equipped with four stacks of this memory would pack 1.64TB/s of bandwidth and 64GB of onboard RAM.
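The arithmetic behind those figures is straightforward. As a sketch — assuming the standard HBM2-class 1024-bit interface per stack and an 8-Hi (eight-die) stack, neither of which Samsung's announcement spells out — the per-stack and per-card numbers work out like this:

```python
# Sketch of the math behind Samsung's Flashbolt figures.
# Assumptions (not stated in the article): a 1024-bit interface per
# stack and 8 dies per stack, both standard for HBM2-class memory.

PIN_RATE_GBPS = 3.2      # Flashbolt signal rate per pin (Gbit/s)
PINS_PER_STACK = 1024    # HBM2 interface width per stack (bits)
DIE_DENSITY_GBIT = 16    # per-die density (Gbit)
DIES_PER_STACK = 8       # assumed 8-Hi stack

# Divide by 8 to convert bits to bytes.
stack_bandwidth_gbs = PIN_RATE_GBPS * PINS_PER_STACK / 8
stack_capacity_gb = DIE_DENSITY_GBIT * DIES_PER_STACK / 8

print(stack_bandwidth_gbs)  # 409.6 -> the quoted ~410GB/s per stack
print(stack_capacity_gb)    # 16.0  -> 16GB per stack

# Radeon VII carries four HBM2 stacks:
stacks = 4
print(stack_bandwidth_gbs * stacks / 1000)  # 1.6384 -> ~1.64TB/s
print(stack_capacity_gb * stacks)           # 64.0   -> 64GB
```

Note that 410GB/s is the rounded-up figure Samsung quotes; the raw product of pin rate and bus width is 409.6GB/s.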
“Flashbolt’s industry-leading performance will enable enhanced solutions for next-generation data centers, artificial intelligence, machine learning, and graphics applications,” said Jinman Han, senior vice president of Memory Product Planning and Application Engineering Team at Samsung Electronics. “We will continue to expand our premium DRAM offering, and improve our ‘high-performance, high capacity, and low power’ memory segment to meet market demand.”
Should We Ever Expect to See HBM in Mainstream Cards?
Up until now, HBM has been confined to the upper end of the PC market. In theory, it could come farther down the stack with 7nm, but I don’t think we’re going to see HBM in mainstream consumer cards in the 7nm generation. HBM-based cards are trickier to manufacture because routing thousands of wires through multiple layers of DRAM is a non-trivial process. Even AMD acknowledged that HBM only gave it an economic and power-consumption advantage at a specific point in its own board stack. Below a certain RAM loadout and TDP, GDDR5 made more sense, even for Team Red.
With GDDR6 having replaced GDDR5, I suspect HBM will be confined back to the very top of the market — and that AMD may have opted to use it as a method of differentiating its 7nm professional and consumer product lines, much like Nvidia has. I say “may have” because those strategy decisions have already been made, even if we don’t know what they are yet.
Samsung is unlikely to have any problems either way. HBM2 has been of great interest to AI and ML companies as an alternative, high-bandwidth memory architecture for emerging compute applications. Regardless of what happens with GPUs, the memory standard should have a bright future ahead of it.