High Bandwidth Memory: HBM3 is twice as fast as HBM2e, up to 819 GB/s per stack

mongeese

In a nutshell: JEDEC has announced the HBM3 standard. And, like any good revision to a memory standard, it features a minor decrease in voltage, a slew of added conveniences, and a doubling of all the performance-related specifications. Bandwidth? Doubled. Layers? Doubled. Capacity? Doubled.

In numbers, an HBM3 stack can reach 819 GB/s of bandwidth and have 64 GB of capacity. In comparison, the HBM2e stacks used by the AMD MI250 have half the bandwidth, 410 GB/s, and a quarter of the capacity, a mere 16 GB.

At eight stacks, the MI250 has a total of 128 GB and 3277 GB/s of bandwidth. Eight stacks of HBM3 would have 512 GB with 6552 GB/s of bandwidth.
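
Those aggregate figures are simply the per-stack numbers multiplied by the stack count. A quick back-of-the-envelope sketch using the per-stack values quoted in this article (the small differences from the totals above are just rounding of the per-stack rates):

```python
# Back-of-the-envelope totals for an eight-stack configuration,
# using the per-stack figures quoted in the article.
stacks = 8

hbm2e = {"bandwidth_gbs": 410, "capacity_gb": 16}   # per stack, as configured on the MI250
hbm3  = {"bandwidth_gbs": 819, "capacity_gb": 64}   # per stack, HBM3 maximum

for name, spec in (("HBM2e", hbm2e), ("HBM3", hbm3)):
    total_bw = stacks * spec["bandwidth_gbs"]
    total_cap = stacks * spec["capacity_gb"]
    print(f"{name}: {total_bw} GB/s, {total_cap} GB across {stacks} stacks")

# HBM2e: 3280 GB/s, 128 GB across 8 stacks
# HBM3:  6552 GB/s, 512 GB across 8 stacks
```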

                        HBM3            HBM2e           HBM2            HBM2
Specification           JESD238         JESD235C        JESD235B        JESD235A
Bandwidth (per stack)   819 GB/s        410 GB/s        307 GB/s        256 GB/s
Die (per stack)         16 - 4 layers   12 - 2 layers   8 - 2 layers
Capacity (per die)      4 GB            2 GB            1 GB
Capacity (per stack)    64 GB           24 GB           8 GB
Voltage                 1.1 V           1.2 V

HBM3 also doubles the number of independent channels, from eight to 16. And it’s introducing "pseudo-channels" that allow it to support up to 32 virtual channels.
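
For those curious where the per-stack figure comes from, the headline bandwidth is just the interface width multiplied by the per-pin data rate. A minimal sketch; the 6.4 Gb/s per-pin rate and 64-bit channel width are the commonly cited HBM3 figures, not numbers stated in this article:

```python
# Rough derivation of the headline per-stack bandwidth.
# Assumed figures: 64-bit channels and a 6.4 Gb/s per-pin rate
# (commonly cited for HBM3, not quoted in the article itself).
channels = 16            # independent channels per stack (per the article)
bits_per_channel = 64    # assumed channel width
pin_rate_gbps = 6.4      # assumed per-pin data rate, Gb/s

interface_width = channels * bits_per_channel          # 1024 bits
bandwidth_gbs = interface_width * pin_rate_gbps / 8    # 819.2 GB/s per stack
pseudo_channels = channels * 2                         # 32, as noted above

print(interface_width, bandwidth_gbs, pseudo_channels)
```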

According to JEDEC, HBM3 additionally addresses the "market need for high platform-level RAS (reliability, availability, serviceability)" with "strong, symbol-based ECC on-die, as well as real-time error reporting and transparency."

JEDEC expects the first generation of HBM3 products to appear on the market soon but notes that they won’t meet the maximum specification. A more realistic outlook, it says, would be 2 GB dies in 12-layer stacks.
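
In per-stack terms, that early configuration is just die capacity times layer count; a small worked example under JEDEC's stated outlook:

```python
# Per-stack capacity = capacity per die x number of stacked dies.
first_gen = 2 * 12    # 2 GB dies in 12-layer stacks -> 24 GB per stack
full_spec = 4 * 16    # 4 GB dies in 16-layer stacks -> 64 GB per stack (the maximum)
print(first_gen, full_spec)
```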

Image credit: Stephen Shankland


 
This made me remember AMD's Fiji cards: It honestly sounded like a nice idea at the time but AMD themselves just want to forget those ever happened even when it comes to driver support.
 
Would this not be the target market for these products?
Well yes, but there are still many who are sore over the failure of AMD's HBM products. For all the promise, they ultimately flopped.
This made me remember AMD's Fiji cards: It honestly sounded like a nice idea at the time but AMD themselves just want to forget those ever happened even when it comes to driver support.
AMD dropped Fiji because it's hamstrung by its 4GB VRAM limit. There's no point in optimizing for a card with almost no users left and unable to fit modern games above low settings.

HBM held great promise but AMD was never able to make a GPU that justified it. The RTX 3090 would benefit far more given the high power consumption of GDDR6x.
 
That's a stupid crazy number.
Well yes, but there are still many who are sore over the failure of AMD's HBM products. For all the promise, they ultimately flopped.
AMD dropped Fiji because it's hamstrung by its 4GB VRAM limit. There's no point in optimizing for a card with almost no users left and unable to fit modern games above low settings.

HBM held great promise but AMD was never able to make a GPU that justified it. The RTX 3090 would benefit far more given the high power consumption of GDDR6x.
I would argue that the AMD graphics cards that had it only scored as high as they did because of it. I won't argue that their GPUs were weak, but the boost that HBM2 gave definitely contributed to their performance. It was more of a "brute force" performance increase. I'd love to have seen what other GPUs could have done with it, but it was incredibly expensive at the time. IIRC, nVidia's Quadro GPUs used it.
 
Well yes, but there are still many who are sore over the failure of AMD's HBM products. For all the promise, they ultimately flopped.
AMD dropped Fiji because it's hamstrung by its 4GB VRAM limit. There's no point in optimizing for a card with almost no users left and unable to fit modern games above low settings.

HBM held great promise but AMD was never able to make a GPU that justified it. The RTX 3090 would benefit far more given the high power consumption of GDDR6x.
The R9 Fury performs better than a 4GB RX 580, very close to an RX 590. So that means at least medium settings at 1080p to this day, except in a handful of games. Many games can still run at high quality or better.
 
That's a stupid crazy number.
I would argue that the AMD graphics cards that had it only scored as high as they did because of it. I won't argue that their GPUs were weak, but the boost that HBM2 gave definitely contributed to their performance. It was more of a "brute force" performance increase. I'd love to have seen what other GPUs could have done with it, but it was incredibly expensive at the time. IIRC, nVidia's Quadro GPUs used it.
I disagree that it helped at all vs. using GDDR, as the R9 Fury was 1070 level, and the Vega 64 was 1080 level. Both the 1070 and 1080 made do with regular GDDR5 without issue.
The R9 Fury performs better than a 4GB RX 580, very close to an RX 590. So that means at least medium settings at 1080p to this day, except in a handful of games. Many games can still run at high quality or better.
The RX 6500, a GPU that is slower than the 570, is handicapped by its 4GB VRAM limit. The 480, when reviewed, was slightly handicapped by its 4GB limit, and that issue has only gotten worse over the years.

And yet the R9 Fury, a GPU that performs better than a 580, ISN'T held back by its 4GB VRAM cap? Gonna press "x" to doubt on that. The Fury X is handicapped.

It's really not hard to see why the Fury X, with performance on par with a 6-year-old $200 GPU, a tiny 4GB framebuffer, and a 0.01% market share on Steam when combined with its Fury cousin, was dropped by AMD. It's not a viable GPU anymore, and nobody is using it. The R9 Fury, R9 Fury X, the entire 7000 series, the entire 200 series, and the entire 300 series COMBINED had fewer users on Steam than the GTX 970, a partially disabled budget GPU that was older than the Fury.
 
What about the price? If it's expensive then it's not worth it. And how does it compare in performance against GDDR6X and later GDDR memory?
 
What about the price? If it's expensive then it's not worth it. And how does it compare in performance against GDDR6X and later GDDR memory?
It requires a very wide memory bus, which is supposed to be expensive. Supposed to be, because some new packaging technologies might change that. Still, it probably goes for datacenter use first...
 
I'm excited to see this implemented as a last-level cache on an Epyc or Xeon. Perhaps this will be on next-generation servers and maybe even the Threadripper Pro 🤔
 
The RX 6500, a GPU that is slower than the 570, is handicapped by its 4GB VRAM limit. The 480, when reviewed, was slightly handicapped by its 4GB limit, and that issue has only gotten worse over the years.

And yet the R9 Fury, a GPU that performs better than a 580, ISN'T held back by its 4GB VRAM cap? Gonna press "x" to doubt on that. The Fury X is handicapped.

It's really not hard to see why the Fury X, with performance on par with a 6-year-old $200 GPU, a tiny 4GB framebuffer, and a 0.01% market share on Steam when combined with its Fury cousin, was dropped by AMD. It's not a viable GPU anymore, and nobody is using it. The R9 Fury, R9 Fury X, the entire 7000 series, the entire 200 series, and the entire 300 series COMBINED had fewer users on Steam than the GTX 970, a partially disabled budget GPU that was older than the Fury.
I never said that the Fury isn't limited by its 4GB VRAM buffer. I said that it can run games at more than just low settings despite its 4GB VRAM buffer. And that counts only for 1080p. Higher resolutions will tank performance, not because of the GPU limitations, but because of the VRAM limitations.
 
It had great driver support though? Started out a bit underwhelming performance-wise but over the years through driver updates it just kept getting better. Loved mine... until it died and I had to replace it with a GTX 1060 - a massive downgrade, but it was the height of the previous mining bubble and the best I could afford.
 
It had great driver support though? Started out a bit underwhelming performance-wise but over the years through driver updates it just kept getting better. Loved mine... until it died and I had to replace it with a GTX 1060 - a massive downgrade, but it was the height of the previous mining bubble and the best I could afford.
Not that, I'm referring to the fact that AMD discontinued driver support as of 6 months ago while those cards are still otherwise extremely viable in terms of performance to this day.
 
Not that, I'm referring to the fact that AMD discontinued driver support as of 6 months ago while those cards are still otherwise extremely viable in terms of performance to this day.
Ah Indeed, that was unexpected. A bit short on memory but still a very solid card. Thought it was an odd choice as well to stop supporting cards right when everyone is trying to squeeze their old cards for all they're worth because a replacement is mad expensive.
 
Ah Indeed, that was unexpected. A bit short on memory but still a very solid card. Thought it was an odd choice as well to stop supporting cards right when everyone is trying to squeeze their old cards for all they're worth because a replacement is mad expensive.
Yes, under normal circumstances I could understand if they wanted to dedicate more engineers to driver support to optimize for newer products, but it's not like we are going to go "Yeah, the 6600 is 200 to 300 bucks cheaper than a 3060 and performs the same but drivers are kind of buggy, let me make it rain for a scalper instead and go for that 3060".
 
Driver support?

Isn't it up to Nvidia/AMD or Intel to make sure their video card works whether it uses GDDR or HBM? It is not something a game developer should care about.

Since HBM is used in datacentre cards, the downside compared to GDDR must be price and maybe availability? Why else would they not use it on the top gaming cards?
 
Video cards aren't potent enough to take advantage of this. I'm not even sure if they're potent enough to take advantage of HBM(1) yet. I have two R9 Furies and they both have 4GB of HBM. Given the choice, I'd have taken 8GB of GDDR5 over 4GB of HBM. Sure, it can do more things than 4GB of GDDR5 but not much more.
 
This made me remember AMD's Fiji cards: It honestly sounded like a nice idea at the time but AMD themselves just want to forget those ever happened even when it comes to driver support.
Yeah, it was just a gimmick. I'd be happier if my Furies had even 6GB of GDDR5 instead of 4GB of HBM. The GPUs themselves aren't potent enough to properly use HBM, not for gaming anyway. The Radeon VII used HBM2 well enough for workstation tasks but again, for gaming, it offered no added benefit despite costing an arm and a leg. As for drivers though, the drivers for the R9 Fury series are rock-solid.
Well yes, but there are still many who are sore over the failure of AMD's HBM products. For all the promise, they ultimately flopped.
AMD dropped Fiji because it's hamstrung by its 4GB VRAM limit. There's no point in optimizing for a card with almost no users left and unable to fit modern games above low settings.
It's true. My Furies are severely hamstrung by the small 4GB frame buffer. Fiji is potent enough to play Far Cry 6 at medium settings but the 4GB isn't enough to do more than low settings. It sucks.
Would this not be the target market for these products?
It sure should be.
And yet the R9 Fury, a GPU that performs better than a 580, ISN'T held back by its 4GB VRAM cap? Gonna press "x" to doubt on that. The Fury X is handicapped.
It's true. I have two Furies and the 4GB is a SERIOUS handicap. It's one of the reasons that I pity the fools who bought the RTX 3080. That 10GB frame buffer is disproportionately small when compared to the GPU's performance level. ATi did it right by putting 16GB of GDDR6 on the RX 6800 XT because that means the VRAM buffer won't ever be the limiting factor on that card. When one considers the power of Big Navi combined with the 16GB VRAM buffer and AMD fine-wine, that card will outlive the RTX 3080 by at least two years.

It's really not hard to see why the Fury X, with performance on par with a 6-year-old $200 GPU, a tiny 4GB framebuffer, and a 0.01% market share on Steam when combined with its Fury cousin, was dropped by AMD. It's not a viable GPU anymore, and nobody is using it.
Ummm... I'm using an R9 Fury right now because I threw my 5700 XT and 6800 XT into a mining rig to try to offset the stupidly high cost of the 6800 XT. The R9 Fury still works well for most games at 1080p. Think of how many people are still using the GTX 1060 and then consider that the R9 Fury is just a smidge weaker than the GTX 1070. Greg Salazar made an interesting video about it here:
The R9 Fury, R9 Fury X, the entire 7000 series, the entire 200 series, and the entire 300 series COMBINED had fewer users on Steam than the GTX 970, a partially disabled budget GPU that was older than the Fury.
That's more because so many people bought the almost-as-potent GTX 1060 during the mining craze of 2017. The R9 Fury is VERY power-hungry and that was what the reviewers latched on to in order to give AMD a bad review. Then suddenly when nVidia's cards were the power-hungry ones, it didn't seem to matter so much to them anymore.
What about the price? If it's expensive then it's not worth it. And how does it compare in performance against GDDR6X and later GDDR memory?
It's VERY expensive and totally not worth it for gaming. Data centre and workstation tasks are another story however (as the Radeon VII demonstrated).
It had great driver support though? Started out a bit underwhelming performance-wise but over the years through driver updates it just kept getting better. Loved mine... until it died and I had to replace it with a GTX 1060 - a massive downgrade, but it was the height of the previous mining bubble and the best I could afford.
That's a shame because in the middle of the 2017 mining crisis, there suddenly appeared the Sapphire R9 Fury Nitro OC+ Edition for like $350 (half the price of the GTX 1060 and RX 580). I'm guessing the price was low because the power use was so high as to make it irrelevant for gaming. That's when I got my Fury because they had just stopped supporting Crossfire and so my twin HD 7970s were reduced to one. That was a serious hit. Then I got a refurbished one for $100 about a year later (still works to this day) and I've been glad to have them as the R9 Fury made for a very viable backup card for when I had to send my RX 5700 XT to XFX for RMA... TWICE! Check the video further up in my post to see what happened. Greg Salazar did a good piece on it.
Not that, I'm referring to the fact that AMD discontinued driver support as of 6 months ago while those cards are still otherwise extremely viable in terms of performance to this day.
Sure, they did end official support but the latest Adrenalin drivers do still work on them (even if they say that they're incompatible). I've been gaming with the same driver set that I had for my 6800 XT. Someone forgot to tell my R9 Fury about the incompatibility. :laughing:
 
This made me remember AMD's Fiji cards: It honestly sounded like a nice idea at the time but AMD themselves just want to forget those ever happened even when it comes to driver support.
If HBM prices had come down as planned we could have seen consumer cards equipped with more than 4GB, which would have been great, but unfortunately this tech has been relegated to pro/server cards.

Unfortunately, Fiji was forced to remain priced high even a year after it launched since HBM prices were atrocious.
 