AMD's GPUOpen initiative offers developers deeper access to GPUs

Shawn Knight


AMD has launched GPUOpen, an initiative that aims to equip developers with the tools and access they need to squeeze the most out of today's GPUs.

Nicolas Thibieroz, worldwide ISV gaming engineering manager at AMD, said GPUOpen is composed of two areas: Games & CGI, which focuses on game graphics and content creation, and Professional Compute, which targets high-performance GPU computing.

The first principle, Thibieroz said, is to provide code and documentation to developers so they can gain more control over the GPU. Thibieroz added that current and upcoming GCN architectures include many features that go unutilized by PC graphics APIs. GPUOpen aims to help developers come up with ways to leverage such features, with the payoff being increased performance and/or quality.

The initiative is also expected to make it easier for developers to port games from current-generation consoles to the PC.

Thibieroz said the second principle is a commitment to open source software with the third being a collaborative engagement with the developer community.

AMD is feeling pressure from both ends of the GPU market. Rival Nvidia reigns supreme in the high-end graphics market with Intel competing at the low end, not because their integrated solutions are better but because Intel processors outsell chips from AMD.

More information about GPUOpen can be found at GPUOpen.com.


 
All hell breaks loose as the full potential of multi-thousand-core GPUs is unleashed on the public.
Seriously though, this sounds good to me. Too bad I hear programmers have gone so far the other way, to the point of laziness even, that I doubt much will come of it. But I hope I'm wrong. And I usually am, so... I mean, my Powerball numbers weren't even close...
 
AMD can't even do proper single and multi-GPU drivers in a timely manner, so how do they think this will help anything or anyone? Only time will tell, but my hopes are real low. I mean, this is AMD we're talking about. And if devs want deeper access to the GPU, why are they using GameWorks libraries?

Mantle: Sold off, chopped up, now part of GPUOpen
TressFX: TressFX who?
HSA: Even the founders are keeping quiet
HBM: Beaten by GDDR5
 
Mantle is now used in Vulkan
TressFX works as intended
HSA works as intended
HBM works as intended, and with the latest Crimson drivers AMD improved performance by a lot (have you even seen the Nano vs 980 benchmarks?).
Besides making troll comments, do you have anything better to do?
 
HBM: Beaten by GDDR5
Puiu covered the rest pretty well, but I wanna focus on this.
In what world has GDDR5 ever beaten HBM? HBM on the Fury series provides 512 GB/s of bandwidth, and that's only its very first implementation ever. The fastest GDDR5 implementation so far has been on the 390 and 390X, where it provides 384 GB/s (the fastest Nvidia implementation was the 780 Ti and Titan Black, at 336.5 GB/s). That means the very first HBM implementation ever provides 33% more bandwidth than the fastest GDDR5 implementation ever, while at the same time having lower latency and consuming substantially less power.
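For what it's worth, those bandwidth figures are just per-pin data rate times bus width divided by 8 bits per byte. A minimal sketch of the arithmetic (the function name is my own):

```python
def bandwidth_gbps(rate_gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate x bus width / 8 bits per byte."""
    return rate_gbps_per_pin * bus_width_bits / 8

# Fury X (HBM1): 500 MHz clock, double data rate -> 1 Gbps per pin, 4096-bit bus
fury_x = bandwidth_gbps(1.0, 4096)   # 512.0 GB/s
# R9 390/390X (GDDR5): 6 Gbps effective per pin, 512-bit bus
r9_390x = bandwidth_gbps(6.0, 512)   # 384.0 GB/s

print(fury_x, r9_390x, fury_x / r9_390x - 1)  # 512.0 384.0 0.333... (33% more)
```

Which is exactly where the "33% more bandwidth" figure comes from.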
 

Um, the 980Ti is the better card. GDDR5 wins.
 
Um, the 980Ti is the better card. GDDR5 wins.
Yeah, it's almost like GPUs have other functional units as well besides the memory, like ROPs, TMUs, stream processors and tessellation engines, and those are what make the 980 Ti better (below 4K, of course, where the Fury X is better), rather than memory type.
1/10, ridiculously obvious attempt, but it gets one point because it made me reply.
But anyway, HBM offers higher bandwidth, lower latency and lower power consumption when compared to GDDR5. HBM wins on every metric.
 

Clearly you don't read reviews.
I loved the part where you mention other parts of the architecture, then go back to saying HBM is THE determining factor. Gee, didn't AMD have more VRAM, more processors and a bigger bus, and still lose to the 980 Ti? Fury X is great at 4K because 4K is mainstream now, right? And those framerates at 4K are super smooth at High and Ultra quality presets? And the Fury X is soooo energy efficient with HBM that it requires water cooling, and is nowhere near an "Overclockers Dream"?

Thanks for the laugh.
 
Clearly you don't read reviews.
Do you?
https://www.techpowerup.com/reviews/Sapphire/R9_390_Nitro/23.html
I loved the part where you mention other parts of the architecture, then go back to saying HBM is THE determining factor.
Are you insane? I said the exact opposite of that. The 980 Ti is faster (below 4K) because of the other things besides the memory.
Saying GDDR5 is better than HBM because the 980 Ti is better than the Fury X (below 4K) is like saying in 2008 that GDDR3 is better than GDDR5 because the GTX 280 is better than the HD 4870. HBM is undeniably better than GDDR5, the same way GDDR5 is undeniably better than GDDR3. The products they are deployed onto, on the other hand, depend on several other factors besides the memory to define its final performance, so it's completely absurd to say that one type of memory is better than another only because graphics card X is faster than graphics card Y.
Again, HBM has higher bandwidth, lower latency and lower power consumption than GDDR5. Therefore HBM is better than GDDR5. That is irrespective of what graphics cards use either of them.
 

No, you didn't say the opposite, or else your first reply wouldn't have been all about HBM, HBM, HBM. Neither card is adequate for 4K on its own. Or did you miss that part? Going 4K to get a "win" is not a win (also look at the TPU link below), especially when at 4K you're running sub-40fps. It doesn't make sense to go 4K with one card only to have to resort to a mix of medium and high quality settings.

AMD needed HBM because their architecture was hot as ****. So hot even AMD thought their flagship needed water. And let's not forget the 295X2, with one or two vendors even attempting an air-cooled version. One of which (PowerColor?) had HORRIBLE Newegg reviews. Don't even get me started on Crossfire drivers...

Best overall and best value:
https://www.techspot.com/bestof/graphics-cards/
 
What are you on about? This is completely off topic.
I'm not talking about specific cards. I'm talking about memory. You said "HBM: beaten by GDDR5". That is 100% false. Even the fastest GDDR5 implementation to date (384 GB/s) is slower than the (only) HBM implementation we've seen (512 GB/s), while at the same time having higher latency and consuming more power. GDDR5 is worse in every way. That is irrespective of specific models of graphics cards. Regardless of how the 980 Ti and Fury X compare among themselves, the 980 Ti would be better if it had HBM instead of GDDR5, and the Fury X would be worse if it had GDDR5 instead of HBM. The 980 Ti isn't better (below 4K, since it's not better at 4K) because it has GDDR5; it's better DESPITE having GDDR5. It's better because the other components on the GM200 chip itself, such as ROPs and stream processors, compensate for the fact that it has slower memory.
Why is that so hard for you to understand?
 

It's faster, yet it had no effect against Maxwell with GDDR5. In fact, AMD touted how efficient HBM was, yet their flagship overclocked like a toaster. Your argument is that HBM is faster, and it is, but AMD did nothing with it, except add water cooling to one card and make a mini version of the same card called the Nano.

tl;dr: AMD flunked with their implementation of HBM 1.0. Fact.
 
tl;dr: AMD flunked with their implementation of HBM 1.0. Fact.
You must think the implementation of HBM is the foundation for AMD's shortcomings. You're confusing problems AMD had before the implementation of HBM and using them as an excuse for problems they still have after it. AMD flunking, as you say, had nothing to do with the introduction of HBM. In fact, their implementation of HBM was one of the best features of Fury.

The fact that GDDR5 was chosen over HBM does not make it a clear winner over HBM. <sarcasm ahead>Hell, some cards have DDR3 chosen over GDDR5; I guess that makes DDR3 the clear winner over all memory.

tl;dr: You can't strap a jet engine to a prop plane and expect it to do Mach 6, and then suggest the jet engine was the reason for the failure.
 
You must think the implementation of HBM is the foundation for AMD's shortcomings.

The complete opposite, actually. I in fact said their architectures have been their downfall. They were hot (290Xs running at 95C, the water-cooled 295X2, and the water-cooled Fury X), underperforming, and driver support was severely lacking. The other guy was the one yelling "HBM" from the rooftops, not me. No video card wins by one spec alone, or else AMD would be first every time with their higher number of processors and (GDDR5 and HBM) memory bandwidth over their competitor, as examples...
 