AMD launches 7nm Vega: Radeon Instinct MI60 and MI50

LemmingOverlrd

Something to look forward to: AMD has delivered a boost to its datacenter offering by releasing new GPUs tailored for HPC and hardware virtualization. On one hand, it is addressing the market's current shift towards machine learning and HPC; on the other, it is leaving the door open for a new class of cloud services in GPU computing.

The flip side to today's big 'Next Horizon' Zen 2 news was the introduction of the much-awaited next-gen Radeon architecture, Vega 20, built on TSMC's 7nm process node. The first products to come out of the AMD forge are the AMD Radeon Instinct MI60 and MI50 GPUs.

At the AMD Next Horizon event today, David Wang, Senior Vice-President of Engineering at the Radeon Technologies Group, took to the stage to explain just why AMD thinks the new Radeon Instincts are such winners. Much like in the Zen 2 presentation, the virtues of 7nm were played up once again: chips twice as dense, 1.25x higher performance and 50% lower power consumption compared to the previous 14nm process. Physically, the 7nm Vega delivers more on considerably less real estate: the 'old' Vega 10 measured 510mm2, while the new 7nm Vega measures just 331mm2, and AMD still managed to nudge the transistor count upwards (12.5bn in Vega 10 vs. 13.2bn in Vega 20).
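Those die figures are worth a quick sanity check: they imply an effective density gain of roughly 1.6x for this particular chip rather than the process's headline 2x, which is plausible given that I/O and analog blocks shrink far less readily than logic. A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope density check using the die figures quoted above.
vega10_transistors, vega10_area_mm2 = 12.5e9, 510  # 14nm Vega 10
vega20_transistors, vega20_area_mm2 = 13.2e9, 331  # 7nm Vega 20

density_14nm = vega10_transistors / vega10_area_mm2  # ~24.5M per mm^2
density_7nm = vega20_transistors / vega20_area_mm2   # ~39.9M per mm^2

print(f"Effective density gain: {density_7nm / density_14nm:.2f}x")  # ~1.63x
```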

The new 7nm Vega in the MI60 delivers the same number of compute units (64) as its predecessor, while the MI50 carries 60. While Wang did not disclose clock speeds on stage, AMD's press materials do mention boost clocks of 1800MHz and 1746MHz for the MI60 and MI50 respectively.

Below, taking a closer look at how the MI60 fares against its predecessor, as drawn up by AMD, we can see that the MI60 draws a little less power at a higher 1.8GHz frequency than an MI25 running at 1.5GHz. That's a 30% gain in overall performance per watt at peak frequencies, but it also means that, should an MI60 be clocked at the same speed as the MI25, there would be power savings of about 50%. Much like with Zen 2, this gives AMD build options when packing a whole bunch of these cards inside a server.
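AMD's chart carries no absolute board power figures, so reconstructing the claim involves some guesswork. A minimal sketch, assuming the MI25's published 300W TDP and back-solving the MI60's draw from the claimed 30% gain:

```python
# Rough perf/watt reconstruction. Performance is assumed to scale
# linearly with clock speed at equal CU counts (64 on each card).
# Board powers are assumptions: 300W is the MI25's published TDP,
# and ~275W for the MI60 is back-solved from AMD's ~30% claim.
mi25_clock_ghz, mi25_power_w = 1.5, 300.0
mi60_clock_ghz, mi60_power_w = 1.8, 275.0

gain = (mi60_clock_ghz / mi60_power_w) / (mi25_clock_ghz / mi25_power_w)
print(f"Perf/W gain at peak clocks: {gain:.2f}x")  # ~1.31x, i.e. ~30%
```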

Despite the graphed-out 50% lower power at the same frequency, things get a little dicey when you look at the MI60's power connectors. In the exploded view, below, you can clearly see the dual 8-pin PCI Express power connectors, suggesting it'll once again draw power in the high 200s, or maybe even 300W.

All the horsepower on the MI60 is backed up by 32GB of HBM2 memory (16GB on the MI50), topping out at a mind-boggling 1 TB/s of memory bandwidth. It also delivers, says AMD, 7.5 TFlops of FP64 performance (6.4 TFlops on the MI50). The cards also support PCIe 4.0, which opens up options for server-side multi-GPU setups and enhanced CPU-to-GPU communication.
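Those throughput figures can be roughly reconstructed from the CU counts and boost clocks above, assuming GCN's usual 64 stream processors per CU, two FLOPs per fused multiply-add, and Vega 20's half-rate (1:2) FP64 support; small deviations from AMD's quoted numbers likely come down to the exact clock used:

```python
# Reconstructing peak FP64 throughput from CU count and boost clock,
# assuming 64 stream processors per CU (GCN), 2 FLOPs per FMA,
# and Vega 20's half-rate (1:2) FP64 throughput.
def peak_fp64_tflops(cus, boost_ghz, sps_per_cu=64, fp64_rate=0.5):
    fp32_tflops = cus * sps_per_cu * 2 * boost_ghz / 1000
    return fp32_tflops * fp64_rate

print(f"MI60: {peak_fp64_tflops(64, 1.800):.1f} TFlops")  # ~7.4
print(f"MI50: {peak_fp64_tflops(60, 1.746):.1f} TFlops")  # ~6.7
```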

Unlike Zen 2, the 7nm Vega does not appear to be a modular 'chiplet' design, but rather an evolution of the MI25's 'Vega XT' chip. This is hardly awe-inspiring, with Nvidia remaining at the top of the GPU pecking order and little at AMD to fight them off. Also, contrary to what many speculated, AMD has not gone down the ray tracing path with 7nm Vega. In a way, this hints at AMD's plans for Navi, its consumer GPU due in 2019, which may simply skip ray tracing entirely.

The new "datacenter GPUs", as AMD puts it, are optimized for Deep Learning, Inference, Training and general high-performance computing. But AMD has a new take on an old pitch, the MI60 and MI50 allow for full hardware-virtualized GPU (I.e. MxGPU), which might turn out to create new revenue streams in cloud-based GPU services.

Hardly as awe-inspiring as the CPU announcements today, the new 7nm Vega chips are shaping up to be little more than a shrink of their 14nm predecessors.

The MI60 will become available later this year, while the MI50 will only turn up at the end of Q1'2019. Prices were not discussed.

 
So this is a 30 percent gain for what appears to be only a mildly tweaked part. I know this isn't a consumer GPU; however, consider that AMD could shrink a Vega 64 down and presumably get the same result. With minimal architecture changes, that kind of performance jump would put a Vega 64 competitive with something like an RTX 2080. On paper, at least. You would hope Navi has extensive architecture enhancements on top of basic tweaks.

The part of the article that talks about skipping ray tracing for Navi would leave us with a scenario I envisioned back at the launch of the RTX series: one where AMD delivers similar 'basic' gaming performance to an RTX card on a smaller, much cheaper chip.

RTX features like ray tracing would distinguish the two GPUs. The necessary tensor cores, and the much bigger die that results, certainly mean you will pay a big premium for it. Do you just go with the cheaper AMD option that can be as fast minus the RTX features, or pony up the dough for RTX eye candy in the games that support it?
 
It's because of cost. Yes, AMD can do that, but at what cost? 7nm is new, and with giants like Huawei and Apple getting first dibs at the fab, do you think AMD can negotiate a decent price to shrink Vega 64? It could come out at 2080 performance and a 2080 price too, and then what?

That's why it makes more sense for the first 7nm gaming card to target 1080 performance at, hopefully, lower-than-1070 pricing. Otherwise, we are basically standing still. It's now all about price: I can't believe my 2016 GTX 1080 is still only $100 down from its original MSRP and there's no upgrade at a sane price. The 2080 Ti is just too expensive, and the 1080 Ti/2080 hardly justify an upgrade (at most 30% faster).
 
RTX kind of defeats the point of the eye candy if you can only play the game at 1080p; I really don't see the selling point of that. Assuming they somehow get acceptable performance, it will be years before enough games have ray tracing to justify the dedicated die space. Given the much smaller die size of the AMD chip, my bet is that AMD is going to price its GPUs competitively. If the top-end chip does well against the 2080 with only a 331mm2 die versus the 2080's 545mm2, that simply means AMD will be getting far better yields and lower costs.
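To put rough numbers on that, here's a toy Poisson yield model; the defect density is a made-up illustrative figure, not anything published, and using a single value for both nodes if anything flatters the newer 7nm process:

```python
import math

# Toy Poisson yield model. DEFECT_DENSITY is a made-up illustrative
# number, not a published foundry figure, and applying the same value
# to both nodes flatters the newer 7nm process.
DEFECT_DENSITY = 0.2  # defects per cm^2 (assumed)

def die_yield(area_mm2, d0=DEFECT_DENSITY):
    return math.exp(-d0 * area_mm2 / 100)  # convert mm^2 to cm^2

for name, area in [("Vega 20", 331), ("TU104 / RTX 2080", 545)]:
    print(f"{name}: {area}mm^2 -> ~{die_yield(area):.0%} yield")
```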

They may not be competing for top dog anymore, but this is to be expected after their drought of funding. If I were Lisa Su, I would not release my true next-gen product until I knew it would beat whatever Nvidia has.
 
But they've gotta release something new, not just a tweaked Vega. Why not a "tock" to Vega, with a new process dropping its die size? They did it with Ryzen, and people seemed OK with the performance-per-dollar boost.

Hybrid ray tracing is a great addition, and so is DLSS, but the tech has flaws: it's too new and not fully baked into DirectX, for example. Requiring games to be coded towards those features will hurt adoption. They have to start somewhere, sure, but releasing it as the headline feature with only demos to show it off is asinine. They're showing their hand too early. It's early enough that engine and DirectX devs could work on adding it to their systems, hopefully creating a universal way of doing it that AMD could then support. Compare how FreeSync was adopted versus GameWorks barely ever being adopted.

Personally, I'm semi-stuck with Nvidia, thanks to a new 21:9 G-Sync monitor. Before I upgraded from the 1440p G-Sync monitor to the 21:9 1440p one, I was really hoping AMD would release something to stomp my 1080 Ti FTW3. At this point, unless FreeSync 2 monitors are $300-$400 less than G-Sync ones, the only realistic way I'd go to AMD is if they can maintain a release schedule of cards competitive with the top Ti tier. With Nvidia, I'm guaranteed the ability to upgrade when a new line comes out.

At this point I can play most games at max settings at 3440x1440 at 60fps or better most of the time (AC: Odyssey, FH4... I don't play many games; it used to be a lot of WoW, but I've felt disconnected from it lately, given the crowd).

If AMD could release the best card they can pump out early next year, whether it's a process shrink of Vega with more overclocking headroom or something Navi, without hybrid ray tracing but at the speed of a 2080 Ti, for $800 (the cost of a 'G'TX 2080 Ti), they would have busted two industry leaders in a relatively short time.
 
Yesterday AMD mentioned very little about Navi, which concerned me. This doesn't sound like a GPU that will be available to buy any time soon.

So AMD won't compete at the high end again for quite a while. OK, fine, they need more time to get their house in order. In the meantime, I would have happily taken the straightforward Vega 64 shrink we have spoken of. As I said, if you take a Vega 64 and bust through 1800MHz core clocks as they have done here, you would be in sight of 1080 Ti/2080 performance. At worst, only a little short.

Nothing like that appears to be forthcoming. I think there was a golden opportunity to get 7nm GPU parts out there to compete against these 12nm RTX cards before Nvidia inevitably gets its own 7nm GPUs out later in 2019.

AMD doesn't look like it can hit that window, and is focusing on what makes it the most money: the Instinct line. It is an understandable decision, but disappointing all the same for gamers hungry for competition against Nvidia's very expensive RTX series.
 
AMD just didn't have the money to put towards making a card that good a few years back. At the very least, they have it now.

AMD has been very tight-lipped about Navi. What I do know is that AMD pulled a lot of resources from Vega to focus on Navi, so whatever it is, it has had more development time and R&D than Vega.
 