AMD shares new details regarding its upcoming Radeon RX 470, RX 460 graphics cards

Shawn Knight


During a recent press event in Australia, AMD revealed additional details regarding its upcoming Radeon RX 470 and RX 460 graphics cards.

As for the Radeon RX 470, a slide published by VideoCardz reveals it will utilize the same blower-style cooling system found on the RX 480. Furthermore, manufacturing partners will have the option to churn out cards with 8GB of GDDR5, although AMD reference cards will apparently be limited to just 4GB.

The RX 470 will also feature a 256-bit memory interface in conjunction with 32 Compute Units, or 2,048 Stream Processors, down from the 2,304 Stream Processors found in the current RX 480. It'll be powered by a single 6-pin power connector in addition to what it draws from the PCIe slot.

Elsewhere, the RX 460 will have 14 Compute Units (896 Stream Processors), with reference boards coming equipped with 2GB of GDDR5 and a 128-bit interface. As you can see, the card is rather short and resembles AMD's R9 Nano, which would make it much easier to fit inside cases with very little room. Unlike the other two cards, this one won't require an ancillary power source.
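For reference, the Stream Processor counts follow directly from the Compute Unit counts, since each GCN compute unit contains 64 stream processors. A quick sanity check against the numbers above:

```python
# Each GCN compute unit (CU) contains 64 stream processors (SPs),
# so the CU counts on AMD's slides imply the SP counts directly.
SP_PER_CU = 64

cards = {"RX 480": 36, "RX 470": 32, "RX 460": 14}
for name, cus in cards.items():
    print(f"{name}: {cus} CUs x {SP_PER_CU} = {cus * SP_PER_CU} stream processors")
```

This reproduces the 2,304 / 2,048 / 896 figures quoted above.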

Last but not least is an updated GPU roadmap that describes Vega as a high-end architecture for high-end gamers. The next update, scheduled for 2018, is Navi which will address scalability and feature next-generation memory.

No word yet on when the RX 470 and RX 460 will arrive although the general consensus among the community seems to be next month and shortly thereafter, respectively.


 
I'm more interested in that "next gen memory". HBM3?

Just pure speculation, but it could be a low-cost version of HBM2. If AMD repeats its value-card-first strategy for next-gen cards, it would mean very high-performance memory for everyone.

I think the more interesting part of that is scalability. Does this mean an architecture that can easily be scaled up and down in size? If so, AMD could feasibly create any size of card it wants in order to meet demand. Nvidia releases the next Titan and AMD would be able to scale its card above that performance. Vice versa, AMD could scale the design down to work with various mobile and custom solutions.
 
It will most likely allow them to more easily control the number of CUs, shaders, memory configurations, etc.
But even so, I doubt we'll see a mobile solution from them. Making more powerful GPUs when needed does indeed sound interesting, but they need to lower power consumption by a fair margin to be able to do it.

In the end we'll just have to wait for something official from AMD.
 

Yep, this roadmap is literally all we have to go on.
 

The theory right now about Navi's scalability is that AMD will launch one very small, cheap GPU core, then link multiple of them on one package, much like the early days of dual- and quad-core CPUs. Because DX12 and Vulkan can scale across many GPU cores while sharing one pool of memory, this is very possible. So the low-end card may have one core, two cores at midrange, three at high-end, and four at the enthusiast level, which would be kind of like four RX 480s in CrossFire, except on one chip and one graphics card. AMD has already demonstrated DX12/Vulkan GPU scaling as high as 95% in its testing. Now, this is all speculation, but it would be a simple way to bring costs down and unify the whole lineup. It'll ultimately come down to whether developers choose to program their games to utilize four GPU cores or beyond.
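Just to put a rough number on what that 95% scaling claim would mean, here's a back-of-the-envelope sketch. The per-die performance unit is arbitrary; only the 95% per-added-die efficiency figure comes from the claim above:

```python
# Estimate multi-die performance assuming each additional die
# contributes `efficiency` times the performance of a single die.
def scaled_performance(single_die, dies, efficiency=0.95):
    return single_die * (1 + efficiency * (dies - 1))

base = 100.0  # arbitrary performance units for one die
for n in (1, 2, 3, 4):
    print(f"{n} die(s): ~{scaled_performance(base, n):.0f} units")
```

At 95% efficiency, four dies would land around 3.85x a single die, which is roughly the "four RX 480s on one chip" comparison above.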
 

That theory originated from Adored. The only problem I have with it is that it relies entirely on DX12 and/or Vulkan; DX11 performance would be severely limited. I can't see Navi coming out with something like this so soon, as there simply aren't nearly enough DX12 and Vulkan games yet. Not to mention, performance in non-DX12/Vulkan games would be very poor.
 

Navi's not due out till 2018; by then DX12/Vulkan will be standard. Also, this is two architectures away, so I'd expect each core to be near GTX 1070 level, which is more than enough for DX11 games, and it should work like CrossFire for all DX11 titles that support it. Seems like a win if it's true; it's just much different from what we're used to.
 

Vega isn't really a new architecture; it's big Polaris with HBM2. You're also assuming they're going to stick two of the bigger cores onto one chip with Navi, which would essentially restrict Navi to the high end only. If they go with weaker Polaris/Vega cores, then DX11 performance will be poor.
 

Stop! You are getting me too excited at the idea of AMD being able to make a 1,000mm², 350W single card. My god, it would be amazing...
 

Ideally I think it would work like how Zen is built: several quad-core modules linked together as a single CPU. The OS would see only one GPU with Navi if they simply put an interposer or some master core at the front of it to control things. You wouldn't need to program for CrossFire, but async compute would likely be paramount, and by 2018 every single game will use it.
 
AMD seems to be doing a lot of work with interposers lately, particularly integrating HBM onto an interposer with their GPUs. What I think they may be going for is fabricating the memory controller, shader cores, etc. on individual dies and then connecting them on a single interposer. This would allow you to build massive chips out of very small dies. I have no idea if this could work, though, as the connections between things like the shader cores and memory controllers may simply be too complex to realistically do.
 

That's an interesting take on it, but if AMD were the first to come up with that, it would have huge implications for the entire computer world. The cost of producing CPUs, GPUs, and nearly everything else would go down, and performance would go up. I don't know much about interposers, so I can't really comment on the feasibility. All we've seen with interposers so far is a very high-bandwidth connection between a GPU core and its memory. Whether you can break a GPU into pieces remains to be seen. The biggest hurdle would be speed: how can you split a GPU across different dies and not reduce it?
 
Yeah, a lot of unanswered questions and unknowns. I'm only speculating at this point, as it seems like something that's plausible but perhaps completely impractical. It would seem to have lots of benefits, though.
 

Yeah, latency between the different dies would be critical.
 