AMD's next-gen Polaris graphics cards may launch in June

Scorpus


If the latest information from HardwareBattle is correct, the patient wait for next-generation graphics cards may finally be over in the coming months. The usually reliable site has received information that suggests AMD will be launching their new Polaris GPUs at the end of June, perhaps with a paper launch at Computex the previous month.

Not much is known about AMD's upcoming Polaris series at this stage, although VideoCardz believes that the Radeon R9 490X (or 490) will be based on a Polaris 10 GPU. This suggests that, unlike with the R9 300 series, AMD will not be re-branding last year's flagship Fiji cards as R9 490 series cards.

Whether or not AMD will re-brand their old cards at all remains to be seen. The company has a habit of releasing only a few truly new graphics cards in any given year, but with Polaris' shift to 16nm from 28nm, we could see AMD give their line-up the complete refresh it deserves.

On the other hand, AMD might decide to continue selling the Fiji-based Fury line, considering some of these cards have been on sale for less than a year, whether through a re-brand or simply by keeping the Fury line available. However, VideoCardz does believe that Polaris 10 will compete with Nvidia's GTX 1080/1070 series, which suggests it will offer a handy performance boost over existing cards.

Hopefully this rumor turns out to be true, as we've been seriously starved of exciting graphics card launches for some time.


 
I really need a new graphics card but I can't make up my mind whether or not to wait for Polaris. The benefits are obvious, but the timescale for when we'll actually be able to buy one of these in the UK is less so. Guess I'll just have to grit my teeth.
 
Definitely wait. I need something to replace a GTX 680 in my Mac Pro but the jump from Maxwell to Pascal is said to be considerable, and worth waiting for.
 
Definitely wait. I need something to replace a GTX 680 in my Mac Pro but the jump from Maxwell to Pascal is said to be considerable, and worth waiting for.
They always say that about the architecture from one generation to the next but in reality the difference in performance is never really that big (which is good), it's all marketing gobbledygook. Maybe HBM2 will change things a bit.
 
I really need a new graphics card but I can't make up my mind whether or not to wait for Polaris. The benefits are obvious, but the timescale for when we'll actually be able to buy one of these in the UK is less so. Guess I'll just have to grit my teeth.
What I might do is wait until Pascal comes out, then pick up a used, previous-generation card when people start dumping them on eBay.
 
Wait a second... AMD (and Nvidia) always rebrand their cards, and the performance gain from one generation to another is always quite small, except for the $600 cards. In general, all the cards seem to be getting bigger and take more power, and nowadays they cannot seem to make a single-slot card with any graphics power. There are single-slot cards, but they are woefully inept.

Personally, I have been disappointed with the last 6 years of "new" or should I say "rebadged" cards.
 
Wait a second... AMD (and Nvidia) always rebrand their cards, and the performance gain from one generation to another is always quite small, except for the $600 cards. In general, all the cards seem to be getting bigger and take more power, and nowadays they cannot seem to make a single-slot card with any graphics power. There are single-slot cards, but they are woefully inept.

Personally, I have been disappointed with the last 6 years of "new" or should I say "rebadged" cards.

This isn't a rebrand. It is an entirely new manufacturing process, FinFET at 14nm (not 16nm as in the article; that is Pascal), down from the current 28nm.
That actually means much less power is required and obviously more transistors/processing power.
We should be looking at a 50% performance increase on today's comparable cards, and better still, the manufacturing process is cheaper so prices should be lower overall.
The Fury X and 980 Ti should fall in price after the new GPUs are released, but we may have to wait for Pascal to arrive to see the true benefits in price cuts.
 
Definitely wait. I need something to replace a GTX 680 in my Mac Pro but the jump from Maxwell to Pascal is said to be considerable, and worth waiting for.
They always say that about the architecture from one generation to the next but in reality the difference in performance is never really that big (which is good), it's all marketing gobbledygook. Maybe HBM2 will change things a bit.

It is an entirely new manufacturing process, FinFET at 14nm (not 16nm as in the article; that is Pascal), down from the current 28nm.
HBM2 relates to memory (faster, with more bandwidth, but it may not ship with Polaris or even Pascal at first as it is currently in short supply and expensive). The FinFET 14nm manufacturing process and Polaris architecture will still see big improvements with GDDR5 - I also believe there is a new, faster generation of GDDR5 memory that will be used.
Overall it actually means much less power is required and obviously more transistors/processing power.
We should be looking at a 50% performance increase on today's comparable cards, and better still, the manufacturing process is cheaper so prices should be lower overall.
The Fury X and 980 Ti should fall in price after the new GPUs are released, but we may have to wait for Pascal to arrive to see the true benefits in price cuts.
 
I really need a new graphics card but I can't make up my mind whether or not to wait for Polaris. The benefits are obvious, but the timescale for when we'll actually be able to buy one of these in the UK is less so. Guess I'll just have to grit my teeth.
What I might do is wait until Pascal comes out, then pick up a used, previous-generation card when people start dumping them on eBay.

Smart thinking... no need to have the latest and greatest when you can pick up the old latest and greatest for much cheaper.
 
Glad to hear this... I'm still running two 290X's; I didn't feel an upgrade to a 390X or Fury would be worth it!

I'm disappointed in the whole R9 300 series of GPUs... Who knows, I may just jump ship to Nvidia
 
Glad to hear this... I'm still running two 290X's; I didn't feel an upgrade to a 390X or Fury would be worth it!

I'm disappointed in the whole R9 300 series of GPUs... Who knows, I may just jump ship to Nvidia

From what I have heard, the 390's do not OC as well as the 290's do. I was recently in a competition where the benchmarks from the 290's stomped the 390's and even kept up with the Nvidia 980's. I prefer Nvidia cards, but it seems that AMD is really making a comeback with some of their higher-end cards and the performance is comparable. Currently AMD has the upper hand when it comes to DX12 games. Hopefully Nvidia gains some ground with Pascal in DX12 games.
 
This isn't a rebrand. It is an entirely new manufacturing process, FinFET at 14nm (not 16nm as in the article; that is Pascal), down from the current 28nm.
Not entirely. AMD have announced two GPUs for 14nm. Two GPUs would indicate one fully functional part and one salvage part per GPU for a total of four SKUs. The low/entry level mainstream end will still very likely be rebrands (Oland based for example). Polaris 11 is supposedly a 128-bit bus width card, while Polaris 10 should render the Fury cards EOL. That still leaves some sizable gaps in the product stack.
We should be looking at a 50% performance increase on today's comparable cards, and better still, the manufacturing process is cheaper so prices should be lower overall.
That is incorrect. Even factoring in smaller die sizes (more die candidates per wafer), R&D and manufacturing costs are significantly higher:
But perhaps the biggest issue is cost. The average IC design cost for a 28nm device is about $30 million, according to Gartner. In comparison, the IC design cost for a mid-range 14nm SoC is about $80 million....[snip]...If that’s not enough, there is also a sizable jump in manufacturing costs. In a typical 11-metal level process, there are 52 mask steps at 28nm. With an 80% fab utilization rate at 28nm, the loaded manufacturing cost is about $3,500 per 300mm wafer, according to Gartner. At 1.3 days per lithography layer, the cycle time for a 28nm chip is about 68 days. “Add one week minimum for package testing,” Wang said. “So, the total is two-and-half months from wafer start to chip delivery.”

At 16nm/14nm, there are 66 mask steps. With an 80% fab utilization rate at 16nm/14nm, the loaded cost is about $4,800 per 300mm wafer, according to Gartner. “It takes three months from wafer start to chip delivery,” he added.
The Fury X and 980 Ti should fall in price after the new GPUs are released, but we may have to wait for Pascal to arrive to see the true benefits in price cuts.
Polaris and GP104 should arrive at the same time (probably end-June/early July). Rumours are that the GTX 980 Ti has already tailed off production, and the Fury cards probably won't be too far behind (excepting the Duo, S9300 X2, and maybe the Nano). Manufacturing and production costs for the Fury make selling the cards any cheaper a very problematic financial issue.
 
I am very eager to get a Pascal at around a €600 price, along with the HTC Vive. Neither is available in Portugal yet (Pascal is nowhere, of course).
 
They always say that about the architecture from one generation to the next but in reality the difference in performance is never really that big (which is good), it's all marketing gobbledygook. Maybe HBM2 will change things a bit.

Yeah, that's true; they never unleash the full potential in the first generation, they keep plenty in reserve for the future.
 
What I might do is wait until Pascal comes out, then pick up a used, previous-generation card when people start dumping them on eBay.

That's probably a good idea, though Nvidia's current poor DirectX 12 performance raises further questions. I guess if you upgrade more or less annually then it's a non-issue, given the current lack of DirectX 12 titles anyway. But being the tight *** that I am, I probably won't replace my next graphics card for a few years.
 
This isn't a rebrand. It is an entirely new manufacturing process, FinFET at 14nm (not 16nm as in the article; that is Pascal), down from the current 28nm.
Not entirely. AMD have announced two GPUs for 14nm. Two GPUs would indicate one fully functional part and one salvage part per GPU for a total of four SKUs. The low/entry level mainstream end will still very likely be rebrands (Oland based for example). Polaris 11 is supposedly a 128-bit bus width card, while Polaris 10 should render the Fury cards EOL. That still leaves some sizable gaps in the product stack.
We should be looking at a 50% performance increase on today's comparable cards, and better still, the manufacturing process is cheaper so prices should be lower overall.
That is incorrect. Even factoring in smaller die sizes (more die candidates per wafer), R&D and manufacturing costs are significantly higher:
But perhaps the biggest issue is cost. The average IC design cost for a 28nm device is about $30 million, according to Gartner. In comparison, the IC design cost for a mid-range 14nm SoC is about $80 million....[snip]...If that’s not enough, there is also a sizable jump in manufacturing costs. In a typical 11-metal level process, there are 52 mask steps at 28nm. With an 80% fab utilization rate at 28nm, the loaded manufacturing cost is about $3,500 per 300mm wafer, according to Gartner. At 1.3 days per lithography layer, the cycle time for a 28nm chip is about 68 days. “Add one week minimum for package testing,” Wang said. “So, the total is two-and-half months from wafer start to chip delivery.”

At 16nm/14nm, there are 66 mask steps. With an 80% fab utilization rate at 16nm/14nm, the loaded cost is about $4,800 per 300mm wafer, according to Gartner. “It takes three months from wafer start to chip delivery,” he added.
The Fury X and 980 Ti should fall in price after the new GPUs are released, but we may have to wait for Pascal to arrive to see the true benefits in price cuts.
Polaris and GP104 should arrive at the same time (probably end-June/early July). Rumours are that the GTX 980 Ti has already tailed off production, and the Fury cards probably won't be too far behind (excepting the Duo, S9300 X2, and maybe the Nano). Manufacturing and production costs for the Fury make selling the cards any cheaper a very problematic financial issue.
Sorry if I missed it, but you stated price per wafer. How many chips are they getting? The same number of transistors would mean more chips per wafer at 14nm, hence a much better price per chip.
 
Sorry if I missed it, but you stated price per wafer. How many chips are they getting? The same number of transistors would mean more chips per wafer at 14nm, hence a much better price per chip.
Using the oft-quoted 232mm² for Polaris 11, and assuming that the 128-bit bus GPU replaces Tonga in the lineup, you are looking at 251 die candidates per 300mm wafer compared with 156 candidates for Tonga - a 61% increase in dice versus a 37% increase in wafer cost and a 166% increase in chip design cost. You then have to factor in yield. 28nm is a very mature process (well over the 80% from 3+ years ago). Getting any precise information is going to be near impossible considering current 14nm chips are ~100mm² ARM SoCs, and yield will depend upon how much of the die can be fused off and still remain viable as a salvage part. Assuming the standard ~90% of a die is required for a salvage part (R9 290X/390X -> R9 290/390, R9 380X -> R9 380, HD 7970/280X -> HD 7950/280), you would be looking at between 40 and 60% of a mature process yield (an estimate based on ARM chip production; independent analysis of the yield ramp is more circumspect, but obviously trails the mature process).
Given those numbers:
28nm HPC: $3,500/wafer ÷ (156 die candidates × ~80% yield) = <$28 per GPU
14nm LPP: $4,800/wafer ÷ (251 die candidates × 40-60% yield) = $32-$48 per GPU, plus increased R&D costs.
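For anyone who wants to check the arithmetic, here is the same back-of-envelope calculation as a quick Python sketch. It is a toy model, assuming the wafer cost is simply spread over the good dies; masks, packaging, test and R&D amortisation are deliberately ignored, as in the figures above.

```python
# Toy model: wafer cost divided by the expected number of good dies.
# Uses only the numbers quoted in the post above.

def cost_per_good_die(wafer_cost_usd, die_candidates, yield_fraction):
    """Wafer cost spread over the dies that actually work."""
    return wafer_cost_usd / (die_candidates * yield_fraction)

# 28nm HPC: $3,500/wafer, 156 candidates, mature ~80% yield
print(round(cost_per_good_die(3500, 156, 0.80)))   # ~28

# 14nm LPP: $4,800/wafer, 251 candidates, estimated 40-60% yield
print(round(cost_per_good_die(4800, 251, 0.60)))   # ~32
print(round(cost_per_good_die(4800, 251, 0.40)))   # ~48
```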
 
Will they eliminate lag and rubber-banding completely in multiple material and texture environments where there is very high activity?
 
Using the oft-quoted 232mm² for Polaris 11, and assuming that the 128-bit bus GPU replaces Tonga in the lineup, you are looking at 251 die candidates per 300mm wafer compared with 156 candidates for Tonga - a 61% increase in dice versus a 37% increase in wafer cost and a 166% increase in chip design cost. You then have to factor in yield. 28nm is a very mature process (well over the 80% from 3+ years ago). Getting any precise information is going to be near impossible considering current 14nm chips are ~100mm² ARM SoCs, and yield will depend upon how much of the die can be fused off and still remain viable as a salvage part. Assuming the standard ~90% of a die is required for a salvage part (R9 290X/390X -> R9 290/390, R9 380X -> R9 380, HD 7970/280X -> HD 7950/280), you would be looking at between 40 and 60% of a mature process yield (an estimate based on ARM chip production; independent analysis of the yield ramp is more circumspect, but obviously trails the mature process).
Given those numbers:
28nm HPC: $3,500/wafer ÷ (156 die candidates × ~80% yield) = <$28 per GPU
14nm LPP: $4,800/wafer ÷ (251 die candidates × 40-60% yield) = $32-$48 per GPU, plus increased R&D costs.
Process issues aside, wafer imperfections are usually localised, which means higher yield when the die size is smaller, since roughly a fixed number of dies per wafer are lost to defects. You have not factored in that this is the greatest advantage of a smaller process once it has reasonably matured.

40-60% sounds low. Will await the official figures.
 
I hope it is a great success so we can keep at least 2 mainstream home graphics companies going.
 
Process issues aside, wafer imperfections are usually localised, which means higher yield when the die size is smaller, since roughly a fixed number of dies per wafer are lost to defects. You have not factored in that this is the greatest advantage of a smaller process once it has reasonably matured.
The calculation is weighted by market volumes and die sizes, as it should be. This is very basic stuff, and is why yields are weighted this way (for example, TSMC aren't going to claim a single yield value that is applicable to both the 148mm² GM107 and the 601mm² GM200). The only way the model falls apart is if AMD decide to design only a single GPU outside the weighted average, but that would then show a skew in the financials (AMD's average revenue per GPU is $28.90; dies range from 90mm² (Oland) to 596mm² (Fiji), with the weighted average lying between Bonaire (160mm²) and Curacao (212mm²)). Yields are calculated by the metric below multiplied by each wafer tranche for each GPU.
[attached image: yield metric]
40-60% sounds low. Will await the official figures.
40-60% was for SoCs half the size of the GPU I mentioned. GlobalFoundries' own press releases put 14nm LPP at 80+% for the most simple, small, low-density, low-power test IC (128Mb SRAM), which is sub-100mm². If GloFo is claiming 80% for a simple low density/voltage/area chip, what would you think happens when you more than double the area, double the voltage, and use a complex design that exceeds the repetitive simple cell structure of SRAM, which boosts (mainly for PR) parametric yield numbers?
If you are going to wait on definitive yield numbers, my advice is don't hold your breath; foundries don't (and haven't since the modern foundry model arrived in the '70s) publicize them. The best you can do is estimate using their PR as a best-case scenario and a little math allied with units sold and their attendant revenue.
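As a rough illustration of why a yield figure quoted for a small test chip doesn't transfer to a larger die, here is a sketch using the classic Poisson die-yield model Y = exp(-A·D0). The defect density is back-solved from the 80% / ~100mm² figure mentioned above (an assumption for illustration, not a published number), and the Poisson model is known to be optimistic for large, complex dies.

```python
import math

def poisson_yield(die_area_mm2, d0_per_cm2):
    """Fraction of dies expected defect-free: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

# Back out the defect density implied by ~80% yield on a ~100mm2 test IC
d0 = -math.log(0.80) / (100 / 100.0)    # ~0.22 defects per cm^2 (assumed)

# Apply it to the ~232mm2 Polaris 11 die discussed earlier
print(f"{poisson_yield(232, d0):.0%}")  # ~60%, before design complexity bites
```

Even this optimistic model lands at the top of the 40-60% range quoted above; a real yield ramp on a new process with a complex logic design would sit lower.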
 