Report suggests AMD has pulled 'Vega' GPU launch forward to October

Scorpus


A shaky report from 3DCenter suggests that AMD has pulled the launch of their next high-end GPU, codenamed 'Vega', forward from a scheduled early 2017 release to October of this year.

There are suggestions that this move has been made in response to Nvidia's GeForce GTX 1080 and GTX 1070, which target the high-end performance segment of the graphics card market. With AMD's upcoming Polaris GPUs expected to target mainstream PC gamers with lower-cost cards, the company might not want to be left without competitive high-performance products until next year.

As The Tech Report notes, these reports have been cobbled together from a variety of sources, including a SemiAccurate post that said Vega has already been shown behind closed doors, and an update to AIDA64 that included "preliminary GPU information for AMD Vega 10 (Greenland)."

3DCenter seems to believe that Vega 10 will be a direct competitor to Nvidia's GP104 silicon, debuting with HBM2 and a large array of shader cores. However, the faster version of Vega, named Vega 11 (which is strange considering Polaris 11 is expected to be slower than Polaris 10), is still rumored to launch in early 2017 to compete with Nvidia's fully unlocked Pascal GPU.

Whether or not this rumor turns out to be true, AMD will be revealing details of their Polaris architecture shortly ahead of Computex later this month.


 
I was wondering why Nvidia was so calm early this year...they knew their product would stomp on AMD's. The response from AMD is also positive since this will re-ignite some sort of competition...if only short-lived. If you look at the past 4 years...there were only mediocre ~10% increases in performance; at least now we have something real to look forward to!
 
"Panicking over geforce 1080 perhaps?" - Not panicking. If you have been around from the beginning of the war between these two companies over the last 20+ years, you'd know that no one is panicking. These are all strategic moves made intentionally as either a trigger or a counter to the other's. AMD unveils Polaris at extremely low price points for low-mid range, providing high end performance at a $299 price point, stating that their high-end parts won't be available till 2017 (nice move). NV counters by revealing that they are not only releasing Pascal sooner than expected, but they are lowering the price to $379 for 1070 and $599 for 1080 in an attempt to smash AMD. AMD chuckles and responds by saying, "Oh, did we say 2017, just kidding, we'll give you the goods in October" to counter NVidia. NVidia will then counter with 1080Ti and Titan 2, and AMD will counter with Vega 11 and Pro Duo 2.

It's a continuous game of back and forth, as it has always been. Neither company is freaking out, neither company is the best. They both have their time in the spotlight. AMD has held the performance crown for the fastest single card GPU for 2 years+ now. Even when NV releases 1080, it still won't be as fast as a Pro Duo, but for a single GPU card, it will be incredible.

AMD is leaning toward utilizing multi-GPUs more often than ever before rather than constantly trying to make one huge single die, especially considering their interest and investment in VR. They have learned that dedicating a GPU per eye will provide the best results possible. I like the idea too, but $1,500 is stupid. If they brought that card down to $799, I'd consider it, because you can practically purchase two Fury Nanos for that much anyway.

Regardless of who you go with for a GPU, it won't really matter. Both companies are making incredible GPUs and you can't go wrong with either of them. I give credit to AMD for being the little underdog company that forces the huge companies to reduce their prices, so that all of us consumers benefit.
 
Wonder if they will manage to fit HBM2 or go with GDDR5X. I originally thought that the 1080 would be HBM2-based and the 1070 would be based on GDDR5X, but was of course wrong.

Curious to see how HBM2 will stand up on the consumer market compared to its older iteration.
 
Can't see AMD using GDDR5X at all on high-end parts, that's Nvidia country. If HBM2 doesn't happen for some reason, then they will stick with the existing HBM1 they used in the Fury series.
 
AMD did "move up" anything. They simply revealed a conservative road map then waited to see what NVidia does. Then they acted.

Since 16nm GTX 1080 has to be overclocked to beat 28nm R9 Fury X that has given AMD the entry point for 14nm Vega.

GTX 1080 is actually a disappointment. A new process and clock for clock GTX 1080 is no better than the old process. Yes 16nm GTX 1080 can be clocked to 2.1 gHz. So can 14nm Vega WITH HBM2.

AMD stole NVidia's Christmas AGAIN.
 
AMD did "move up" anything. They simply revealed a conservative road map then waited to see what NVidia does. Then they acted.

Since 16nm GTX 1080 has to be overclocked to beat 28nm R9 Fury X that has given AMD the entry point for 14nm Vega.

GTX 1080 is actually a disappointment. A new process and clock for clock GTX 1080 is no better than the old process. Yes 16nm GTX 1080 can be clocked to 2.1 gHz. So can 14nm Vega WITH HBM2.

AMD stole NVidia's Christmas AGAIN.

How is Nvidia's new product a "disappointment" when it hasn't even been released yet? You base your assumptions on an Nvidia press event. Until the real product comes out and we see real-world testing, anything anyone says at this point is speculation.
 
AMD has held the performance crown for the fastest single card GPU for 2 years+ now. Even when NV releases 1080, it still won't be as fast as a Pro Duo, but for a single GPU card, it will be incredible.
Dual-GPU cards cannot be compared with single-GPU cards, but it's pretty sad when you can. Dual-GPU cards are compared to SLI/CrossFireX setups. And I just looked at recent (and older) reviews of an R9 Fury vs a 980 Ti, and at 1080p and 4K I saw the 980 Ti winning many of them.
The 980 Ti runs cooler at stock and does not need liquid cooling from the factory like the Radeon. I believe it has more overclock headroom; sorry, but I don't see how the Fury X is the single-GPU king when it loses as much as it wins:

http://www.extremetech.com/extreme/...fury-review-chasing-the-gtx-980s-sweet-spot/2

They're pretty even, but the 980 Ti is more efficient.

AMD is leaning toward utilizing multi-GPUs more often than ever before rather than constantly trying to make one huge single die, especially considering their interest and investment in VR.
AMD makes more dual GPU/liquid cooled solutions because their tech runs hotter and louder.

I give credit to AMD for being the little underdog company that forces the huge companies to reduce their prices, so that all of us consumers benefit.

AMD is the underdog because it is what it is: their tech/drivers can be a step behind, and their products are usually (scratch that, always) priced lower.

Toms Hardware said:
Radeon R9 Fury cards are even faster, though now you’re talking about an outlay of ~$1100. Those are the quickest boards from AMD that we’d pair up—Fury X is nice and short, but a pair of radiators is just too unwieldy.
The better option is two GeForce GTX 980 Tis pumping all of their waste heat out of your case in a simpler dual-slot form factor. For the ultimate in 4K with maxed-out quality, two 980 Tis are the way we’d go. Just be ready to drop $1200 on graphics and a platform powerful enough to prevent bottlenecks.

http://www.tomshardware.com/reviews/best-gpus,4380.html
 

I agree, you can't compare dual-GPUs to single GPUs. I think a lot of that dual vs single card stuff is bouncing around from a theory invented by AdoredTV on YouTube. His theory states that AMD will make its die sizes very small and have pretty much every card carry multiple GPUs. The yields would be amazing for something like this, but it would require support. That's a pretty big "if".
 
Panicking over geforce 1080 perhaps?
The general feeling is that AMD were pretty shocked at the clock speeds Nvidia were able to get out of their silicon. The architectural tweaks of Pascal aren't anything wondrous, but overall performance gets a big lift from the sheer speed the silicon runs at. It seems AMD were banking on more conservative clocks, and that, allied with their own purported failed clock-speed validation for Polaris at modest clocks, might have them in a nervous state.

Having said that, the Vega-in-October rumour originates from someone posting at SemiAccurate's forums. I'd suggest the rumour be taken with more than a pinch of salt.


Wonder if they will manage to fit HBM2 or go with GDDR5X
Vega was slated to use HBM2. Having said that, Synopsys (the company that lays out AMD's chips) shouldn't have any difficulty swapping out the HBM I/O and controller logic blocks for GDDR5X if required.
I originally thought that the 1080 would be HBM2-based and the 1070 would be based on GDDR5X, but was of course wrong.
It was an unrealistic expectation. HBM2 hasn't started volume production, and even if it had, the assembly costs (GPU + HBM2 + CoWoS interposer) would make it economically unviable at the prices the cards need to sell at. AMD effectively lost money on Fiji - I doubt Nvidia was planning on following suit. At the moment, I suspect a huge portion of HBM2 production is going towards fulfilling the 4,500 Tesla P100 board contract for the Piz Daint upgrade. At $10,000 per GPU module, it's a no-brainer as to where the HBM2 production should be going.
GDDR5X is very likely the same proposition. There is probably enough initial production to keep the GTX 1080 assembly line happy, but that will be a relatively small volume of cards. If the GTX 1070 mirrors the GTX 970 and sells 250,000 cards a month, that would be a tough ask for a newly ramping technology. I wouldn't be surprised to see a GTX 1070 refresh feature GDDR5X though - the memory controllers are obviously the same as for the 1080.
AMD did "move up" anything. They simply revealed a conservative road map
Basically the opposite of what AMD usually do then.
Vega was originally slated for 2017, but bringing a roadmap in is a whole lot better than letting it lapse...or forgetting about it completely
[Roadmap image: AMD Radeon 2016-2017 - Polaris / Vega / Navi]
 
There are office towers of people in either company following and working on all this stuff. Surprise or panic are about the last words that come to mind. I haven't been to the site, but SemiAccurate is a pretty amusing name.
 
I like AMD. Until just recently I used only AMD CPUs, and of the few discrete video cards I've owned, all but one were ATI. I've also owned two different nForce motherboards, and I was really looking forward to getting my grubby mitts on a Polaris card so I could take advantage of my monitor's FreeSync feature. All that said, this isn't business as usual. It's not really a game of cards. Bills have to be paid, and AMD has been having a hard time getting it done for a long while now, while Nvidia has grown very powerful. Nvidia can dwarf AMD's GPU R&D many times over, and that is a very real problem for AMD. AMD can't afford to misfire, yet they have to take risks, or they will be overrun.
I was/am waiting for a real-world rundown on Polaris, but like many of you I'm thinking it's not going to go in AMD's favor. The only reason I can see them releasing a card at around 390 speeds for, what, $25 less than a 390 is for someone very concerned with power usage (console market, maybe?). I'm not a GPU designer, and I don't care about clock for clock either. I mean it's interesting, but that is not how graphics cards are used. I don't pass a Prius on the highway and think, damn, that thing really does something with a little. I think, look at that thing disappear in my mirror. Especially after years of computer users getting used to the power needs associated with powerful cards; they already have the computer infrastructure to support that stuff.
Maybe I'll have to forgo FreeSync and jump on the Nvidia wagon. I guess we'll see shortly...
 
I haven't been to the site but SemiAccurate is a pretty amusing name.
It's a pretty amusing site. The owner, a self-proclaimed "tech insider", predicted that discrete graphics would be dead by last year...

...and has been predicting the death of Nvidia within a year or two continually since around 2005. He's like the anti-Nostradamus: as his predictions become more dire, the company's revenues, cash on hand, and profits increase.
 
What choice do they have, really?

Polaris, which was originally to be a high-performance part, was recently re-pitched by Roy Taylor as a volume mid-performance part.

Pascal beat Polaris to market and will expand over the next two quarters to cover all segments, high performance to volume, with the performance parts launching first.

The behavior by AMD suggests that Pascal beats Polaris bloody on the performance front, or else why the resetting of expectations/damage control by AMD and the acceleration of the successor to Polaris when Polaris itself has yet to launch?

AMD had better hope that Zen isn't a "do-over" dud too or it's going to be a very long 2016/2017 for them.
 
Well, I have yet to see the real performance from Nvidia's cards, but I really hope that AMD's Vega GPUs are a worthy upgrade, since I was really disappointed with my GTX 970.
 
Let me guess: the only thing you were disappointed with was the 3.5GB of memory, which you never could fully utilize anyway.

No sir, I don't really care about 0.5GB my PC won't use on my 1080p monitor, but I do care about high temps (beyond 70 °C) and especially lackluster performance in some games. It's curious: I changed my old GPU just because I wanted to use all the Nvidia features in The Witcher 3, and guess what? I could not even do that. Also, the Nvidia Control Panel is an ancient and horrendous piece of software.
 
@darkzelda, at least you seem to have a legitimate reason for being disappointed in the 970. To be honest, it is a bit refreshing to read about someone having issues other than the 3.5GB memory on the 970.
 
What choice do they have, really?

Polaris, which was originally to be a high-performance part, was recently re-pitched by Roy Taylor as a volume mid-performance part.

Pascal beat Polaris to market and will expand over the next two quarters to cover all segments, high performance to volume, with the performance parts launching first.

The behavior by AMD suggests that Pascal beats Polaris bloody on the performance front, or else why the resetting of expectations/damage control by AMD and the acceleration of the successor to Polaris when Polaris itself has yet to launch?

AMD had better hope that Zen isn't a "do-over" dud too or it's going to be a very long 2016/2017 for them.

You've made far too many assumptions here, but what is for sure is that Nvidia will need to make a smaller version of Pascal in order to compete on the lower end. Right now the 1070 and 1080 have very large die sizes, which aren't suited to volume production. Many major tech analysts are predicting that 1080s will be very limited in quantity given that GDDR5X hasn't even fully ramped up yet and that making a large GPU means lower yields.

What I suspect will happen is Nvidia will control the top end while AMD takes mid. The gap between mid and high end could very well be a battle between the 1070 and Polaris 10. I suspect that Nvidia cards will sit at the top until Vega.

On the GDDR5X shortage, AMD could very well have issues obtaining it. If I were AMD I would roll with it, lower the price of your cards and just use GDDR5. The memory bandwidth of the R9 300 series was already higher than that of their Nvidia counterparts, so it won't hurt AMD as much as it would Nvidia. I believe the R9 290X made the mistake of having far too much memory bandwidth, and in the end the only thing that did was increase the heat output.
 
Let me guess: the only thing you were disappointed with was the 3.5GB of memory, which you never could fully utilize anyway.

No sir, I don't really care about 0.5GB my PC won't use on my 1080p monitor, but I do care about high temps (beyond 70 °C) and especially lackluster performance in some games. It's curious: I changed my old GPU just because I wanted to use all the Nvidia features in The Witcher 3, and guess what? I could not even do that. Also, the Nvidia Control Panel is an ancient and horrendous piece of software.

Yeah, I am in the same club. Purchased a 970 and just shook my head at the performance with HairWorks turned on in The Witcher 3. You are not the first to dislike GameWorks features, though; they are almost unanimously disliked.
 