Surprise! AMD shares internal Fury X benchmarks ahead of review embargo

It doesn't "lose" to the rest, though?! Unless you're blind and can't read a simple graph? It's £350 cheaper, yet according to the leaked graphs it CAN keep up with a card that hits quadruple figures ($1,000) in price. I'd call that a win for AMD there..

That chart is compared to the 980Ti.... NOT the TitanX... The TitanX beats this card - And I'm not arguing about whether the card to buy is the Fury or the Titan (we won't know until real benchmarks appear anyways!), I'm simply saying that AMD can't claim the performance crown...

The Ti is better value than the Titan - the Fury will almost certainly be better value than the Titan as well... possibly even better than the Ti - we'll see once it's released...

But if money is no object - the TitanX is the card to buy... barring unforeseen results in REAL benchmarks - but I don't think the Fury will be beating the TitanX in any but a few select games - we'll see in a few days...

Again, you are incorrect: it does outperform the Titan X. However, the Titan X can run 8K now, and only above the 5K mark does the Titan X outperform the Fury X. This is really simple math to understand.

It's like a giant Ford 460 big block with an 850 CFM carb that makes a crazy-high 500HP and torque, but gets its nearly 10-liter *** destroyed by a 2.2-liter Subaru WRX. Whoever moves the resources faster wins.
 
Personally, I'm not that impressed, simply because of the timing. A bit late, and everything they've got only - in some cases - barely tops NVIDIA's offering. This is the same story all over again from when AMD cards with GDDR5 memory were competing head-to-head with NVIDIA cards using GDDR3: they applied a new memory technology to minimize bottlenecks in fetching data, but without a properly designed architecture to make full use of it.

Pascal, on the other hand, is designed with HBM memory in mind from the beginning, trying to exploit its full potential. I'm not saying to wait a whole year for it; just that that's the way a new technology must be implemented: designing everything else around it, not just pasting it on like a cheap patch. Improvising and bolting a turbine onto a car won't come close to making good use of it, due to the limitations of the car; what you're looking for is a plane, designed accordingly to make full use of the turbine. The car with the turbine may reach 150 mph quickly and safely, but not much more than that without losing control - loss of traction at higher speeds - while you could achieve 700 mph with the same turbine on a fighter jet.


Pascal was also designed for a 20nm node, which was supposed to come out Q1 next year. However, GloFo (honestly, it could have been TSMC, can't recall which) killed the 20nm process NVidia intended to use, and with it 20nm Pascal. Nvidia has had to go back and redesign for another node, which is not ready for prime time. So Nvidia announced they would be skipping 20nm; however, AMD already did that and got in the 16/14nm line ahead of Nvidia. Nvidia could not move forward with Pascal at 28nm because they would have heating issues due to the high memory clocks needed to perform on par with HBM-based cards. It is important to note that Nvidia was not even allowed to have HBM 1.0/1.5, as AMD co-developed it and owns part of the tech. AMD is also first in line for HBM 2.0. Nvidia is in a situation here, and people are so busy bashing AMD for being non-competitive with Intel or Nvidia - which is due in part to contra-revenue deals on Intel's part to keep them out of the game.

Just wait till the HSA Zen SoC APUs with built-in HBM hit the market. No company has an answer for that kind of performance, performance per watt, and form-factor adaptability.
 
That chart is compared to the 980Ti.... NOT the TitanX... The TitanX beats this card - And I'm not arguing about whether the card to buy is the Fury or the Titan (we won't know until real benchmarks appear anyways!), I'm simply saying that AMD can't claim the performance crown...
You're clearly struggling a bit, so here's a link to Techspot's review (just take the Titan X scores from that as well):
https://www.techspot.com/review/1011-nvidia-geforce-gtx-980-ti/

Now open another tab in your browser and open this Techspot article with the AMD internal benchmarks in - https://www.techspot.com/news/61042-surprise-amd-shares-internal-fury-x-benchmarks-ahead.html

Now compare...

You're right, they can't take the performance crown until these kinds of results are verified, but you really should read the scores the Titan X got. AMD DOES keep up with it in pretty much all the games Techspot tested that AMD also posted, and even beats it in a few.

Besides, AMD has a reputation for bad drivers; I wonder if they could squeeze more performance out of the Fury X with a few driver revisions? Time will tell!
 
Where are those "most people"? I have never seen anyone using a TV as a monitor, and I work in IT with people every day.
Since I was referring to most people not needing a refresh rate above 30Hz and therefore not caring whether they get one, I don't feel inclined to comment on how many actually use a TV as a monitor, because that's irrelevant to whether a TV would work for them. Just because monitors are available and usually what's used doesn't mean TVs wouldn't be sufficient for the mass population.
 
You are speaking as if the GTX 980 Ti had been on the market for years, when actually it was just released, and it seems the R9 Fury X is a better GPU. If you add that it's liquid-cooled and has a new type of memory, it looks to me like a really good high-end card
The difference here is that the 980 Ti is based on nVidia's current series of cards instead of a new architecture. This is oversimplifying things, but it's essentially a more powerful GTX 980. This is supposed to be AMD's new architecture, and it's not beating nVidia's existing tech by a very large margin. I don't see Fury's performance as compelling enough to justify paying $650 when something much more powerful is going to be out next year.

Since GlobalFoundries is currently ahead of TSMC and AMD already has HBM experience, chances are Arctic Islands will be out before Nvidia launches Pascal.
Well, I hope AMD comes out with something before nVidia's next gen or they are going to be seriously behind for a while. Perhaps I'm overestimating, but these performance numbers aren't really compelling enough to justify upgrading from video cards of the last few years. The market is already flooded with similar-performing video cards; it's going to take something really new before people shell out the cash for a new graphics card. I was honestly expecting more out of Fury than what these initial numbers are telling us. By all the hype, it sounded like the Fury X would land somewhere between the 980 Ti and the Titan X. Here's hoping it has lots of overclocking headroom
You are daft!!! You do not factor in the unoptimized Nvidia support for DirectX 12 or Vulkan. These both borrowed asynchronous compute from Mantle! This means AMD's cards benefit MORE from DirectX 12 than Nvidia's cards!!! Further, we all know AMD is not good at having optimized drivers at release. These benchmarks were performed on the 15.15 Catalyst driver or older. They are releasing the 15.20 Catalyst driver about the same time as putting the Fury X on sale, which should allow additional performance from the start. But as the drivers develop, AMD usually gets its performance kick in about the third month after release. Because of this, it will greatly outperform everything on the market until the Arctic Islands release, which is Q3, JUST LIKE NVIDIA PASCAL'S CURRENT TIMELINE. Saying wait until next year is punting the ball. It depends on whether you need to upgrade now or can wait! If now is needed, AMD is the obvious choice. If you can wait a year, getting a 14/16nm card with Gen2 HBM is obviously the better choice. That gives options of cards ranging from 8-32GB from either side. So, yes, waiting is preferable. But it does not mean Nvidia will have asynchronous compute optimized by then. Time will tell on their performance. But we aren't talking Q3 2016, a full year away, now are we? Nvidia should accept defeat with grace, except for 5K and 8K screens, which the Titan X can do better. But I don't want to get into that discussion again to explain the nuances of why that is...
 
Pascal was also designed for a 20nm node, which was supposed to come out Q1 next year. However, GloFo (honestly, it could have been TSMC, can't recall which) killed the 20nm process NVidia intended to use, and with it 20nm Pascal. Nvidia has had to go back and redesign for another node, which is not ready for prime time. So Nvidia announced they would be skipping 20nm; however, AMD already did that and got in the 16/14nm line ahead of Nvidia. Nvidia could not move forward with Pascal at 28nm because they would have heating issues due to the high memory clocks needed to perform on par with HBM-based cards. It is important to note that Nvidia was not even allowed to have HBM 1.0/1.5, as AMD co-developed it and owns part of the tech. AMD is also first in line for HBM 2.0. Nvidia is in a situation here, and people are so busy bashing AMD for being non-competitive with Intel or Nvidia - which is due in part to contra-revenue deals on Intel's part to keep them out of the game.

Just wait till the HSA Zen SoC APUs with built-in HBM hit the market. No company has an answer for that kind of performance, performance per watt, and form-factor adaptability.

Yeah, I have a sense of the backstory between NVIDIA and TSMC for Pascal. The latest slides and claims NVIDIA made seem either very optimistic or unrealistic; even if they end up being half of what they claim the improvement will be [compared to Maxwell], pending AMD's counter by then, Pascal should be the first true 4K GPU series on the green side. On the other side, GlobalFoundries and Samsung teaming up came as a miracle to AMD - probably just in time.

And regarding the second paragraph: Zen looks great - on paper. I don't know it as a fact [yet], but I believe cache memory is still at the top of the memory hierarchy - above HBM. And since the x86 ISA strongly depends on cache performance [hits and misses of the required instructions], it will all come down to that in the end: how big your cache is, how good the eviction algorithm is after a miss, and what task you're trying to achieve. HBM may come in handy in scenarios where a lot of branch mispredictions occur or where you have to work with huge amounts of data; but when the CPU is getting everything it needs from the cache, thanks to the spatial/temporal locality principles of caching, in theory you should see no performance difference with or without HBM.
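That hit/miss reasoning can be put in rough numbers with the classic average-memory-access-time formula; all latencies below are made-up illustrative values, not measured figures for any real chip:

```python
# Average Memory Access Time (AMAT) sketch: AMAT = hit_time + miss_rate * miss_penalty.
# Faster backing memory (e.g. HBM instead of conventional DRAM) only shortens
# the miss path, so its benefit scales with how often the cache misses.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average latency seen by the CPU per memory access, in nanoseconds."""
    return hit_time_ns + miss_rate * miss_penalty_ns

cache_hit_ns = 1.0       # assumed cache hit latency
dram_penalty_ns = 80.0   # assumed miss penalty to conventional DRAM
hbm_penalty_ns = 40.0    # assumed (faster) miss penalty to on-package HBM

for miss_rate in (0.01, 0.05, 0.20):
    dram = amat(cache_hit_ns, miss_rate, dram_penalty_ns)
    hbm = amat(cache_hit_ns, miss_rate, hbm_penalty_ns)
    print(f"miss rate {miss_rate:4.0%}: DRAM {dram:5.2f} ns, "
          f"HBM {hbm:5.2f} ns, speedup {dram / hbm:.2f}x")
```

With a 1% miss rate the two are nearly identical; at 20% misses the HBM-backed case pulls clearly ahead, which is exactly the point about cache-friendly workloads seeing little benefit.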
 
You're clearly struggling a bit, so here's a link to Techspot's review (just take the Titan X scores from that as well):
https://www.techspot.com/review/1011-nvidia-geforce-gtx-980-ti/

Now open another tab in your browser and open this Techspot article with the AMD internal benchmarks in - https://www.techspot.com/news/61042-surprise-amd-shares-internal-fury-x-benchmarks-ahead.html

Now compare...

You're right, they can't take the performance crown until these kinds of results are verified, but you really should read the scores the Titan X got. AMD DOES keep up with it in pretty much all the games Techspot tested that AMD also posted, and even beats it in a few.

Besides, AMD has a reputation for bad drivers; I wonder if they could squeeze more performance out of the Fury X with a few driver revisions? Time will tell!

Dude... those are two DIFFERENT graphs!! Different testbeds for each card!!

If you'll notice, the Ti gets 28 FPS in Crysis 3 in one chart, and over 30 in the other... you can't compare the TitanX's performance from one graph to the Fury's in the other graph!!

Generally, the TitanX gets about 8-15% better performance than the Ti.... and as the chart has the FuryX only a bit higher than the Ti, we can conclude that the Titan performs a LITTLE BIT better.... we'll see on the 24th what the real benchmarks say...

"When showing pretty pictures to prove a point, make sure they actually say what you want them to." -- H. Cactus
 
Let's not start any name-calling over things that aren't even out. It's almost as if some of you worked on these GPU designs yourself. No need to get so emotionally charged on this topic - just wait for it to release - choose which benchmarks you want to believe or which ones apply to you - and vote with your wallet. If someone doesn't agree with what you think or believe, then let them believe so - or try to convince them without name-calling, and if you can't, then keep it moving.
 
Generally, the TitanX gets about 8-15% better performance than the Ti...
You are mistaken.
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_980_Ti/31.html
Here you can see that, on average, the Titan X is only 3% faster than the GTX 980 Ti at 1080p and only 4% faster at 1440p and 2160p. And that's with a stock GTX 980 Ti.
http://www.techpowerup.com/reviews/EVGA/GTX_980_Ti_SC_Plus/30.html
And here you can see that an overclocked GTX 980 Ti (which you can buy for less than a stock Titan X) outperforms it across the board. So chances are that, if the Fury X is indeed faster than the GTX 980 Ti, it's just as fast or even faster than the Titan X.
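A quick sanity check on that arithmetic (the 4% figure is the TechPowerUp average quoted above; the 10% overclock gain is an assumed illustrative value, not a measured one):

```python
# If the Titan X averages only ~4% over a stock 980 Ti, then any 980 Ti
# overclock gaining more than ~4% puts it ahead of a stock Titan X.
# Illustrative numbers only.

titan_x_lead = 1.04   # stock Titan X vs stock 980 Ti (quoted average)
oc_gain = 1.10        # assumed 10% gain from an overclocked 980 Ti

oc_ti_vs_titan = oc_gain / titan_x_lead
print(f"Overclocked 980 Ti vs stock Titan X: {oc_ti_vs_titan:.3f}x")
```

Anything above 1.0 means the cheaper overclocked card wins, which is why the 3-4% gap matters so much more than the headline "Titan X is faster" claim.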
 
Well, other reviews put it in a slightly better light... but all agree it's either equal to or slightly behind the Ti... which, as I stated before, leaves the performance crown with the TitanX....

And as for our previous poster - you can't compare an overclocked card with a stock one... UNLESS you can't overclock the stock card!!

Reviews state, however, that the Fury's overclocking potential is limited - so that's another strike against it... as you can overclock both the Ti and the TitanX...
 