AMD reveals the Radeon R9 290X, their next-generation GPU

So I read that AMD will not disclose the actual price or the clock speeds when you pre-order it...instead you will need to put down a deposit and hope for the best?

What kind of way is that to sell a GPU? :O I'm buying it anyway, but hey, it just seems fishy.
 
That's news to me. Where are you hearing that? I was under the impression that the NDA expires on 3rd October - the day pre-orders open.

Not sure that offering an unspecified part doesn't run afoul of many countries' fair-trading and consumer laws. Maybe the pre-order offer isn't open in those?
 
I read about it on Legit Reviews. They posted that info yesterday, so it's not confirmed or anything, but this is what it said:

AMD will be releasing a grand total of 8,000 AMD Radeon R9 290X Battlefield 4 Edition cards. This means globally and not just regionally! We also learned that the card's specifications and the final price will not be disclosed when the pre-order begins. This means that you'll have to put down a deposit without knowing the price or clock speeds of the card that you will be purchasing. This is very interesting and something we've never seen done before in recent times. To top that off, only certain add-in board partners will be offering it.
Read more at http://www.legitreviews.com/amd-radeon-r9-290x-battlefield-4-edition-8000-specs-pre-order_125224#VOj0rzCyJUTi2CFf.99

Again, this is not confirmed info, but it would be something new indeed. Would you purchase a card without knowing the price or clock speed? :D
 
Would you purchase a card without knowing the price or clock speed? :D
Short answer, No.
Long answer, Hell no.

My buying parameters are performance (versus what I presently have), price, overclocking headroom, and performance gained from the overclock...then a bunch of other stuff headed by vendor and noise.

Taking any company - let alone a tech company - on faith? No. Some people with short memories will sign up, no doubt, and AMD will sell the 8,000 copies, but I don't see anything at all to be gained by blindly trusting a multi-billion-dollar company...especially when Battlefield 4 will likely be part of a regular game bundle once the card hits retail proper.

At the moment, you have a single Fire Strike chart (showing an 18% increase over the HD 7970) and this slide telling you that the card is capable of "over 5 TFLOPS" of FP32 performance. Presumably, if it were 5.5 or above, AMD would have said so, which leaves 5.1-5.4 as a probable range versus Tahiti's 4.3 (an 18.6%-25.6% improvement) - not a hell of a lot to go on, is it?
[image: 32.jpg]
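
If you want to sanity-check that range, here's a minimal sketch in Python; the 5.1-5.4 and 4.3 TFLOPS inputs are the assumed/quoted figures above, not confirmed specs:

```python
# Implied uplift from "over 5 TFLOPS" FP32 vs. Tahiti's 4.3 TFLOPS.
# All inputs are the rumoured/quoted figures above, not confirmed specs.
tahiti_tflops = 4.3
for r9_tflops in (5.1, 5.4):
    gain = (r9_tflops / tahiti_tflops - 1) * 100
    print(f"{r9_tflops:.1f} TFLOPS -> +{gain:.1f}% over Tahiti")
# 5.1 TFLOPS -> +18.6% over Tahiti
# 5.4 TFLOPS -> +25.6% over Tahiti
```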
 
But...but...it's so shiny...and if it's not more than $600 it wouldn't be a bad deal...and HEY, you get BF4...woo... :p

I want a card better than the R9 280X; which card could you recommend?
 
But...but...it's so shiny...and if it's not more than $600 it wouldn't be a bad deal...and HEY, you get BF4...woo... :p
That's a big "if". Maybe the BF4 SpecialSuper Edition will come with a mousepad...or a poster...or fake dog tags (my fave!). Of course, AMD might only charge you another $50 for all that free stuff - who can say. Somehow I don't see the BF4 edition being cheaper than the regular reference version. If the story is true (and I'm not sold on that), AMD will have no problem scaring up 8,000 early adopters in any case. The ultimate test of loyalty.
I want a card better than the R9 280X; which card could you recommend?
Any vendor-overclocked HD 7970GE and up.
You should start a thread if you're serious. Include the games and GPGPU apps that you use or are planning on using, your native screen resolution, and where you would be buying from (geographic area).
 
Hah, aye, a special super edition would be cool! Well, I kinda hope it's not true, but I would probably buy it anyway :confused:

I am actually serious. I started a thread a month and a half ago about building a gaming rig; I went back and forth between the 7970 and the 770, and in the end decided to wait for AMD's GPUs. I have roughly $600 to put toward a GPU, so if the R9 290X actually lands there, I will go for it :)

Perhaps the overclocked HD 7970GE could be an idea, but I was advised not to buy something factory-overclocked and instead do that myself. Since I would then have to upgrade my setup for overclocking, though, it would kind of break my budget. Maybe you can have a look at the thread and give me some opinions? I don't really want to fill this thread with posts about my setup :)

Just go to my profile and check out my thread; you might want to jump to the very end of it to save yourself some unnecessary reading :)
 
Perhaps the overclocked HD 7970GE could be an idea, but I was advised not to buy something factory-overclocked and instead do that myself.
That pretty much only applies when the factory-overclocked cards are more expensive than the stock-clocked versions. Generally there isn't much binning (selecting better GPUs) going into the OC'ed cards these days, so if the reference card is cheaper then it is the better deal.
HOWEVAH...the 7970GE is pretty much at the end of its retail life, and you would be hard pressed to find a reference card much cheaper than an OC'ed one in a lot of markets - with the exception of the crappy voltage-locked ones, which are pretty well known/notorious. So if the vendor special is near enough the same price as the reference card, it is much the better deal, more so if the card features a proprietary design with a beefed-up power delivery section.
 
My buying parameters are performance (versus what I presently have), price, overclocking headroom, and performance gained from the overclock...then a bunch of other stuff headed by vendor and noise.

At the moment, you have a single Fire Strike chart (showing an 18% increase over the HD 7970) and this slide telling you that the card is capable of "over 5 TFLOPS" of FP32 performance. Presumably, if it were 5.5 or above, AMD would have said so, which leaves 5.1-5.4 as a probable range versus Tahiti's 4.3 (an 18.6%-25.6% improvement) - not a hell of a lot to go on, is it?
[image: 32.jpg]

^ For someone as knowledgeable as you, and given what you just said above about performance gained from overclocking, your view of the R9 290X is seriously skewed in the wrong direction. Here, let me do the math for you of how it's going to go down in the real world:

HD 7970GE vs. R9 290X, both overclocked to 1150MHz GPU on air:

Pixel fillrate (32 ROPs vs. 44 ROPs) = 37.5% advantage
Compute/shader performance = 37.5% advantage
Texture fillrate (128 TMUs vs. 176 TMUs) = 37.5% advantage
Memory bandwidth (7000MHz vs. 6000MHz) = 14% advantage

Now if you think a card with these specs will only beat the HD 7970GE by 14-17%, you are strongly mistaken. The R9 290X out of the box will beat the reference GTX 780, and with overclocking it will beat the HD 7970GE by 30-35%, just like a GTX 780 OC beats the HD 7970GE by at least that much.

But go ahead and stick to your Fire Strike scores. Even if the card comes clocked at 900MHz, giving us 5.07 TFLOPS, you realize enthusiasts will attempt to overclock it, and then you have the full 2816 SPs, 176 TMUs, 44 ROPs and 512-bit bus to play with. With such a wide chip, every percent of overclock buys more performance over the HD 7970GE, because the chip has more units of everything. You'd think this would be common sense for someone like you.
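
For what it's worth, here's a quick Python sketch of the ratios those unit counts imply (all the 290X figures are rumoured at this point); note that the memory comparison comes out differently depending on which card you credit with the 7000MHz:

```python
# Ratios implied by the quoted unit counts (290X figures are rumoured).
specs = {
    "ROPs": (32, 44),
    "Stream processors": (2048, 2816),
    "TMUs": (128, 176),
}
for name, (hd7970ge, r9_290x) in specs.items():
    print(f"{name}: +{(r9_290x / hd7970ge - 1) * 100:.1f}%")  # +37.5% each
# The memory figure depends on the direction of the comparison:
print(f"{(7000 / 6000 - 1) * 100:+.1f}%")  # +16.7% if the 290X runs 7000MHz
print(f"{(6000 / 7000 - 1) * 100:+.1f}%")  # -14.3% if the 7970GE does
```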
 
Here, let me do the math for you of how it's going to go down in the real world:
HD 7970GE vs. R9 290X, both overclocked to 1150MHz GPU on air
Since you're going with the condescension motif...

We'll get to your math shortly ;)
But firstly, comprehension...
Now if you think a card with these specs will only beat the HD 7970GE by 14-17%, you are strongly mistaken
Basic mistake, I think. What I noted were the results from the (sparse) information supplied by AMD themselves. It is a straight extrapolation of the facts...unless AMD are sandbagging, in which case it is a straight extrapolation of the purported facts.
IF you want to know what my opinion is, then I have already stated it on these forums. I reserved this thread for dissecting the information provided. I didn't see the need to add a personal opinion when I've already voiced it elsewhere.
"Best case scenario is AMD bite the bullet and opt for a big die to put some pressure on the GTX 780/Titan" - six weeks ago
"At this stage it looks like the 780 and 290X would be roughly matched, kind of like the HD 7970 and GTX 680/770 scenario..." - a few days ago
I also noted that AMD's drivers are likely at the beta stage for this card, so benchmarks should allow for that. Likewise, you can only extrapolate so much from a single reference point.
IF the GTX 780 and 290X are roughly matched (a view shared by someone closer to the action than me), then that indicates the 290X is around 20% faster than the 7970GE, given the number of Gaming Evolved titles in most benchmark suites. I'll either be wrong, close, or right - who knows. But you know what the weird thing is? What you're proposing and what I've already posted are basically the same thing.

Having said that, on the balance of your post I'd have to say..."Oh FFS, who's talking about overclocking?"
Do the AMD slides showing comparative performance between the 7970GE (280X) and the 290X - you know, the only ones published, the ones people are basing their estimates on - do they mention that the comparison is for overclocked cards?

And that is without establishing what kind of overclock the R9 290X is actually capable of. There's every possibility the card clocks well - unless AMD have instituted an input-power hard limit like the HD 7990's. Do you know that they haven't? Are AMD using the Elpida memory ICs because Samsung and Hynix are MIA? Who can say? I don't have the info - not even some dodgy PPS - so I'm not about to present a guess as fact.

As an aside, I'd also point out that straight percentage increases in specs don't necessarily translate into the same performance increase. Take two GCN chips for the sake of argument, Pitcairn XT and Tahiti XT (see the sketch after the list):
Core/shader advantage to Tahiti XT: +60% (2048 vs. 1280)
TMU advantage to Tahiti XT: +60% (128 vs. 80)
Memory speed advantage to Tahiti XT: +14.6%
Core speed disadvantage: -7.5%
Actual performance advantage: ~30% to 37% at 1920 and 2560...and all because of the sliding scale that is GPU feature set and design efficiency.
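
A minimal sketch of that point in Python, using the figures above (the ~30-37% measured range is as quoted, not re-benchmarked here):

```python
# Paper-spec scaling vs. measured gains, Pitcairn XT -> Tahiti XT.
shader_ratio = 2048 / 1280        # +60% shaders (TMUs scale the same)
clock_ratio = 925 / 1000          # -7.5% core clock
naive_gain = (shader_ratio * clock_ratio - 1) * 100
print(f"naive shaders x clock scaling: +{naive_gain:.0f}%")  # +48%
print("measured: ~+30% to +37%")  # the shortfall is design/feed efficiency
```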

BTW: either your math is wrong or you're living in cloud-cuckoo-land.
Either you're attributing the 290X with 7Gbps memory (some feat, considering Samsung modules are out of supply) and a 0% memory overclock for the 7970GE (they do a little better than that), or you have problems with division.
Memory bandwidth (7000MHz vs. 6000MHz) = 14% advantage
The actual answer is -14.3% (that's minus...a deficit).
I'd also note that 1150 is a conservative overclock for most of the AIB 7970GEs in the channel. This is somewhat more representative of end-user air cooling:
[image: gpuzhis7970geoc.png]


If you're interested, Brent Justice is doing an overclock-versus-overclock review of the 290X. You're probably hopeful that he includes a GTX 780 and/or Titan using the AB NCP4206 relaxed-voltage tweak, since you're so fired up about overclock potential and the tweak is in widespread use by owners. If a Titan can make 1424MHz core/7020MHz memory using a 561mm² GPU, the OC potential of a Hawaii GPU 21.9% smaller must be absolutely freakish if we're using your metric.

NDA apparently lifts 15th October. Save your energy for the review threads ;)
 
Not to add fuel to the fire or anything, but I was intrigued, so I ran 3DMark on my computer, which has a 780, and got an overall score of 9832 - just shy of 10K.

Don't get me wrong, I'm sure these new AMD cards are going to be optimized loads, yada yada, but if the 290X is reaching just over 8000, that's still a massive gap. I know benchmarks like 3DMark can be irrelevant because games don't use that engine, but I'd personally expect something like a 500-point difference, not an 1800-point difference. Am I missing something?
 
Fire Strike is a benchmark that is both GPU- and CPU-sensitive. It's quite easy to get 1000+ point fluctuations depending on system specification, so any comparison should be done with a similar component fit-out.
Is 9832 the overall score (the one in the large orange font)?
 
Good point - I haven't looked at the scores from an all-AMD system; I'll check it out. And yeah, that was my overall score.
 
I was thinking the same thing. However, they don't state what settings they used: if it's 4K on extreme settings, those are awesome scores; if it's the default 1080p, though, they're not very impressive.

The slide says it was run on the Performance preset.

[image: 264a.jpg]


...and my overclocked $200 7950 scores over 8000:

http://www.3dmark.com/3dm/1299529

Unless they are talking about the overall score, including whichever CPU they happened to use, and not just the graphics score - but that would be crazy, because they would have to use a CPU similar to or slower than my i5 for the R9 to pull the score back up to 8000. Surely, if they were going to include the CPU score - which would go against usual practice and common sense - they would use their fastest CPU to get the score as high as possible.

I really can't figure it out. Yet I've only seen three people comment on this anywhere, and I'm one of them! (EDIT: until I read a few more posts here, but that's it.)

They are also mentioning the theoretical compute power a lot - 5.8 TFLOPS - which basically means nothing, as it's simply the number of stream processors x clock x 2. The GTX 680 only has around 1500 processors compared to the 7970's 2000+, yet it is arguably better with fewer cores; it's the way the cores are used more efficiently.
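
That formula is easy to demonstrate. A quick Python sketch with rough public figures - the 290X entry assumes the rumoured 2816 shaders at 1GHz, which is not confirmed:

```python
# Theoretical FP32 peak = shader count x 2 ops/clock (FMA) x clock speed.
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(f"HD 7970 GE: {peak_tflops(2048, 1050):.1f} TFLOPS")  # ~4.3
print(f"GTX 680:    {peak_tflops(1536, 1006):.1f} TFLOPS")  # ~3.1
print(f"Titan:      {peak_tflops(2688, 837):.1f} TFLOPS")   # ~4.5 (base clock)
print(f"290X @1GHz: {peak_tflops(2816, 1000):.1f} TFLOPS")  # ~5.6 (rumoured)
```

The GTX 680's much lower theoretical peak against the 7970, despite comparable gaming performance, is exactly the point above.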

Let's say they are equal in performance for a minute: since the Titan has 2688 processors, using the same 1500:2000 ratio, the R9 290X would need roughly 3500 processors to match it - unless they've significantly increased efficiency, which I doubt, as it's AMD.

Anyway, time will tell. I sincerely hope they are better than my 7950s, and at a decent price.
 
You can't compare stream processors and CUDA cores on a core-for-core count; they are entirely different architectures and designs in general, and are used in different ways to provide similar results.
 
You're probably hopeful that he includes a GTX 780 and/or Titan using the AB NCP4206 relaxed-voltage tweak, since you're so fired up about overclock potential...
NDA apparently lifts 15th October. Save your energy for the review threads ;)
Ahh, the classic "it's not that fast stock, but if you overclock it" and "I own cards from both sides" excuses. (On the internet, we call that defeat.) Watching you dish out lessons and seeing the knee-jerk replies is truly something to behold. God bless you, dividebyzero; let the truth shine upon those unwilling to accept it.
Here, I can help some.
AMD runs hot, yo! Radeons skip and stutter, yo! AMD drivers are going on 5 years of badness, yo! Nvidia has the fastest single GPU, yo! More people on Steam game with Nvidia GPUs, yo!
 
You can't compare stream processors and CUDA cores on a core-for-core count; they are entirely different architectures and designs in general, and are used in different ways to provide similar results.

I know, that's my point. But AMD's headline figure is the 5 TFLOPS of compute power (greater than the Titan's), purely because it has more stream processors than the Titan has CUDA cores.

It's as useful as saying an FX-8350 has 8 x 4000 = 32,000 "compute power" while an i5 only has 4 x 3400 = 13,600.

It's meaningless, but it's their headline figure, along with the Fire Strike score of 8000. I mean, seriously, these are the two numbers their marketing machine has decided to give us to entice us to spend $700 (or probably $850 in Australia) on the new R9 290X. I'm sure it's not the only benchmark they've run; they will have done all of them and decided this one would impress us the most and really show off the performance of the product.

Yet most people who have bothered to download and run the test will know that a mildly overclocked $200 7950 will score 8200, and a 7970 even more - around 9000. A Titan scores 12,500 and a $650 GTX 780 scores 11,500.

I just can't see why we're supposed to be impressed by a score of 8000.
 
OK, a couple of things with that:
Yes, but again, they are talking about the overall compute power of their GPU; with their architecture, they can deliver a higher amount of compute than the Titan. When it comes out, of course, we can see. It has nothing to do with the stream-processor-to-CUDA-core ratio, because CUDA cores aren't counted the same way stream processors are.

Fire Strike poses a problem when used for GPU benchmarking because of how it works; however, most people look at Fire Strike scores, so it's just a way for AMD to show a comparison against previous GPUs. The biggest problem is that Fire Strike depends on the CPU as well, so a system can bump its score significantly higher - as you showed with your 7950 - because the CPU points are included in the overall Fire Strike score (that's just how Fire Strike does it).
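
To put a rough number on how much the CPU moves the needle: the overall Fire Strike result is a weighted harmonic mean of the sub-scores. The ~75/15/10 weighting in this Python sketch is from memory of Futuremark's technical guide, so treat the exact weights as an assumption:

```python
# Overall Fire Strike score as a weighted harmonic mean of sub-scores.
# Weights (~75% graphics, 15% physics, 10% combined) are assumed, not verified.
def fire_strike_overall(graphics, physics, combined, w=(0.75, 0.15, 0.10)):
    return sum(w) / (w[0] / graphics + w[1] / physics + w[2] / combined)

# Same graphics score, slower vs. faster CPU (physics/combined shift):
print(round(fire_strike_overall(9000, 7000, 4500)))   # -> 7875
print(round(fire_strike_overall(9000, 12000, 5500)))  # -> 8771
```

Roughly 900 points of swing from the CPU side alone, with the graphics score held constant.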

There's a reason I personally dislike Fire Strike as a GPU performance scale, and that's it right there. Sadly, I don't find many benchmarks I like, because they either don't work well with SLI/CFX or they just don't give good results. The only one I've ever really liked even a bit is MSI Kombustor, but that's just an opinion and shouldn't matter.
 
I know, that's my point. But AMD's headline figure is the 5 TFLOPS of compute power (greater than the Titan's), purely because it has more stream processors than the Titan has CUDA cores.
Yep, that's exactly what it is - a headline, a bullet point, a useful round number to fill out a presentation...and of course, it doesn't mean squat.
Firstly, actual floating-point performance rarely rises to 90% of the theoretical peak.
Secondly, it's all about coding - how much time, resources, or inclination the IHV or software vendor is willing to expend on optimizing for the architecture. Nvidia often deliberately hobbles (or at least doesn't optimize) OpenCL app performance where there is a CUDA alternative, and AMD's professional driver team is basically non-existent, since they rely on developers to optimize their code. Sometimes that works, sometimes not - devs seldom have the same force of will that the manufacturer has. Case in point: the FirePro W8000 has a 48% single-precision compute advantage over the Quadro K5000. You'd be hard pressed to find it in a lot of apps:
[image: specviewperf-13.png]


It's meaningless, but it's their headline figure, along with the Fire Strike score of 8000. I mean, seriously, these are the two numbers their marketing machine has decided to give us to entice us to spend $700 (or probably $850 in Australia) on the new R9 290X.
Probably because AMD's PR people have little technical prowess. The scores in themselves are meaningless; the only thing you can take from the slide is the comparative difference between the scores on it. If you know what one or more of the actual scores is, you can extrapolate the others from that information. The 6800 score of the 280X could be equated with this 7341 score, in which case the 290X would score 8442 in comparison.
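
A minimal Python sketch of that rescaling - the 6800 and 7341 anchors are the figures quoted above, while the 290X slide value is back-solved from the 8442 estimate and stands in for whatever AMD's slide actually shows:

```python
# Rescale another bar on the same slide via one known real-world anchor.
def rescale(other_slide, known_slide=6800, known_real=7341):
    return other_slide * known_real / known_slide

print(round(rescale(7820)))  # -> 8442 (7820 is the implied 290X slide bar)
```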

That all presupposes that:
1. The slide is legit and AMD aren't sandbagging.
2. The drivers for the 290X were relatively optimized when the comparison was made. The 280X is about as optimized as it's going to get, you would think, almost two years after its initial launch as the 7970.
 
Thread bump for an early taste from PC Online.
[image: r9-290xearlybenchmarks.jpg]


Good performance, although it looks to come at a bit of a price in power usage. Hopefully the stock cooler is more effective than some of the recent reference blower designs.
 
That's a very slim win over the 780 - that makes me happy xD
In fact, it's not really that impressive; the Titan is still faster, and this thing eats more power. I guess if it's priced right (I'll assume it will be), then at least the prices for the 780 and maybe the Titan will come down a notch?

I do wonder what Mantle will do for Battlefield 4, though.
 
It seems as though, comparing apples to apples, the Titan is maybe slightly ahead as of right now; I guess we will have to see if drivers play a part in upping the 290X's performance. I'm curious about the 1000MHz GPU core that's showing, though, because the pre-orders for the BF4 edition of the card list 800MHz with a 1000MHz boost.
Personally, power has never been much of an issue for me, but it's still an important area, and some people have high electric bills to worry about.

As for Mantle, I'm just as mystified as you are. I mean, from what I've read and understood, this could range from a whole whoop-de-do 1-5 FPS difference to a 10-20 FPS difference. But it's all still just hot air till we actually see something, lol.
 
It's not just the bill, though. An extra 80 watts is a lot: the heat this will produce, the extra noise from the coolers, the reduced overclocking headroom... just goddamn, that's a lot more power than I expected! I was hoping for near the same efficiency as a 780, not 660 Ti SLI territory!
 