AMD will unveil the next Radeon RX 6000 card at March 3 event

Do you know "the majority of us" don't even have an integrated GPU, since are using Ryzen (a quite popular solution in the last years) or F CPUs ?
And NO, I don't know of many good games that runs well on an iGPU.
Said that, I'd like the crypto market to disappear not only for energy waste reasons.
The integrated GPU was used as a point of reference from a performance perspective. That means that anything better will simply increase the amount of games you can play.

If I didn't understand crypto, I would like it to disappear too. Unfortunately it's too valuable a technology to let die or kill.

In any case, there is nothing wrong with waiting to be able to get what you want. Especially for something like a graphics card. It's really a first world problem that people complain about prices being higher than normal. A while back it was RAM. Soon it will be something else. There is always something that is overpriced.
 
The integrated GPU was used as a point of reference from a performance perspective. That means that anything better will simply increase the amount of games you can play.

If I didn't understand crypto, I would like it to disappear too. Unfortunately it's too valuable a technology to let die or kill.

In any case, there is nothing wrong with waiting to be able to get what you want. Especially for something like a graphics card. It's really a first world problem that people complain about prices being higher than normal. A while back it was RAM. Soon it will be something else. There is always something that is overpriced.
I understand crypto very well, unfortunately.
That's the reason I'd like it to die TOMORROW.
 
This is not a negligible drawback, but a showstopper.
RT is already a heavy workload... AMD can't do what Nvidia is doing, and that's a fact (you cannot do something you don't have the hardware to accelerate): just look at the transistor counts.
Ray tracing and AI-based upscaling are two entirely different things, but both AMD and Nvidia's GPUs have the hardware to accelerate everything to do with either of them. The key thing to note is the matter of concurrency: doing multiple things at the same time. In the case of Turing, each SM is limited to graphics and compute or ray tracing or tensor work; in Ampere, it's now graphics and compute; graphics and RT; compute and RT; graphics and tensor; or compute and tensor.

For RDNA 2, each dual compute unit (AMD's equivalent to Nvidia's SM) sits somewhere between Ampere and Turing, as tensor work would be classed as compute - and it handles this pretty well. I should have said that if a SIMD32 is handling tensor work, it won't be doing anything else, rather than the whole CU - there are four SIMD32 units per dual CU, and they're independent of each other (so one can be doing tensor work while the others are doing graphics, for example).

The difference between them all lies in the execution of those workloads - in pure tensor work, Ampere rules the roost, thanks to its dedicated cores, but the actual amount of tensor stuff involved in DLSS isn't particularly big: it adds no more than 10% to the total frame load. It's also done either just before or after post-processing in the rendering pipeline (DLSS 1.0 was after, DLSS 2.0 is before), so the rest of the GPU won't be doing much anyway.

While Ampere's dedicated tensor cores are immensely powerful, they're barely used with DLSS. The performance gains generated by the process are down to rendering at a lower resolution; in fact, DLSS doesn't have to use the tensor cores at all and works perfectly well on just the CUDA cores.
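
As a rough back-of-the-envelope illustration of that point (the numbers below are assumptions for the sake of the example, not measurements - only the ~10% upscaling cost comes from the figure above), almost the whole speed-up comes from the cheaper low-resolution render, with the upscaling pass as a small, separate add-on:

```python
# Toy model of where a DLSS-style speed-up comes from.
# All numbers are illustrative assumptions, not measurements.
native_frame_ms = 20.0    # hypothetical frame time at native resolution
lowres_fraction = 0.5     # assume rendering at the lower resolution costs ~50% of native
upscale_fraction = 0.10   # upscaling pass adds no more than ~10% of the frame load

lowres_render_ms = native_frame_ms * lowres_fraction
upscale_ms = native_frame_ms * upscale_fraction
upscaled_frame_ms = lowres_render_ms + upscale_ms  # the passes run one after another, not concurrently

print(f"native:   {native_frame_ms:.1f} ms")
print(f"upscaled: {upscaled_frame_ms:.1f} ms "
      f"({(1 - upscaled_frame_ms / native_frame_ms) * 100:.0f}% shorter frame time)")
# Nearly all of the gain is the cheaper render; the upscale pass is a small fixed cost.
```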

In theory, the same will be true of any AI-based upscaling system run on an RDNA 2 GPU. However, whereas Nvidia has committed considerable time and resources to producing the DNNs for developers to use in their games, AMD isn't offering much at all. DirectML offers everyone the tools to employ DNNs for upscaling, but that requires the developers to create them in the first place - something that Nvidia has done, but not AMD.
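
For anyone curious what "employing a DNN for upscaling via DirectML" could look like from the developer's side, here is a minimal sketch using ONNX Runtime's DirectML execution provider rather than the raw DirectML API. The model file, tensor layout and 2x scale factor are hypothetical placeholders - this isn't any shipping AMD or Nvidia solution:

```python
# Minimal sketch: run a developer-supplied upscaling DNN through DirectML,
# here via ONNX Runtime's DirectML execution provider (pip install onnxruntime-directml).
# "upscaler.onnx" and the tensor shapes are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("upscaler.onnx", providers=["DmlExecutionProvider"])

# Pretend low-resolution frame: 1 image, RGB, 1920x1080, NCHW layout.
low_res_frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: low_res_frame})  # assumes a single output tensor
high_res_frame = outputs[0]

print(high_res_frame.shape)  # e.g. (1, 3, 2160, 3840) if the network upscales 2x
```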

TL;DR = AMD's hardware is up to scratch; it's the software/developer support that needs more work.
 
I understand crypto very well, unfortunately.
That's the reason I'd like it to die TOMORROW.
You think you understand it... But... The fact that you want it to die at all confirms that you don't truly understand it.

I want a new graphics card too. But at this point I'll be waiting till next year, and I am completely OK with that, because there are enough other things to do in life rather than throwing a tantrum because I can't get what I want right now, or getting mad at the most revolutionary technology since the invention of the internet.
 
We can't comment on the 3060 yet - it seems it's been gimped a lot, given the upgrade the 3080 got. Still, they would sell in droves and actually be a good buy at MSRP - not the value of a 1060, but those days are a few years away. As reviewers said, you wouldn't upgrade from a 2060/2070 (standard or Super) unless you could sell that card for a good price.
 
You think you understand it... But... The fact that you want it to die at all confirms that you don't truly understand it.

I want a new graphics card too. But at this point I'll be waiting till next year, and I am completely OK with that, because there are enough other things to do in life rather than throwing a tantrum because I can't get what I want right now, or getting mad at the most revolutionary technology since the invention of the internet.
Gamers seething that an entire technology should be banned because they can't get the latest EA title to ray trace at 120 FPS has got to be the richest first-world special pleading I've ever heard.
 
You think you understand it... But... The fact that you want it to die at all confirms that you don't truly understand it.

I want a new graphics card too. But at this point I'll be waiting till next year, and I am completely OK with that, because there are enough other things to do in life rather than throwing a tantrum because I can't get what I want right now, or getting mad at the most revolutionary technology since the invention of the internet.

Dude, I have a 3070 and a 3080 (and a 5700 XT... yes, I have three PCs in my house)... I want crypto to disappear for totally different reasons.
 
Ray tracing and AI-based upscaling are two entirely different things, but both AMD and Nvidia's GPUs have the hardware to accelerate everything to do with either of them. The key thing to note is the matter of concurrency: doing multiple things at the same time. In the case of Turing, each SM is limited to graphics and compute or ray tracing or tensor work; in Ampere, it's now graphics and compute; graphics and RT; compute and RT; graphics and tensor; or compute and tensor.

For RDNA 2, each dual compute unit (AMD's equivalent to Nvidia's SM) sits somewhere between Ampere and Turing, as tensor work would be classed as compute - and it handles this pretty well. I should have said that if a SIMD32 is handling tensor work, it won't be doing anything else, rather than the whole CU - there are four SIMD32 units per dual CU, and they're independent of each other (so one can be doing tensor work while the others are doing graphics, for example).

The difference between them all lies in the execution of those workloads - in pure tensor work, Ampere rules the roost, thanks to its dedicated cores, but the actual amount of tensor stuff involved in DLSS isn't particularly big: it adds no more than 10% to the total frame load. It's also done either just before or after post-processing in the rendering pipeline (DLSS 1.0 was after, DLSS 2.0 is before), so the rest of the GPU won't be doing much anyway.

While Ampere's dedicated tensor cores are immensely powerful, they're barely used with DLSS. The performance gains generated by the process are down to rendering at a lower resolution; in fact, DLSS doesn't have to use the tensor cores at all and works perfectly well on just the CUDA cores.

In theory, the same will be true of any AI-based upscaling system run on an RDNA 2 GPU. However, whereas Nvidia has committed considerable time and resources to producing the DNNs for developers to use in their games, AMD isn't offering much at all. DirectML offers everyone the tools to employ DNNs for upscaling, but that requires the developers to create them in the first place - something that Nvidia has done, but not AMD.

TL;DR = AMD's hardware is up to scratch; it's the software/developer support that needs more work.
I don't know why AMD always has to be defended here, no matter what.
I am a big AMD supporter (I'm writing this on a 5800X-based system), but you are in denial.
AI-based upscaling is the means to use RT without (well, not completely without...) the performance hit. AMD just doesn't have the silicon to do things in the same parallel way Nvidia can. Nvidia GPUs are much bigger for that reason.
You can do it another way, I'm not denying that, and I know DirectML is the future and everyone will benefit from it. But AMD is behind in this area, even if RDNA 2 is a very powerful architecture.
 
I agree with others: even though Nvidia GPU prices are obscene, the cards are certainly not too hard to buy, but AMD cards are nigh on non-existent. I got sick of the whole situation and just got an as-new 2080 Super for less than half price. For 1440p gaming it'll do me perfectly. Maybe when RDNA 3 or RTX 4000 ships the whole current farce will have improved dramatically and we can actually buy cards at RRP.
 
So there is the Newegg Shuffle now...
If you want an RTX 3060 or a PS5 or similar, you could try it out.
EVGA has a queue system as well.
I hope there will be a similar system for this new AMD card, but if stock of the upcoming card is still very low, it won't help much.
 
I don't know why AMD always has to be defended here, no matter what.
I am a big AMD supporter (I'm writing this on a 5800X-based system), but you are in denial.
AI-based upscaling is the means to use RT without (well, not completely without...) the performance hit. AMD just doesn't have the silicon to do things in the same parallel way Nvidia can. Nvidia GPUs are much bigger for that reason.
You can do it another way, I'm not denying that, and I know DirectML is the future and everyone will benefit from it. But AMD is behind in this area, even if RDNA 2 is a very powerful architecture.
Not in denial about anything - just pointing out something that your comments suggest you're not appreciating. DLSS, or any other AI-based upscaling, isn't normally done in parallel with anything else in the rendering chain - it's typically a separate stage in its own right. So while the individual processing blocks in Ampere GPUs (1/4 of each SM) can indeed handle tensor work alongside graphics or compute shader routines, and the SIMD32 units in the CUs of RDNA 2 GPUs can't, it's not any kind of benefit/limitation when it comes to the upscaling.

This is because DLSS itself isn't particularly demanding computationally. Nvidia themselves pointed this out in their GA102 whitepaper: on pages 16 and 17, one can see a breakdown of instructions over time for a single frame of Wolfenstein: Youngblood.

Note how for the Turing-based 2080 Super, the use of DLSS (as a separate process at the end of the frame) reduced the frame time from 20 ms to 12 ms (a reduction of 40%), due to the use of the lower resolution. For the Ampere-based 3080, the drop was from 12 ms to 6.7 ms, a reduction of 44%. So despite everything being concurrent in this architecture, it's only giving a 4 percentage-point benefit, and even that value is affected by the fact that the 3080 has 42% more SMs than the 2080 Super (there's only a 5% difference in boost clock, favouring the 2080).
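
Spelling out the arithmetic behind those figures (the same numbers as above, just reworked as percentages):

```python
# Frame-time reductions from the GA102 whitepaper figures quoted above.
def reduction(before_ms: float, after_ms: float) -> float:
    return (1 - after_ms / before_ms) * 100

print(f"2080 Super: 20 ms -> 12 ms  = {reduction(20, 12):.0f}% shorter frame time")
print(f"3080:       12 ms -> 6.7 ms = {reduction(12, 6.7):.0f}% shorter frame time")
# Ampere's concurrency accounts for, at most, the ~4 percentage-point gap between the two,
# and even that is muddied by the 3080 having 42% more SMs than the 2080 Super.
```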

So one can see that the DLSS routine itself isn't a big task for Nvidia GPUs, and an equivalent for AMD is likely to be the same (because for tensor work, it'd be like a Turing processor). Where it's notably behind is, of course, ray tracing - but that's a separate thing altogether. :)
 
Must depend on the country. In Germany, RX 6000 series and RTX 3090 were available, but at ridiculous prices, so no thanks.

I have a Radeon 9600 to sell you. Judging from the number, it's a few generations ahead of RX 6000. I'll give you a good price.
 
I have a Radeon 9600 to sell you. Judging from the number, it's a few generations ahead of RX 6000. I'll give you a good price.
Thanks, but I'll try and find my GeForce 4 4800 and use that instead. Judging from the number, it will offer superior RT performance - the 4 indicates that it'll do it @ 4K just fine (8K RT with DLSS).

So I'm fine, but thanks for the offer 👍
 
TPU's story on the 6700 XT says the rumored price is $250 and that the card is meant to compete with the 3060 Ti.

That rumored price isn't certain to be for the 6700 XT; there's a very good chance the rumored $250 is for the plain 6700, and I'd venture to guess you'd be looking at around $350 for a 6700 XT.

Then again, those would be MSRP prices and we'd probably see at least a 50% markup from there simply due to how everything else is going right now with GPU pricing.

I see folks posting "possible" MSRP pricing for the 6700 XT of around $479, and supposedly "better" availability.

Yeah, better availability my as$. If the MSRP is $479, cards will be hitting the shelves closer to $600-650. They'll get snatched up right away and hit eBay for, I'm guessing, at least $1,200.
 
I see folks posting "possible" MSRP pricing for the 6700 XT of around $479, and supposedly "better" availability.

Yeah, better availability my as$. If the MSRP is $479, cards will be hitting the shelves closer to $600-650. They'll get snatched up right away and hit eBay for, I'm guessing, at least $1,200.
As is sadly common with graphics cards from both camps right now, try to get one on launch day, ideally directly from AMD. If that does not work, you're SOL unless you are willing to overpay.

RedGamingTech said in one of their recent videos that, according to their information, AMD is indeed providing AIBs with kits that allow them to offer cards at MSRP (like they told HUB), but the AIBs still add a ridiculous markup.

Everyone's blaming Nvidia and AMD for the current high prices, but IMO it's still not clear how much AIBs and retailers are marking prices up.
 