AMD Radeon RX 6x50 XT lineup specifications confirmed, plus some early benchmarks

nanoguy

In brief: AMD's refreshed RX 6x50 XT cards won't blow your socks off, but they could turn out to be decent upgrades for what are now aging RDNA 2 designs. Early benchmarks suggest they'll hold their own against their Nvidia counterparts, but pricing is what will ultimately determine their success.

We're less than a week away from the expected release of AMD's refreshed RDNA 2 graphics cards. As we inch closer to the actual launch, more details are popping up online, particularly in terms of power consumption.

Several variants of the upcoming Radeon RX 6950 XT have been confirmed to have a total board power (TBP) of 335 watts — a 35-watt boost over the RX 6900 XT. Previous rumors pointed to a TBP of 350 watts, but it appears that AMD opted for a more modest increase in power budget compared to Nvidia's 100-watt boost for the GeForce RTX 3090 Ti.

The RX 6950 XT's TBP is also just five watts shy of the RX 6900 XT Liquid Edition's power rating. The game clock has been increased to 2,100 MHz, and the GPU will boost to 2,310 MHz if the power budget and thermals allow it. As for the memory, we already know the new card will ship with 18 Gbps memory that translates into an effective memory bandwidth of 1,728 gigabytes per second.
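
For a quick back-of-the-envelope check (our arithmetic, and it assumes the card keeps the RX 6900 XT's 256-bit GDDR6 bus), the raw figure works out to 576 GB/s, and the quoted 1,728 GB/s "effective" number is exactly three times that:

data_rate_gbps = 18                            # per-pin GDDR6 data rate in Gbps
bus_width_bits = 256                           # assumed 256-bit bus, same as the RX 6900 XT
raw_bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(raw_bandwidth_gbs)                       # 576.0 GB/s of raw GDDR6 bandwidth
print(raw_bandwidth_gbs * 3)                   # 1728.0 GB/s, matching the quoted effective figure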

Interestingly, early benchmarks suggest AMD's updated RDNA 2 flagship will not only be faster than the RTX 3090, but also the RTX 3090 Ti. The results look even more impressive when you consider the latter card sports a massive overclock and a much larger power budget, not to mention 21 Gbps memory and more CUDA cores than the RTX 3090.

The tests in question were performed using the 3DMark Time Spy benchmark on a system equipped with a Ryzen 7 5800X3D CPU and DDR4-3600 memory. Using pre-release drivers and the Time Spy Performance preset, the RX 6950 XT was 6.5 percent faster than the RTX 3090 Ti at 1440p, with a graphics score of 22,209 points.
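
As a rough sanity check (our arithmetic, not part of the leak), that 6.5 percent lead implies the RTX 3090 Ti result it was measured against landed somewhere around 20,850 graphics points:

rx_6950_xt_graphics = 22_209
implied_rtx_3090_ti = rx_6950_xt_graphics / 1.065   # back out the 6.5 percent lead
print(round(implied_rtx_3090_ti))                   # roughly 20,853 graphics points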

As you'd expect, AMD's new flagship isn't able to challenge Nvidia's best Ampere card when it comes to ray tracing. The RX 6950 XT can get close to an RTX 3080 in the 3DMark Port Royal test, but the RTX 3090 Ti is around 50 percent faster.

The RX 6750 XT will come with a 250-watt TBP similar to the RX 6800, and leaked benchmarks suggest it will rival Nvidia's RTX 3070 in terms of rasterization performance. The ray tracing performance is more modest than that of the RTX 3060 Ti, but that's not a huge surprise. Spec-wise, the new card will supposedly sport a game clock of 2,495 MHz and a boost clock of 2,600 MHz, along with 18 Gbps memory.

AMD has given the RX 6650 XT a 20-watt higher TBP than the RX 6600 XT, which allowed it to raise the game and boost clocks to 2,410 MHz and 2,635 MHz, respectively. That makes it 11 percent faster in the 3DMark Time Spy benchmark, but it will be interesting to see how well that translates into actual gaming performance.

Pricing is still an unknown, but we're hoping to see Team Red undercut similar offerings from Nvidia, as we're talking about aging RDNA 2 designs that don't shine in the ray tracing department. Curiously, now that prices for the RX 6000 series are reasonably close to MSRP, they're starting to show up more frequently in the Steam hardware survey.

If anything, this suggests AMD could make some gains in the mainstream and low-end segments if it can put the budget brand hat back on. The refreshed RDNA 2 cards are expected to land on May 10, so it won't be long before independent reviews bring gaming performance and pricing into focus.


 
Seriously, if prices could stay normal with these I could totally invest in a 6950 XT. I try to get 5 years out of a setup, so I'm fine as long as I can keep the cost under $400/yr. But, jeez, I'm really going to have to do a full system upgrade if I do.
 
Well these are likely really fast cards. In older games with simple feature sets.
You mean feature sets that end up having a free, open-source alternative made by AMD that eventually gets implemented because it's easier for developers to use? Oh, is that G-Sync I hear calling? Because it wants to know why nVidia abandoned it at a truck stop with a fake ID and a cassette player.
 
You mean feature sets that end up having a free, open-source alternative made by AMD that eventually gets implemented because it's easier for developers to use? Oh, is that G-Sync I hear calling? Because it wants to know why nVidia abandoned it at a truck stop with a fake ID and a cassette player.
Hmm, no nothing like that. G-Sync doesn’t require explicit support in games. Did you not know that?
 
The Radeon 6000 series is not really a match for the Nvidia 30xx parts. Sure, at frames per dollar they compete when you strip games down to their bare bones, but in terms of features and driver support the GeForce solutions reign supreme.
 
Hmm, no nothing like that. G-Sync doesn’t require explicit support in games. Did you not know that?
It used to require a dedicated chip, but nVidia has now turned it into FreeSync. All it is now is a badge that display manufacturers PAY to put on their monitor. And often it goes the other way around, when nVidia PAYS monitor manufacturers to not let AMD cards use variable refresh rate tech on their displays.

nVidia HAD raytracing, but AMD's version is now easier to implement. nVidia HAD upscaling tech, but AMD's is now easier to implement. So if you want to talk about legacy gaming, that's going to be nVidia in a few years. Which, it might not be, because *surprise*, AMD's tech can be implemented in a simple update and not have the engine built around it.
 
It used to require a dedicated chip, but nVidia has now turned it into FreeSync. All it is now is a badge that display manufacturers PAY to put on their monitor. And often it goes the other way around, when nVidia PAYS monitor manufacturers to not let AMD cards use variable refresh rate tech on their displays.
Ok, nice word salad and a hot take there. But still not a game feature.

nVidia HAD raytracing, but AMD's version is now easier to implement.
What exactly is AMD’s version of ray tracing? And how is it easier to implement?
You must mean something other than DXR or VulkanRT. I mean, everybody knows AMD cards are not competitive there, right?

nVidia HAD upscaling tech, but AMD's is now easier to implement.
I guess you mean FSR? It doesn’t have a temporal component so it is of course barely comparable. And runs on Nvidia cards. I mean, if you’d want to.

So if you want to talk about legacy gaming, that's going to be nVidia in a few years. Which, it might not be, because *surprise*, AMD's tech can be implemented in a simple update and not have the engine built around it.
I wouldn’t care to guess what you mean by that. Sounds like hand waving.
 
I mean, cool, they finally got the PREVIOUS generation's flagship beat. But next gen NV comes out this year...
What, space heaters just in time for winter? And who cares who has the best card if it costs as much as a down payment on a new car? I bet there isn't a single person on the forums who actually owns a 3090 Ti, and the only person who says they own a 3090 (Quantum, who says he owns 3) is probably lying.

Ok, nice word salad and a hot take there. But still not a game feature.


What exactly is AMD’s version of ray tracing? And how is it easier to implement?
You must mean something other than DXR or VulkanRT. I mean, everybody knows AMD cards are not competitive there, right?


I guess you mean FSR? It doesn’t have a temporal component so it is of course barely comparable. And runs on Nvidia cards. I mean, if you’d want to.


I wouldn’t care to guess what you mean by that. Sounds like hand waving.
nVidia tried to make everything about AI and it's just not working. Their cards are great for AI research, I'll give them that, but in everything else they aren't cost competitive or performance-per-watt competitive. And all of the MAYBE 10 games that use nVidia proprietary tech are going to be irrelevant in a few years.
 
nVidia tried to make everything about AI and it's just not working. Their cards are great for AI research, I'll give them that, but in everything else they aren't cost competitive or performance-per-watt competitive. And all of the MAYBE 10 games that use nVidia proprietary tech are going to be irrelevant in a few years.
More games using higher resolutions, ray traced reflections, shadows, caustics and/or global illumination almost every week. AMD better step up, they’re getting left behind again.

But you mentioned AMD’s ray tracing. And how it was different and easier to implement. Could you elaborate?
 
More games using higher resolutions, ray traced reflections, shadows, caustics and/or global illumination almost every week. AMD better step up, they’re getting left behind again.
Higher resolutions? nVidia couldn't even be bothered to put more than 12 GB of VRAM on their 3080 Ti and they're charging $1200 for it. I also don't know if you've seen the new Unreal Engine demo, but ray tracing is going to be irrelevant soon. Ray tracing was a cool gimmick for like 4 years that no one could run anyway, DLSS or not. If I'm paying $1000+ for a graphics card I don't want to play at 1080p with DLSS and still get 40 FPS with my "gsync"
 
Higher resolutions? nVidia couldn't even be bothered to put more than 12 GB of VRAM on their 3080 Ti and they're charging $1200 for it. I also don't know if you've seen the new Unreal Engine demo, but ray tracing is going to be irrelevant soon.
I did see it. Did you? Did you, really? Because it’s really funny that you should mention it! After all, it did such a great job demonstrating how vastly superior the hardware-assisted Lumen ray tracing is over the software-only version.

What it also showed clearly is that AMD’s hardware ray tracing support is still very much in its infancy, with a broken Lumen implementation that is outright missing certain reflections and light bounces, and being way slower even then.

Ray tracing was a cool gimmick for like 4 years that no one could run anyway, DLSS or not. If I'm paying $1000+ for a graphics card I don't want to play at 1080p with DLSS and still get 40 FPS
You’ll probably want to avoid AMD cards then.

How’s about that explanation how ray tracing is easier to implement on AMD cards?
 
You’ll probably want to avoid AMD cards then.

How’s about that explanation how AMD is easier to implement on AMD cards?
I mean, AMD is already implemented on AMD cards, they're AMD cards.

I did see it. Did you? Did you, really? Because it’s really funny that you should mention it! After all, it did such a great job demonstrating how vastly superior the hardware-assisted Lumen ray tracing is over the software-only version.

What it also showed clearly is that AMD’s hardware ray tracing support is still very much in its infancy, with a broken Lumen implementation that is outright missing certain reflections and light bounces, and being way slower even then.
You mean like how you still can't game with raytracing on as we're heading into 8k gaming? Most people turn it on, say "Ooooo, pretty" and then turn it off because they'd rather have higher frames. I didn't spend $10,000 on an 8k display so I could look at a slideshow.
 
I mean, AMD is already implemented on AMD cards, they're AMD cards.


You mean like how you still can't game with raytracing on as we're heading into 8k gaming? Most people turn it on, say "Ooooo, pretty" and then turn it off because they'd rather have higher frames. I didn't spend $10,000 on an 8k display so I could look at a slideshow.
Oh cool, another random tangent and a fresh argument that you won’t bother backing up 2 minutes from now. How about you don’t ask questions if you’re really not interested anyway?
 