GeForce RTX 4060 Ti vs. Radeon RX 6700 XT: Best Value 1440p GPU?

Basically you forgot to add the previous generation of AMD GPUs to the possible choices, and taking them into consideration for pure rasterization performance is the way to go:
The best sub-$200 GPU: RX 6600 / Intel Arc A750
The best sub-$250 GPU: RX 6650 XT / RX 7600
The best ~$300 GPU: RX 6700 XT
The best sub-$450 GPU: RX 6800
The best ~$500 GPU: RX 6800 XT
The best sub-$600 GPU: RX 6950 XT

If someone prefers DLSS and frame gen, doesn't care about the best bang for the buck, and is an Ngreedia fanboy, then they can go get their products.
The 7900 XT was going for $705 last week during the Prime and Fantastech Newegg sales. I think that's a fantastic deal, and with 20GB you won't need to upgrade for a long while.

According to the buffoon MLID, we're expecting to see two or possibly three RDNA3 cards in September. There's a big price gap from $300-$700, and AMD should have at least 3 cards to fill that hole. I feel like AMD will follow Ngreedia's policy and won't give us anything substantially worthwhile over RDNA2.

7700 12GB - $349 - 6750 XT w/ lower power draw?
7700 XT 12GB - $429 - 6800-ish performance?
7800 16GB - $479 - 6800 XT w/ lower power draw?
7800 XT 16GB - $579 - 6950 XT w/ lower power draw?

AMD can literally demolish the 4060 Ti and 4070 in the "mid-range": rebrand the 6750 XT with +8% tweaks and charge $350, rebrand the 6800 XT 16GB for $450 and the 6950 XT for $600, all with RDNA3 features and lower power draw.

I will buy one of these if it comes to fruition.
 
FYI, the 6700 XT, as well as the following Radeon™ graphics cards:

AMD Radeon™ RX 7900 XTX
AMD Radeon™ RX 7900 XT
AMD Radeon™ RX 6950 XT
AMD Radeon™ RX 6900 XT
AMD Radeon™ RX 6800 XT
AMD Radeon™ RX 6800
AMD Radeon™ RX 6750 XT
AMD Radeon™ RX 6700 XT
AMD Radeon™ RX 6700

are all eligible for a game code bundle with Starfield:
https://www.amd.com/en/gaming/featured-games/starfield.html#bundle
Currently the 6800 is selling for $450, and its performance is on average 20% to 25% better than the 4060 Ti's.
The 16GB version of the 4060 Ti is selling closer to 6800 XT prices now.
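To put that in bang-for-the-buck terms, here's a quick back-of-the-envelope sketch. The $399 list price for the 4060 Ti 8GB and the 22% performance midpoint are assumptions for illustration; swap in current street prices if you want a fairer read.

```python
# Rough cost-per-performance sketch using the numbers quoted above.
# Assumptions: RX 6800 at $450 and ~22% faster (midpoint of the 20-25% claim),
# RTX 4060 Ti 8GB taken at its $399 list price as the performance baseline.

cards = {
    "RTX 4060 Ti 8GB": {"price": 399, "relative_perf": 1.00},
    "RX 6800":         {"price": 450, "relative_perf": 1.22},
}

for name, c in cards.items():
    print(f"{name}: ${c['price'] / c['relative_perf']:.0f} per unit of relative performance")

# ~$399 vs ~$369 per performance unit: the 6800 delivers more frames per dollar,
# on top of the extra VRAM.
```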
 
The 4060 Ti is a complete disaster.

This rebranded 4050 Ti should be at $200, MAYBE $300 with 16GB of VRAM. $400 for an 8GB GPU in 2023 is completely absurd.

Nobody should buy it. Go buy the 6700 XT, 6750 XT, or 12GB 3060.
We did not. The second-hand market has never been better. I love the idea of not giving GPU makers anything in the last 6 years.
Which actually still helps Nvidia, because people upgrade and have a decent sum of money from selling the old card to put toward the new one.
If you sell your old card even for 200 bucks, it means that new overpriced card is not that expensive.
 
It seems to be very hard for TechSpot to write in the conclusion that the 6700 XT is the better buy of the two tested cards. The 4060 Ti currently does not make sense under any gaming circumstances. Go one step down, go one step up, buy older or buy used, but do not buy the 4060 Ti. On the other hand, the 6700 XT is a good buy if you play at up to 1440p, don't want heavy RT, and are able to make minor compromises. I know, because I actually use a 6700 XT.

Instead, the author talks about other cards in the conclusion that weren't even tested here (the 4070) or that actually make less sense than the 4060 Ti (namely the 4060 Ti 16GB).
What, why? The 4060 Ti won in 46 out of 50 games, what the heck are you talking about?
 
Currently, I see RT as a gimmick, for several reasons:

1. Stupid performance hit for no noticeable return (with very few exceptions).

2. Very, very few games support it in a meaningful way.

3. It needs "cheating" like FSR and DLSS (the latter shines with fake frames, but the media don't call them that, of course).

4. There is no hardware right now, and perhaps won't be for at least a couple of generations, that provides future-proofing.

In my eyes, it won't stop being a gimmick until we can run RT/PT at 4K@120FPS on a GPU that costs around 350 bucks.


DLSS is a horrible tech whose main function is to keep you locked to Ngreedia hardware, and look at how much of an FU to consumers they delivered with the DLSS 3 hardware requirements.

I know that Tim cannot say anything negative about DLSS (and it seems Steven is doing that more often as well), but I will share this for your consideration.

I agree on RT, although your requirement of 4K@120FPS is ridiculous; even without RT, almost no card can achieve that.

DLSS is not horrible. If my eyes can barely tell the difference between it and native, I don't care that some software deems it bad. Same goes for FSR. Frame generation is likely very sht though, I would avoid that.
 
IMO, if you are going to delve into RT, you either go big or you just go for the card with the best rasterization in your budget. You can always try the $20/month RTX 4080 tier of GeForce NOW if you want to chase the RT hype in a specific title for a month.

One thing that isn't talked about much with upscaling techniques is that they replace anti-aliasing, so not only are you getting better performance from a lower sampled resolution, you're also often getting a superior anti-aliasing solution compared to the traditional techniques required at lower resolutions.

Frame generation in its current state adds too much lag. In my opinion it's like fool's gold: the frame counter goes up, but it feels like you are playing at 1/3 the frame rate. I'd rather play at a consistent 60 FPS with 16 ms of latency than at 120 frames per second where my latency goes up by 50%. The one with the lower latency will often be the better marksman!
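A quick sketch of that trade-off using the post's own 16 ms and 50% figures (illustrative estimates, not measurements):

```python
# Comparing a locked native 60 fps against frame-generated "120 fps" where
# input latency rises by ~50%, per the claim above. All figures are assumed
# for illustration, not measured values.

native_fps, native_latency_ms = 60, 16.0
framegen_fps = native_fps * 2                  # what the fps counter shows
framegen_latency_ms = native_latency_ms * 1.5  # ~50% worse responsiveness

print(f"Native:    {native_fps} fps, ~{native_latency_ms:.0f} ms input latency")
print(f"Frame gen: {framegen_fps} fps, ~{framegen_latency_ms:.0f} ms input latency")
print(f"The counter doubles, but the game reacts "
      f"{framegen_latency_ms - native_latency_ms:.0f} ms later to every input.")
```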
 
One thing that was left out of the comparison is power draw. The 4060 Ti has the advantage of consuming 70-90W less than the 6700 XT while being a bit faster. Some people value a quiet system. And if you keep the card for about 3 years, you will probably save about $30-60 on energy, which makes up for the price difference.
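As a rough sanity check on that estimate (the wattage midpoint, play time, and electricity price below are assumptions; plug in your own numbers):

```python
# Back-of-the-envelope energy saving from an ~80 W average difference,
# assuming 3 hours of gaming per day for 3 years at $0.15/kWh.

watt_difference = 80        # W, midpoint of the 70-90 W claim above
hours_per_day = 3
years = 3
price_per_kwh = 0.15        # USD, assumed electricity price

kwh_saved = watt_difference / 1000 * hours_per_day * 365 * years
savings = kwh_saved * price_per_kwh
print(f"~{kwh_saved:.0f} kWh saved, roughly ${savings:.0f} over {years} years")
# ~263 kWh, roughly $39 -- squarely in the $30-60 range mentioned above.
```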
 

Power consumption was never an issue with the RTX 3000 series, so it's not an issue now either. Because if power consumption really were an issue, then nobody would have bought the RTX 3000 series, which was manufactured on obsolete but cheap Samsung 8nm tech. Of course, Nv*****s gladly ignored that.
 
4K@120FPS is ridiculous; even without RT, almost no card can achieve that.
First, you missed the part where I said a couple of gens away, so easily 10 years.

So you are saying that one of those games released today won't run at that speed on, say, a 9900 XTX?

Anyways, your post is wrong given this:

[CS:GO benchmark chart]

From here: https://www.gpucheck.com/game-gpu/c...ve/amd-radeon-rx-7600/amd-ryzen-9-5950x/ultra

DLSS is not horrible
You ignored what I said about why DLSS is horrible: it's a closed-source tool whose main reason to exist is to keep you locked to Ngreedia hardware, and even there you are down to the whims of Dear Leader Jensen (DLSS 3 fake frames are only available to the "elite" members of the 40 series).

I'm down with better sharpening, but it must be an open-source/open-standard release.
 
Basically you forgot to add the previous generation of AMD GPUs to the possible choices, and taking them into consideration for pure rasterization performance is the way to go:
The best sub-$200 GPU: RX 6600 / Intel Arc A750
The best sub-$250 GPU: RX 6650 XT / RX 7600
The best ~$300 GPU: RX 6700 XT
The best sub-$450 GPU: RX 6800
The best ~$500 GPU: RX 6800 XT
The best sub-$600 GPU: RX 6950 XT

If someone prefers DLSS and frame gen, doesn't care about the best bang for the buck, and is an Ngreedia fanboy, then they can go get their products.

Frame gen is a total scam to me, but DLSS has its uses: (arguably) more widespread adoption, better upscaled image quality with less of the weird artifacting/"ghosting" sometimes seen even with FSR 2.1, and it allows 8GB cards to run 1440p fairly smoothly (since the actual render resolution is usually sub-1080p).

Which leads me to my next (unpopular) opinion. We all know that the 4060 Ti was a complete and utter failure on many fronts, as it barely manages to outperform its immediate predecessor, the 3060 Ti. Where I live (TX, USA) that card is going for well under $300 (around $240-260 in fact, depending on the model) and at those prices it's actually worth "some" consideration. For sheer rasterization performance, it's close enough to the 6700 XT that you really can't tell the difference outside of bar graphs and charts. For memory-intensive games, it's got a glorious 256-bit memory bus (even the dear ol' 6700 XT gets by on a 192-bit one, albeit with that 12GB buffer). And while you can either love or hate DLSS, it's here to stay and really does help 8GB RTX cards at higher resolutions.

I'd also like to add a few things of my own, since I personally own a 3060 Ti (spent right around $300 in mid-2022, when 6700 XTs were still 25-30% more by comparison). With some "fine tuning" (undervolting, core and memory OC'ing, etc.) my particular card is within striking distance of a 3070, which makes sense; both cards are merely cut-down 3070 Tis after all. My cousin just got a 6750 XT, and we both did some benchmarking (not like we're pros or anything), but his 3700X @ 4.3GHz + 6750 XT vs my 5600 @ 4.8GHz + 3060 Ti OC'ed to hell and back were damn near the same experience. With some due diligence, one can grab a second-hand 3060 Ti for around the same price as a new 6650 XT/RX 7600 and get performance into the territory of the 6700 XT/6750 XT. DLSS just "sweetens the pot", so to speak. As with the aforementioned RDNA 2 cards, RT really isn't the focus at this performance tier anyway, so despite the "RTX" advantage in that category I'm aware most people (myself included) won't be using it. On another note, it's a really solid card for emulation too, if that's something you fancy. I personally run stuff like the Yuzu Switch emulator or RPCS3 (usually at 1440p as well).

Don't confuse me for a "fanboy"; I'm most certainly not defending "Nshitia" or the dumpster fire the lower-to-mid 40-series lineup has turned out to be. Just giving some food for thought. The 6700 XT isn't a bad buy whatsoever, and second hand I'm seeing them pop up for $260-275 regularly, so it really just comes down to the extra VRAM being the only real reason not to get a 3060 Ti. Although with everyone recommending the 6700 XT, I wouldn't be surprised to see it start selling for a little more due to the demand increase over the next few months.
 
First, you missed the part where I said a couple of gens away, so easily 10 years.

So you are saying that one of those games released today won't run at that speed on, say, a 9900 XTX?
A couple of generations is... easily 10 years? Hahaha, no it isn't. You're now trying to make it seem like you were talking about 4K@120FPS, but running today's games in 10 years on a GPU that doesn't exist? What's with the CS:GO benchmark? It's an esports title never meant to run ray tracing or even look advanced. Stop cherry picking.
You ignored what I said about why DLSS is horrible: it's a closed-source tool whose main reason to exist is to keep you locked to Ngreedia hardware, and even there you are down to the whims of Dear Leader Jensen (DLSS 3 fake frames are only available to the "elite" members of the 40 series).

I'm down with better sharpening, but it must be an open-source/open-standard release.
I didn't ignore that part. It is closed source, but that can't be the reason it's horrible; literally nobody thinks DLSS is "horrible". There's plenty of closed-source software out there, so I guess it's all horrible then...

When you argue, you make AMD look bad, the opposite of what you're trying to achieve. If you want to help, just be quiet.
 
A couple of generations is... easily 10 years?
lol, taking words literally... how convenient.
You're now trying to make it seem like you were talking about 4K@120FPS, but running today's games in 10 years on a GPU that doesn't exist? What's with the CS:GO benchmark? It's an esports title never meant to run ray tracing or even look advanced. Stop cherry picking.
Err, no. You clearly said that no GPU today can do what I said (4K@120FPS). Your answer, not mine.

I didn't ignore that part. It is closed source, but that can't be the reason it's horrible; literally nobody thinks DLSS is "horrible". There's plenty of closed-source software out there, so I guess it's all horrible then...
Yes, you ignored it and doubled down with that comment, since you are still conveniently ignoring my main complaint about it, and no, I won't repeat it a third time for you.

When you argue, you make AMD look bad, the opposite of what you're trying to achieve.
I mentioned open standards, open source, and Ngreedia, not AMD. So you are now projecting.

If you want to help, just be quiet.
Wise words that definitely apply to you; set the example.
 
Can't wait for a full review of the $500 card that is the 4060 Ti 16GB.........mwahahaha

https://videocardz.com/newz/geforce-rtx-4060-ti-16gb-ends-up-slower-than-8gb-in-official-msi-testing
The one thing to note with any card that uses clamshell mode to double the VRAM amount is that, compared to the original version, it is potentially going to be slower in situations where performance is 100% memory-bandwidth limited.

This is because, to have two DRAM modules per controller, the data bus width of each module is halved to 16 bits. So the memory controller will only be able to perform reads/writes on any one module at half the rate of the normal model. In games like Cyberpunk, though, this isn't an issue, as most of the render time is taken up by shader routines.
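A rough sketch of that arithmetic, assuming 18 Gbps effective GDDR6 on a 128-bit bus (figures are for illustration; treat them as assumptions rather than exact specs):

```python
# Why clamshell doubling changes per-module width but not total bus width.
# Assumed figures: 18 Gbps effective GDDR6, 128-bit total bus.

data_rate_gbps = 18            # per-pin effective data rate (assumption)
total_bus_bits = 128           # same for the 8 GB and 16 GB variants

peak_bandwidth_gbs = total_bus_bits * data_rate_gbps / 8
print(f"Peak bandwidth, both variants: {peak_bandwidth_gbs:.0f} GB/s")

# 8 GB card: 4 modules, each on a full 32-bit channel.
# 16 GB clamshell: 8 modules, each on a 16-bit half-channel, so any single
# module can only be read or written at half the per-module rate.
per_module_8gb  = 32 * data_rate_gbps / 8
per_module_16gb = 16 * data_rate_gbps / 8
print(f"Per-module rate: {per_module_8gb:.0f} GB/s (8 GB) vs {per_module_16gb:.0f} GB/s (16 GB clamshell)")
```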

That said, MSI's 'testing' was neither thorough nor especially repeatable, so the figures aren't exactly reliable, despite being very close to each other.
 
You are absolutely right about this. The 8GB model uses 4 modules of 16Gb on a 4x32-bit memory controller. Adding 8 modules to the same bus makes the memory controller split into 8x16-bit because of the memory addressing.
 