The RTX 3090 Ti is still a monster when limited to 300W

..but when all is said and done, a $2000 card is still only about 20% faster than my $650 3080 FE.
Although I agree, I keep seeing people make this mistake: the RTX 3080's MSRP is $699, not $650.

You may be confusing its price with the RX 6800 XT's; that one has an MSRP of $650.
 
I wanted to get an RX 6700 XT because of the 12GB of VRAM and because it's very similar in design to the PS5.
But the price differences on the market meant I ended up getting a 3060 Ti. I was worried about the longevity of its 8GB of VRAM, but DLSS has been a game changer for me; I honestly think the image looks cleaner and sharper than native res with traditional AA methods. Plus, the RT performance of the 3060 Ti is massively faster than the RX 6700 XT's. Hopefully, if I game at 1440p for the foreseeable future, the 3060 Ti will age better than expected.
 
I call BS on that; I've seen 16-year-old P4s running at 80-90°C with no issues. The issue is all the other parts on a graphics card that can go; 70°C+ is perfectly fine.

What does that have to do with anything? Different manufacturing process, and not even the same chip. CPUs on the current node can withstand over 100°C with no issues, but VRAM and I/O dies can develop issues from sustained high temperatures.
 
All I've learned here is that somehow I suck at undervolting my GPU.

Even following guides (-200 or so on the core clock, then drag the point at the voltage you'd like up to the clock frequency you want to shoot for), I still end up pulling well over 300W, typically still 320W, with a loss in performance; it just won't exceed the set voltage or selected clock. I'm guessing I'm still shooting for too high a voltage at too high a clock, but dropping further just doesn't seem worth the performance loss for the little power it seems to save.

So I said screw it and just set an OC on it with a stupidly aggressive fan curve, and let it go all out at 380-400W if it wants to; might as well push all I can out of it.

Currently the highest temps I've seen are 67°C in Cyberpunk 2077 (memory hotspot at 78°C), though I know that'll climb as the summer sets in here. Still not too worried, as it was still summer heat when I got the card last year, and temps sat around 75-78°C under load with an even more aggressive core and memory OC on it at the time. (I dialed both core and memory down a little last month at no loss in overall performance, and a gain in Cyberpunk 2077; the previous settings were just a little too aggressive for it.)
 
All I've learned here is that somehow I suck at undervolting my GPU.

Even following guides (-200 or so on the core clock, then drag the point at the voltage you'd like up to the clock frequency you want to shoot for), I still end up pulling well over 300W, typically still 320W, with a loss in performance; it just won't exceed the set voltage or selected clock. I'm guessing I'm still shooting for too high a voltage at too high a clock, but dropping further just doesn't seem worth the performance loss for the little power it seems to save.

So I said screw it and just set an OC on it with a stupidly aggressive fan curve, and let it go all out at 380-400W if it wants to; might as well push all I can out of it.
Not necessarily; the silicon lottery probably has something to do with it. My RX 6900 XT can barely go below 1.125 V without crashing, while other people can go much lower.
 
Not necessarily; the silicon lottery probably has something to do with it. My RX 6900 XT can barely go below 1.125 V without crashing, while other people can go much lower.

I managed 2010 MHz at both 0.895 V and 0.975 V, and 2040 MHz at 1 V; 2070 MHz at 1 V crashed as unstable.
It'll certainly run well over stock boost speeds at lower-than-stock voltages just fine; it just doesn't seem to have the huge impact on power savings that I've seen others report, and a reduction of only 60 or so watts didn't seem worth it.
Might as well use that extra 60 or so watts to get as many frames as I can.
 
All I've learned here is that somehow I suck at undervolting my GPU.

Even following guides (-200 or so on the core clock, then drag the point at the voltage you'd like up to the clock frequency you want to shoot for), I still end up pulling well over 300W, typically still 320W, with a loss in performance; it just won't exceed the set voltage or selected clock. I'm guessing I'm still shooting for too high a voltage at too high a clock, but dropping further just doesn't seem worth the performance loss for the little power it seems to save.

So I said screw it and just set an OC on it with a stupidly aggressive fan curve, and let it go all out at 380-400W if it wants to; might as well push all I can out of it.

Currently the highest temps I've seen are 67°C in Cyberpunk 2077 (memory hotspot at 78°C), though I know that'll climb as the summer sets in here. Still not too worried, as it was still summer heat when I got the card last year, and temps sat around 75-78°C under load with an even more aggressive core and memory OC on it at the time. (I dialed both core and memory down a little last month at no loss in overall performance, and a gain in Cyberpunk 2077; the previous settings were just a little too aggressive for it.)

You don't really need to underclock, but it can help with lower temps and power draw. What you can do is make sure you turn vsync on or use the in-game frame limiter. My monitor can run at 165Hz, but I've got it set to 120, so in any game I play I set vsync or the frame limit to 120. No sense in having the GPU work harder than it needs to. Doing this helps keep power draw down some and temps down a little.

Just something else to keep in mind.
 
You don't really need to underclock, but it can help with lower temps and power draw. What you can do is make sure you turn vsync on or use the in-game frame limiter. My monitor can run at 165Hz, but I've got it set to 120, so in any game I play I set vsync or the frame limit to 120. No sense in having the GPU work harder than it needs to. Doing this helps keep power draw down some and temps down a little.

Just something else to keep in mind.


Yeah, and I'll enable a frame cap if I'm capturing something that currently runs stupidly high over 60FPS (like Wreckfest), if only for smoothness of capture and playback, but capping frames to 60 when it's normally trying to push 120+ otherwise leads to very undesirable gameplay.
Won't do vsync due to the input latency, though.

A higher refresh rate monitor is planned for sometime this year, though.
 
Undervolting is not guaranteed. I've had GPUs that undervolt very well and I've had some that don't undervolt significantly at all. What also makes undervolting annoying is that an undervolt can seem stable in one workload and fail in another.

Getting older has made me want to waste none of my time fiddling with these things. I expect things to be optimized by big companies with big resources for optimization.
 
So you're saying that updated drivers have made the 3060 Ti much faster and closer to 3080 performance, but the new drivers haven't made any difference for the 3080? Huh....

I don't see the 3060 Ti in some of the newer game benchmarks from TPU, but the 3060 Ti is roughly 10% faster than the 2080 on average.

If that's true, even if it's 15% faster than the 2080, the 3060 Ti is still around 35-40% behind the 3080.

We'll take Total War: Warhammer III (pic)
1440p
RTX 2080 is 45.5fps
RTX 3080 is 73.7fps
RTX 3060Ti 10% over the 2080 = 50.1fps
That's a 47% difference between the 3060Ti and 3080
Even if the 3060Ti is 15% faster than the 2080 = 52.3fps
That's 41% difference between the 3080 and 3060Ti.


Next we'll look at Dying Light 2 (pic)
1440p
RTX 2080 is 62.8fps
RTX 3080 is 111.0 fps
RTX 3060Ti 10% over the 2080, so = 69.1fps
That's a 60% difference.
Even if it's pulling the same fps as the 6700XT at 80.2fps
That's still a 38% difference between the 3060Ti and the 3080.


How about God of War (pic) where they did use the 3060Ti as part of the testing?
3060Ti = 69.8fps
3080 = 104.7fps
That's a 50% difference between the two.

The benchmark tests used will also give different results between the sites you look at. As it stands, the 3080 is still 30-40% faster than the 3060 Ti on average, regardless of what game or resolution you're using.
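To make the arithmetic explicit, here's a rough sketch of it in Python (the fps figures are the ones quoted above, and the 3060 Ti numbers for the first two games are the "2080 + 10%" estimates, not measured results):

# Relative gap between two cards from their average fps.
# Dividing (not subtracting) tells you how much faster the 3080 is.
results_1440p = {
    # game: (3060 Ti fps, 3080 fps)
    "Total War: Warhammer III": (50.1, 73.7),
    "Dying Light 2":            (69.1, 111.0),
    "God of War":               (69.8, 104.7),
}

for game, (fps_3060ti, fps_3080) in results_1440p.items():
    gap = (fps_3080 / fps_3060ti - 1) * 100
    print(f"{game}: 3080 is {gap:.1f}% faster than the 3060 Ti")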

Simply look at the TechPowerUp link I posted and check the performance table instead of cherry-picking. All the cards use the newest driver. It's a recent review, since it's the 3090 Ti launch review.

At 1440p the 3080 is nowhere near 40% faster than the 3060 Ti. At 4K the difference is much bigger, which is what I am saying...

It's common knowledge by now that GA102 only really shines at 4K. Very few games can utilize its cores at 1440p; the architecture is simply bottlenecked there. RDNA2 generally does better at 1440p than Ampere, but loses at 4K and above.

This is why the 3070 performs very close to the 3080 at 1440p, yet uses almost half the watts to do so (partly because it doesn't use GDDR6X, though).

Once again, I have a 3080 Ti and a 3070. I know for a fact that the performance difference is not that huge in 1440p gaming, mostly because I am typically CPU BOUND at 1440p, blasting out 120+ fps in most games on both cards; the 3080 Ti is maybe 25% faster overall. In the Shadow of the Tomb Raider benchmark, on max settings, the difference was only around 20%... and the 3070 was 499 dollars vs 1,500 dollars for the 3080 Ti. At 4K, the difference is closer to 40%.

Going to 4K or higher removes the CPU limitation and lets GA102 shine.

The 3060 Ti and 3070 have crazy good performance per watt and run cool and quiet compared to GA102-based cards like the 3080-3090 series, which are fat and long, with tons of heatsink space. My 3070 is like half the size of my 3080 Ti. A perfect ITX card, and I happen to use it in an ITX case (an HTPC in the living room for my OLED TV).

I undervolted my 3070; it still runs in the 1850-2000 MHz range and uses less than 200 watts in gaming. Sometimes it's as low as 175 watts with GPU usage maxed out. That's like HALF the watts of my 3080 Ti (also undervolted).

GDDR6X uses a lot of watts, and the bandwidth increase makes no difference at 1440p or lower compared to regular GDDR6; it only matters at 4K or higher.

This is part of the reason why the 3070 Ti uses 80 more watts than the 3070, yet only performs 2-4% better at 1440p. It's a weird card, because GDDR6X is meant for 4K+ gaming, but 8GB is on the low side and so is the GPU power. GDDR6 was in low supply at some point, which was probably the reason for the 3070 Ti.

The 3080 12GB launched for a reason. 12GB is way more future-proof than 8-10GB for 4K gaming. However, 8-10GB is still viable in most games at 4K -TODAY-. This will change though, because next-gen games are coming, meaning games with only PS5/XSX and PC support. VRAM requirements will rise.
 
Simply look at the TechPowerUp link I posted and check the performance table instead of cherry-picking. All the cards use the newest driver. It's a recent review, since it's the 3090 Ti launch review.

I wasn't cherry-picking. You specifically said newer games and drivers, so I posted info from the latest games that TPU tested. If you feel W1zzard is wrong, please contact him and let him know his benchmarks are wrong and what he can do to fix them. He's an understanding guy and likes to fix issues that arise with his testing methods.

At 1440p the 3060 Ti is "only" 20-25% slower than the 3080 :laughing: Is 20% a lot? Yes and no.

You were the one who specifically said (see above) 20-25% slower than a 3080, and I corrected you, saying that that is not the case, and I posted links.

I even showed that Techspot shows a similar performance gap between the 3060 Ti and 3080: upwards of 40% at 1440p.

The 3060 Ti, as it stands, sits where it has always been: upwards of 40% behind the 3080 at 1440p.

The 3060 Ti is a solid performer. It handles 1440p well enough. A great mid-range card. No one is taking that away from the 3060 Ti. I'm just providing the correct info that it is, on average, upwards of 40% slower than the 3080 at 1440p.
 
I wasn't cherry-picking. You specifically said newer games and drivers, so I posted info from the latest games that TPU tested. If you feel W1zzard is wrong, please contact him and let him know his benchmarks are wrong and what he can do to fix them. He's an understanding guy and likes to fix issues that arise with his testing methods.

You were the one who specifically said (see above) 20-25% slower than a 3080, and I corrected you, saying that that is not the case, and I posted links.

I even showed that Techspot shows a similar performance gap between the 3060 Ti and 3080: upwards of 40% at 1440p.

The 3060 Ti, as it stands, sits where it has always been: upwards of 40% behind the 3080 at 1440p.

The 3060 Ti is a solid performer. It handles 1440p well enough. A great mid-range card. No one is taking that away from the 3060 Ti. I'm just providing the correct info that it is, on average, upwards of 40% slower than the 3080 at 1440p.
Well, not according to TechPowerUp's testing; it's more like ~25% slower than the 3080 at 1440p in their game selection.

And yeah, you are cherry-picking games; I look at the overall picture, which includes all the games on average.

25% is pretty much the same difference I get with my 3080 Ti vs 3070 at 1440p.

3060 Ti to 3080 is pretty much the same perf difference as 3070 to 3080 Ti, at least at 1440p.

When you go to 4K, the 3080 fares better, but the only truly capable cards are the 3080 Ti, 3090 and 3090 Ti if you ask me... Maybe the 3080 12GB too, but it is on the low side in terms of cores and raw power.

I am not sure I would consider the 3080 a high-end GPU today. It's almost 2 years old and several GPUs have it beat today while having enough VRAM for the future. This means the 3080 is not really a true 1440p card, nor a true 4K card. It's perfect for 3440x1440 ultrawide users, for example. A good compromise.

High-end for Nvidia is GA102 with 10K+ cores and 12+ GB VRAM; the 3080 series doesn't qualify, only the 3080 Ti and 3090 series.

However, the 3080 is still a good chip.
 
Well, not according to TechPowerUp's testing; it's more like ~25% slower than the 3080 at 1440p in their game selection.

And yeah, you are cherry-picking games; I look at the overall picture, which includes all the games on average.

25% is pretty much the same difference I get with my 3080 Ti vs 3070 at 1440p.

3060 Ti to 3080 is pretty much the same perf difference as 3070 to 3080 Ti, at least at 1440p.

When you go to 4K, the 3080 fares better, but the only truly capable cards are the 3080 Ti, 3090 and 3090 Ti if you ask me... Maybe the 3080 12GB too, but it is on the low side in terms of cores and raw power.

I am not sure I would consider the 3080 a high-end GPU today. It's almost 2 years old and several GPUs have it beat today while having enough VRAM for the future. This means the 3080 is not really a true 1440p card, nor a true 4K card. It's perfect for 3440x1440 ultrawide users, for example. A good compromise.

High-end for Nvidia is GA102 with 10K+ cores and 12+ GB VRAM; the 3080 series doesn't qualify, only the 3080 Ti and 3090 series.

However, the 3080 is still a good chip.

I don't know how else to explain it to you. On average, the 3060 Ti is upwards of 40% behind the 3080 at 1440p, just as I've shown.

Look at the latest game benches from Techspot, even.
Halo Infinite - 28% slower
BF2042 - 37% slower
FC6 - 29% slower
CP2077 - 49% slower

Those 4 games alone show that, on average, the 3060 Ti is behind the 3080 by 36%.
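(As a quick sketch of that average in Python, using the Techspot gaps above:)

# Simple average of the per-game gaps quoted above (3060 Ti vs 3080, 1440p).
gaps = {"Halo Infinite": 28, "BF2042": 37, "FC6": 29, "CP2077": 49}
print(sum(gaps.values()) / len(gaps))   # 35.75, i.e. roughly 36%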

I even showed the last 3 games TPU benched, and between them the 3060 Ti is, on average, 40% slower than the 3080.

As I've said, on average, the 3060 Ti is upwards of 40% behind the 3080 at 1440p, whereas you claim it's 20-25%.

I'm not cherry-picking, as you say. I'm using the latest bench results from the most recent games TPU and TS have tested... but as they say, you can lead a horse to water, but you can't make it drink.
 
That for gaming purposes, 24GB of VRAM was, and still is, completely unnecessary in 2022, never mind in 2020 when the cards launched.
24GB of VRAM was NVidia appealing both to the future landscape of gaming, as they saw its potential, AND to the market sector that was once served by the Titan cards. It's OK if you don't understand its purpose and potential. That won't change the fact that the card will stay relevant for the next 4 or 5 years.
Sure, if you're part of the 0.001% of gamers who want to run Skyrim/Flight Simulators with 50 mods installed and 8K texture packs, you would feel like $1400 was well spent...I suppose.
Or someone who runs anything that benefits from a lot of VRAM. Gaming is not the only purpose for a GPU.
 
24GB of VRAM was NVidia appealing both to the future landscape of gaming, as they saw its potential, AND to the market sector that was once served by the Titan cards. It's OK if you don't understand its purpose and potential. That won't change the fact that the card will stay relevant for the next 4 or 5 years.

Or someone who runs anything that benefits from a lot of VRAM. Gaming is not the only purpose for a GPU.

I get it, you threw money down the drain on a 3090, but you don't need to go through all the mental gymnastics required to justify the fact that you were suckered in by Nvidia's marketing spiel for the 3090.

Let it go, move on. Learn from your mistake.
 
I don't know how else to explain it to you. On average, the 3060 Ti is upwards of 40% behind the 3080 at 1440p, just as I've shown.

Look at the latest game benches from Techspot, even.
Halo Infinite - 28% slower
BF2042 - 37% slower
FC6 - 29% slower
CP2077 - 49% slower

Those 4 games alone show that, on average, the 3060 Ti is behind the 3080 by 36%.

I even showed the last 3 games TPU benched, and between them the 3060 Ti is, on average, 40% slower than the 3080.

As I've said, on average, the 3060 Ti is upwards of 40% behind the 3080 at 1440p, whereas you claim it's 20-25%.

I'm not cherry-picking, as you say. I'm using the latest bench results from the most recent games TPU and TS have tested... but as they say, you can lead a horse to water, but you can't make it drink.

Ehh, I am cherry-picking? Hahaha, I look at _overall performance across multiple titles_; you pick out the games with the biggest difference in performance, JOKER :joy:

The 3080 is not 40% faster than the 3060 Ti overall at 1440p, THE END.
GA102 is only properly utilized at higher resolutions. THIS IS NOTHING NEW and it's common knowledge.

SIMPLY look at my link and you will see that the performance difference is much bigger at 2160p than at 1440p, and especially 1080p.
 
Ehh, I am cherry-picking? Hahaha, I look at _overall performance across multiple titles_; you pick out the games with the biggest difference in performance, JOKER :joy:

The 3080 is not 40% faster than the 3060 Ti overall at 1440p, THE END.
GA102 is only properly utilized at higher resolutions. THIS IS NOTHING NEW and it's common knowledge.

SIMPLY look at my link and you will see that the performance difference is much bigger at 2160p than at 1440p, and especially 1080p.

Says the guy who said I needed to use the latest games and drivers. Per your request, I took the most recent games from a couple of reputable sites and posted the info. It all shows that the 3060 Ti, on average, is upwards of 40% slower than the 3080 at 1440p.

But since you're so hung up on the graphs from the 3090 Ti review you linked, let's reference the relative performance chart at 1440p from that review (see here). You can clearly see that the 3060 Ti sits about 38% slower than the 3080. If you're lost, please continue.

If you're looking at that graph, seeing the 3080 at 84% and the 3060 Ti at 61%, and then going 84 - 61 = 23%, you're doing it wrong.

61 * 1.23 ≠ 84
61 * 1.23 = 75

We'll say the 3090 Ti in that graph represents 100 fps (we're removing the percentage sign in hopes of removing any confusion here). Every card below it then represents its fps relative to the 3090 Ti.

3090Ti = 100fps
3090 = 93fps
3080Ti = 92fps
3080 = 84fps
3070Ti = 73fps
3070 = 70fps
3060Ti = 61fps

As we can see, the 3060 Ti is 64% slower than the 3090 Ti. This is calculated by taking 100/61 = 1.639 (we'll round up to 1.64; as a percentage, we drop the 1 in front of the decimal and we have 64%).

Using that same graph we can take the data of:
The 3060Ti gets 61fps
The 3080 gets 84fps
84/61 = 37.7% difference in performance.

If you were taking the 84 and the 61 and subtracting them from each other to get the % difference in overall average performance between the 3080 and 3060 Ti, you're doing it wrong. I'll show you:

84 - 61 = 23 (this number is the difference in fps, not the percentage difference. To find the percentage difference, you need to divide the numbers)
84/61 = 37.7%

If you think that the 37.7% difference is still wrong, you can do the math to verify. If you think it's only 23%, you can check the math to verify.

61 (fps of the 3060Ti) * 1.23 = 75
75 ≠ 84 (the fps of the 3080)

61 (fps of the 3060Ti) * 1.377 = 83.99 (round up to 84)
84 = 84

Math is fun!
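And if it's easier to check as code, here's the same arithmetic as a tiny Python sketch (index values taken from the chart above, with the 3090 Ti normalized to 100):

# Two ways of reading the relative-performance chart.
index_3080   = 84
index_3060ti = 61

points  = index_3080 - index_3060ti              # 23 percentage POINTS on the chart
speedup = (index_3080 / index_3060ti - 1) * 100  # ~37.7%: how much faster the 3080 really is

# Sanity check: only the ratio reproduces the 3080's index from the 3060 Ti's.
assert round(index_3060ti * 1.377) == index_3080   # 61 * 1.377 = 84
assert round(index_3060ti * 1.23)  != index_3080   # 61 * 1.23 = 75, not 84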
 
3090Ti = 100fps
3090 = 93fps
3080Ti = 92fps
3080 = 84fps
3070Ti = 73fps
3070 = 70fps
3060Ti = 61fps

As we can see the 3060Ti is 64% slower than the 3090Ti.
Your math needs improvement. 100 - 61 = 39. Where 100 = 100%, the difference is 39%. Your 64% statement does not work mathematically.
Using that same graph we can take the data of:
The 3060Ti gets 61fps
The 3080 gets 84fps
84/61 = 37.7% difference in performance.
Also nonsense. 84 - 61 = 23. Where 100 = 100%, the difference is 23%, not 37.7%.

Where did you learn Math? Common Core maybe?
 
Says the guy who said I needed to use the latest games and drivers. Per your request, I took the most recent games from a couple of reputable sites and posted the info. It all shows that the 3060 Ti, on average, is upwards of 40% slower than the 3080 at 1440p.

But since you're so hung up on the graphs from the 3090 Ti review you linked, let's reference the relative performance chart at 1440p from that review (see here). You can clearly see that the 3060 Ti sits about 38% slower than the 3080. If you're lost, please continue.

If you're looking at that graph, seeing the 3080 at 84% and the 3060 Ti at 61%, and then going 84 - 61 = 23%, you're doing it wrong.

61 * 1.23 ≠ 84
61 * 1.23 = 75

We'll say the 3090 Ti in that graph represents 100 fps (we're removing the percentage sign in hopes of removing any confusion here). Every card below it then represents its fps relative to the 3090 Ti.

3090Ti = 100fps
3090 = 93fps
3080Ti = 92fps
3080 = 84fps
3070Ti = 73fps
3070 = 70fps
3060Ti = 61fps

As we can see, the 3060 Ti is 64% slower than the 3090 Ti. This is calculated by taking 100/61 = 1.639 (we'll round up to 1.64; as a percentage, we drop the 1 in front of the decimal and we have 64%).

Using that same graph we can take the data of:
The 3060Ti gets 61fps
The 3080 gets 84fps
84/61 = 37.7% difference in performance.

If you were taking the 84 and the 61 and subtracting them from each other to get the % difference in overall average performance between the 3080 and 3060 Ti, you're doing it wrong. I'll show you:

84 - 61 = 23 (this number is the difference in fps, not the percentage difference. To find the percentage difference, you need to divide the numbers)
84/61 = 37.7%

If you think that the 37.7% difference is still wrong, you can do the math to verify. If you think it's only 23%, you can check the math to verify.

61 (fps of the 3060Ti) * 1.23 = 75
75 ≠ 84 (the fps of the 3080)

61 (fps of the 3060Ti) * 1.377 = 83.99 (round up to 84)
84 = 84

Math is fun!

I don't care about all your cherry-picking; use the overall performance across all the games tested and you will see it's nowhere near 40%.

The end. Stop defending your 3080 and look at reality. GA102 bottlenecks at 1440p; this is nothing new.
 
I get it, you threw money down the drain on a 3090, but you don't need to go through all the mental gymnastics required to justify the fact that you were suckered in by Nvidia's marketing spiel for the 3090.

Let it go, move on. Learn from your mistake.
Just because I understand why some would want a 3090 with 24GB of VRAM, you automatically assume I have one and that I'm trying to defend my choice? That is flawed on sooo many levels.

Your pithy comment is an attempt at a clever attack. It wasn't clever, it was just sad.
 