Nvidia GeForce RTX 2070 Super and RTX 2060 Super Review

The 2070 Super is a promising card, and if I were looking to upgrade, that would be my card of choice. It really makes me think Navi will be DOA: these cards are still 12nm, so when Nvidia finally embraces 7nm, Navi will be left in the dust. Again.

Please point to a specific place/line in the DirectX 12 specification where Microsoft says that your nebulous "async compute" is required for a GPU to be DX12 compliant.

Better yet, please define what you mean by async compute, since you believe that the only NVIDIA cards capable of it are the RTX line.

What are "real" DX12 games? What does a game have to do, or have, to be considered a "real" DX12 game?

First of all, we don't really know how Navi performs in anything since it hasn't launched yet. You seem to miss that "little" detail. And then, faster than what? Pascal? Turing? Polaris? Vega?

Developers have been writing their games for GCN (X1, X1X, PS4, PS4 Pro) for years now, and yet being present in consoles for so long did not help GCN outperform NVIDIA's offerings. Explain why you think it's going to be different with Navi. I'm not saying it won't be, but the same arguments were flying around back when AMD won both the X1 and PS4 deals a few years ago. Historical evidence doesn't agree with your assumption.

Barometer of what? Marketing failure?


Dude, stop tryharding and learn to read. Never did I say that Pascal wasn't DX12 compliant. READ...

I said that Pascal chokes on real dx12 code that has async compute and/or DirectX RayTracing, etc. It is not the same as Turing; even if the 1080ti is as powerful as an RTX 2080, the RTX is still superior, because Turing can do those things.

Secondly, Navi is not GCN, it is RDNA(1) and is a completely new architecture. RDNA can simultaneously use GCN and RDNA... it is a hybrid design, until developers transition into full RDNA. Even then, RDNA can inherently do GCN regardless, because it has a highly updated front end, beyond what Turing is capable of.

RDNA is 100% Gamer stuff.
But can Turing do Pascal? What about Kepler doing Fermi?

GCN is not an API; it is not something that developers specifically code for like DX12.

Do you have examples of this "DX12" code that Pascal can supposedly not run properly? You keep claiming that the 1xxx series can't handle DX12, despite completely missing that async compute is an OPTIONAL part of DX12, not a requirement to run DX12 games, and you have yet to provide a source for your ridiculous claims.

Also, Pascal can't choke on ray tracing, because ray tracing is exclusive to Turing cards. You have no clue what you are talking about.
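For context, here is roughly what "async compute" amounts to at the D3D12 API level - a minimal, illustrative sketch (the function and variable names are my own, not from any post above). The API simply lets you create a second command queue of type COMPUTE alongside the usual direct queue; there is no capability bit to query for it, which is why it is optional in practice rather than a compliance checkbox:

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// "Async compute" in D3D12 is just work submitted on a second command queue
// of type COMPUTE. Any D3D12 device will create such a queue; whether the GPU
// actually overlaps its work with the graphics queue is a hardware/driver
// question, not an API capability.
void CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                    ComPtr<ID3D12CommandQueue>& gfxQueue,
                                    ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Standard "direct" queue: accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue: work submitted here may run concurrently with
    // the direct queue. Pascal, Turing and GCN parts all accept this call;
    // they differ only in how much real overlap the hardware achieves.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}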
 


Navi is 7nm... and outperforms Nvidia's much bigger chips.

Secondly, Pascal can't do asynchronous compute; that is why Nvidia pushes Turing, so that they don't fall behind AMD in async. And yes, Pascal chokes on newer compute work; you even admit it yourself above, but you still want to play like you're clueless and need to be led around with sources.

Turing is better than Pascal... you just don't want to admit it, because you have a personal stake and sleep with Pascal.
 

So why do I see the 1080ti as good as the RTX 2080 in every benchmark (as long as the reviewer uses the same clocks on the 1080ti)? There are a lot of 1080ti 2000 MHz vs RTX 2080 2000 MHz comparisons on YouTube, and they are the same. Even in DX12 games like BF V, Tomb Raider, Metro, etc. What you are saying makes no sense.

Also, how does Navi outperform Nvidia? Have any links? Any reviews yet? You talk out of your mouth as usual... And we all know why. Always the same.
 
Secondly, Pascal can't do asynchronous compute; that is why Nvidia pushes Turing, so that they don't fall behind AMD in async.
A simple check in 3DMark Time Spy on a Pascal chip would provide some insight into your claim - on my Titan X (Pascal), the first graphics test averages 63.4 fps with async compute on, and 59.1 fps with it off; that's a performance gain of 7%. All somebody needs to do is test a few Turing chips in the same manner and compare the relative performance gains.

Direct3D 12 has far less 'hand holding' than Direct3D 11, so programs developed for both APIs aren't always going to display the same levels of performance. Older games aren't going to be sensible choices for examining such differences, but newer ones are - for example:

https://www.overclock3d.net/reviews/software/the_division_2_pc_performance_review/7

The 1080p results for the NVIDIA chips definitely favour D3D12, whereas there are no gains at 4K. For AMD, D3D12 is better regardless of the test resolution. However, the important part is the analysis of the claim that Pascal chips suffer because of the use of async compute, and this test, at the least, shows that this doesn't seem to be the case when it is correctly programmed for.
 
It was never meant to be taken seriously.

Although, the 2070S is 12% faster than the 2070, and the 5700XT is claimed to be 6% faster than the 2070, so the 2070S theoretically should be about 6% faster than the 5700XT?
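(Working that out explicitly, treating both uplifts as relative to the same stock 2070: 1.12 / 1.06 ≈ 1.057, so the 2070S would be roughly 5.7% faster than the 5700XT - which rounds to the 6% above.)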

Really need proper benchmarking across multiple titles and scenarios. One benchmark, and one that heavily favors Nvidia nonetheless, can't be used to prove anything.
The sad thing is the 2070S is around 10% more expensive than the 5700XT. If the 5700XT delivers a 2% performance gain over the original 2070, then it's pretty much the same thing, cost vs performance.

So the 5700XT will probably be 4% better, cost vs performance. With the new anti-lag stuff I'd go for AMD. Unless of course something changes when the actual benchmarks come out. What I want to see is what is going to happen with the Vega 64. $300-330?

The 2070 Super has a slightly better TDP, which would really take years to make an impact.
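(As a back-of-the-envelope check, assuming the commonly quoted board powers - 215 W for the 2070 Super versus 225 W for the 5700 XT - four hours of gaming a day, and electricity at $0.13/kWh: 10 W × 4 h/day × 365 days ≈ 14.6 kWh/year, or roughly $1.90 a year. So yes, years to matter.)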
 
New term - "Toilet paper launch", since Jenny soiled himself upon hearing Navi price/performance figures.
 

Don't know what to tell you, but Pascal is EOL, dude; Turing is superior in every way.
 
I wasn't suggesting otherwise - it clearly is, in every aspect of the design and performance. I don't think anybody would argue that Turing isn't better than Pascal in those regards. But that wasn't what was being examined; it was the repeated statements of:
...Pascal can't do asynchronous compute...
...Pascal chokes on newer compute...
...Pascal chokes on real dx12 code that has async compute...
...Pascal chokes on real DX12 games...
...1080ti can't do async compute...
...1080ti can't not do asynchronous compute. It chokes on dx12 code...
that were made, and the brief evidence put forward suggests otherwise.

Edit: I wonder if you were thinking of the Maxwell architecture - it really didn't like any attempt to run asynchronous blocks of instructions.
 

Well, you have said things like "GTX1080ti is for DX11 games." and "GTX1080Ti chokes on real DX12 code", which heavily imply that you do not consider the 1080Ti to be a DX12 card. There's also this:

What is this nonsense? The 1080ti is absolutely a DX12 card. Even the GTX 900 series was DX12 ready.
Sorry, no your 1080ti can't not do asynchronous compute. It chokes on dx12 code...

Cheap Scotch stated that the 1080Ti is a DX12 card. You disagreed. So actually, yes, you have said that the 1080Ti is not a DX12 card. Now you're trying to move the goalposts and even bring DXR into the mix for no apparent reason. Stand by your own words; don't be a coward.

None of what you said answers the question I asked you. Most of it doesn't even make sense. It's just a word salad with a bunch of incoherent statements.

Since you believe that Turing is capable of AC when Pascal is not, please highlight differences in Turing vs Pascal that specifically enabled Turing to do async compute when Pascal could not.

It's actually a little surprising you put so much weight behind async compute. Even on AMD hardware, AC alone gives maybe a 10-15% performance uplift, provided workloads have been constructed favourably. While definitely not negligible, it's nowhere near enough to make it a game changer. And that's ignoring the nature of DX12/Vulkan as low-level APIs.
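To make the API side of that question concrete: the submission pattern for async compute is identical on Pascal and Turing. A minimal sketch (continuing the queue-creation example above; identifiers are illustrative, and the queues, command lists and fence are assumed to already exist) of overlapping a compute dispatch with graphics work, synchronised by a fence:

#include <d3d12.h>

// The same code runs on Pascal, Turing and GCN; any difference between them
// shows up only in how much the two queues actually overlap on the hardware.
void SubmitOverlappedWork(ID3D12CommandQueue* computeQueue,
                          ID3D12CommandQueue* gfxQueue,
                          ID3D12CommandList* computeList,
                          ID3D12CommandList* gfxList,
                          ID3D12Fence* fence,
                          UINT64& fenceValue)
{
    // Kick off the compute work; it is free to run alongside whatever the
    // graphics queue is already doing.
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, ++fenceValue); // fence reaches this value when compute finishes

    // The graphics queue stalls only here, right before it consumes the results.
    gfxQueue->Wait(fence, fenceValue);
    gfxQueue->ExecuteCommandLists(1, &gfxList);
}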
 
16 GB of HBM2 costs a lot of money - around $300 just for the memory.

Irrelevant. Fancy tech alone is not an excuse for a high price if said tech doesn't give any meaningful advantages versus alternatives.

Mike89 said that the 2070S at $499 is price gouging, when at the same time we have a card that costs $200 more for the same level of performance, which is what I was pointing out.
 
With the benchmarks finally out, Navi is DOA if they don't reduce the price. The RTX 2070 Super is basically a GTX 1080ti with built-in ray tracing for $499. AMD, your move.


GTX1080ti is for DX11 games. It doesn't have anything new or exciting and is basically wasted money for gamers today. Insignificant.

Notice how these benchmarks aren't using frametimes...? It's because the RTX 2070 can't compete with the 5700 XT... which means Superturd.

Except for the fact that the average gamer doesn't care about anything new or exciting; the 1000 series is looking like it was a fantastic bargain for continuing to play the latest games at good FPS in 2019 while Nvidia gets the greed worked out of its system.
 


Don't paint me as a liar.

I never said that Pascal can not do DX12. I have repeatedly said that it can not do what Turing can do: asynchronous compute and full DX12 compliance. It seems a good many people have issues with facts.

I'll take a $499 2070Super over a 1080ti. But I wouldn't upgrade my 1080ti for any SUPER. I'd wait until 2020.
 

Do you not understand how language works? Let's go through it again. Cheap Scotch stated that the 1080Ti is a DX12 card. You disagreed with his statement. That is the equivalent of saying the 1080Ti is NOT a DX12 card. That's how disagreeing with a positive statement works.

What's even better, you have just said that Pascal lacks full DX12 compliance. If that were true, then by definition Pascal would NOT be a DX12 architecture, period. Strike two.

To top all of that, many people have challenged you to back up your claims with examples of that "real DX12 code" you speak of. I asked you to point to the specific changes between Turing and Pascal which enabled Turing to do async compute, as you put it. Dead silence. I would also love to see your proof of why Pascal isn't fully DX12 compliant. One can dream :)

Let's recap: you dance around semantics, you use vague and sometimes outright incoherent statements, you throw asinine claims left and right, you don't provide any proof of said claims even when challenged, and you bring unrelated arguments into the conversation.

The above tactics are hallmarks of a liar or, in a more general sense, a dishonest person. So if you don't want to be painted as a liar, don't act like one. Simple.
 
Looking back at the last 6-9 months, early adopters of RTX cards have gotten pretty screwed over by Nvidia. People trusted Nvidia and purchased cards that were objectively no faster than their previous generation's counterparts (some of which were 30 months old), primarily due to the promise of ray tracing being a game changer within the industry. They have faced extensive QA issues with early versions of cards that never should have been shipped in the first place. They have paid $350 (minimum) for an "enthusiast" RTX 2060 which inexplicably only had 6GB of VRAM.

Despite the lousy RTX launch, I am glad to see Nvidia making these changes. AMD has finally started to threaten Nvidia's dominance in the high-end card market, and consumers should ultimately benefit from it.
RTX cards performed better than their 10 series counterparts at launch. People were just upset it wasn't a huge milestone of performance gains. If people would watch Linus Tech Tips, etc., and do some research, they'd know what they're buying. Nvidia should just hold off on releasing products until they're at full potential. AMD isn't knocking on their door yet to pressure them, so there's literally no reason to rush cards out at this moment except to make money off people who want the newest things, and then release a better, refreshed version of the same card after people like me have already paid.
 
Hello there, troll... you say that Pascal chokes on DX12 code... care to provide some evidence?

And we've now received Navi benchmarks... hurray, they almost hit 2070 Super levels... still can't touch the high end...
 
I'm with you there. It is definitely cheaper to buy games on PC with even a little bit of patience.

Do console games still cost more on average than PC games? If so, I hope that is also included in the comparison.

I really have to deeply desire a game before I spend over $15 on a PC title. Oh, and I use my machine for more than gaming, so for me it is not a straight-up price comparison. For me the console price would be entirely additional.
 
It's not silly, it's not a myth, and I wasn't even discussing that train of thought. I simply stated that I've spent that much, and they would not. I spend that much for the performance gains...which I stated clearly twice. Beyond that, it is not likely at all you'll get Xbox One X and PS4 Pro performance for $400 on PC. That is just not true, and it's not a myth.

Most of what else you say, I agree with.

But it's just a fallacy that you need to spend $2000 or even $1000 to have a "console-like" experience (performance, visual fidelity) on PC, so please stop spreading this silly myth. You can build a gaming PC capable of that for a price similar to a console's. Educate your friends, because it's evident they are misinformed on the subject.

Now, you absolutely can spend as much as you wish on a PC. Isn't it awesome that you have the option to do so? For example, you can upgrade your rig at any time without having to buy a completely new system every 3-5 years. With a decent foundation - CPU + mobo + RAM - you can get through 2-3 console gens only having to upgrade your GPU to keep your rig relevant.

Games are cheaper on PC; there are more places to buy them, which means more competition, and that means more sales, deals, etc. PC games are also more flexible - mods - making them last a lot longer. PC gamers are also very adept and fast at fixing bugs/glitches which devs are too inept to fix themselves. Let's also not forget that you don't have to pay to play online.

You get to choose which controller you want, not the console manufacturer. And so on...

So in the long term, I'd wager you'll spend less on a gaming PC if you're smart about it. And if prices of computer components keep rising, so will the cost of consoles. That, or they will fall even farther behind PCs.

Consoles do have one thing going for them - exclusives, at least some of them. The number of sacrifices you have to make for the option to play them, though, is just too great. Performance alone is reason enough not to consider consoles at all. If you can live with it, then that's fine, I guess. It's weird, though. With a PC, you have the option to create a gaming machine exactly as you want it - size, price, performance, controllers, etc. You lose all of that freedom and upgrade potential, and you put so many restrictions on yourself, just for a bit more convenience and a few solid exclusive titles. It's a very, very steep price to pay.
 

Yeah, sorry, it was a knee-jerk reaction on my part, and I know you haven't claimed that one has to spend $2000 for a console-like experience. There are many instances, however, where such things are being spread, thus my reaction.

I will also concede that using only new parts it would be hard to match an X1X or PS4 Pro, but I was thinking about a solution based on an end-of-lease enterprise desktop with a beefed-up PSU and GPU, like so - Link

I've built a few similar setups for friends and clients, and they sure as hell can give you the performance of a console for a similar amount of money :)
 