Nvidia's RTX 2060 shows up in gaming benchmark database, almost matches GTX 1070

midian182

Something to look forward to: Now that we’re fully clued up on Nvidia’s RTX 2080 Ti, 2080, and 2070, the focus has turned to what will presumably be the next entry in its 2000-series lineup: the RTX 2060. Little is known about the card, but we’ve just been given an idea of its performance, thanks to Final Fantasy XV.

The listing was first spotted by Tom’s Hardware. The game’s benchmark database has featured unreleased graphics cards in the past, including AMD’s recent Radeon RX 590.

The benchmark was run at a 3840 x 2160 resolution with the graphics set to ‘High Quality.’ The RTX 2060 scored 2589 points, making it 30.43 percent faster than its Pascal equivalent, the GTX 1060 (6GB), which scored 1985. The upcoming card is only around 6 percent slower than the GTX 1070, which remains an excellent product that offers great value for money. The RTX 2060 is also faster than the RX 590.
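For anyone who wants to sanity-check those figures, here is a minimal sketch (Python, purely illustrative) of how the percentages fall out of the raw scores. The GTX 1070's score isn't listed in the database entry, so the last line only back-solves an approximate value from the stated 6 percent gap:

```python
# Back-of-the-envelope check of the percentage figures quoted above.
# Scores are the Final Fantasy XV benchmark points reported in the article
# (3840 x 2160, 'High Quality' preset).
rtx_2060 = 2589
gtx_1060 = 1985

uplift = (rtx_2060 / gtx_1060 - 1) * 100
print(f"RTX 2060 vs GTX 1060 (6GB): +{uplift:.2f}%")  # ~ +30.43%

# The article puts the 2060 roughly 6 percent behind the GTX 1070; the 1070's
# score isn't in the listing, so this is only an inferred ballpark figure.
gtx_1070_estimate = rtx_2060 / (1 - 0.06)
print(f"Implied GTX 1070 score: ~{gtx_1070_estimate:.0f} points")
```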

It’s important to note that the 2060 used in these benchmarks was most likely an engineering sample, so the final product will likely have a slightly better score. But that might not matter if Nvidia prices it too high—something the company’s been accused of doing with the RTX 2070 and 2080s.

As a more mid-range card, it’s not certain whether the 2060 will feature the same ray tracing tech found in the 2000-series’ high-end offerings. If it doesn’t, it might not even carry the RTX moniker.

The potential release date of the 2060 is also a mystery, but it could take a while to arrive. Nvidia has an excess of GTX 1060 cards following the sharp decline of the cryptomining industry, and it’s unlikely to launch their Turing equivalent until the majority of this stock has been sold off, which could take three to six months.


 
Here's hoping that they don't jack up the price like they did with the other RTX cards. Although I do wonder if it will be RTX at all...
 
Nvidia had to bring new products to market when the actual gaming community "wasn't demanding them".

Steam’s hardware survey has already told us that the average gamer is not using more than an i5 and a GTX 1060.

The best all-around value-for-performance card is the non-inflated GTX 1080 Ti.

Problem is, the mining craze sent prices sky high and now that it's over, lightly used hardware is being dumped on the market to compete with the RTX models.

Nobody asked for "ray tracing". Neither developers nor consumers really care about it and probably won't miss it.

The mass markets want low-end games such as Fortnite, Minecraft and Overwatch... They aren't demanding "Crysis" tech demonstrators any more.

Consumers wanted cheaper cards with faster, more efficient performance.

I'm disappointed with my RTX2080Ti's price (cost more than my Titan Xpascal) and the 2080 and the 2070. The way I see it, the latter two cards should be designed to rival the 1080Ti and the 2070Ti in performance.
 
The top 15 slots are owned by Nvidia and the $1000 2080Ti outperforms the $3000 Titan V.

I wonder if we'll see an RTX Titan?
 
Get rid of the Tensor cores, price it sensibly and you'll have a winner.

Whoever is making the decisions on RTX pricing is living in cloud cuckoo land. Get your head examined and come back to the real world.

This is exactly the market I think AMD are aiming for with Navi. No RTX effects, which a mid-range card like this probably couldn't handle anyway. A smaller chip without Nvidia's tensor cores, but with gaming performance like a GTX 1070, or perhaps even higher.

Priced $80-100 less than the RTX 2060, we'd see Nvidia scramble around and respond by lowering prices. AMD would have a major winner on their hands.

Reminds me a little of the GTX 480 and how AMD had the Radeon 5870. The GTX 480 was undeniably faster by about 10-15 percent, but it cost $120 extra, or over 30 percent more. That was one of AMD's biggest wins against Nvidia. Been a long time though...
 
Nobody asked for "ray tracing". Neither developers nor consumers really care about it and probably won't miss it.

Actually, someone did ask for tensor cores: the AI industry and the cloud computing market. I don't think the 20-series designs were ever intended for gamers at all. I think the die was designed for industry, and someone inside Nvidia said, "How do we leverage this GPU design we have that's A: huge, and B: 'wastes' huge swaths of die space on 'cores' that currently have no use for game designers or gamers?" And this is the result. A "parts bin" special where a giant chunk of die real estate is "wasted" on cores we either won't ever use, or at least won't be used to any great extent for a generation or two.

Hopefully they currently have another generation in the pipeline that uses more of the die space on actual compute (CUDA) cores for actual performance gains, without similar price increases. If not, then many of us will be waiting for a while.

AMD has more or less said that their next *2* generations of GPUs are not going to compete at the high end. So that leaves Intel's new card as the only possible competitor, and we don't yet know what markets they plan on targeting. It could be a mostly AI-focused product, a midrange-focused product, or, if we are really lucky, a cutting-edge design that allows them to compete at the top end with Nvidia in both performance and price. I guess we will find out next year.
 
Actually someone did ask for tensor cores, the AI industry and the cloud computing market.


Just like Nvidia builds special GPUs for the workstation market, they could have perfected tensor cores and ray tracing on their workstation GPUs rather than pushing them on the gaming industry at a time when the demand IS NOT there.
 
Most likely, if they don't include the Tensor cores, it may simply be called the GTX 2060. If Nvidia isn't stupid with the pricing and it performs as well as or better than a GTX 1070 (hell, they can even push it into 1070 Ti territory), no one would complain. The entire RTX line so far seems to have been rushed, and the pricing scheme is abysmal.

Price it reasonably, make sure the artifacting issues have been solved, and cater to what your consumers want (I'm fine with ray tracing on next-gen cards), and you'll have a hit on your hands.
 
Just like Nvidia builds special GPUs for the workstation market, they could have perfected tensor cores and ray tracing on their workstation GPUs rather than pushing them on the gaming industry at a time when the demand IS NOT there.

Pretty much my opinion. The demand was there for an upgraded GPU, and there could even have been demand for those new features, but just not at that price. Titan GPU prices for the 1080 Ti successor and a 30% performance boost? Ouch. That's a tough pill to swallow, and Nvidia should have expected it.
 
The top 15 slots are owned by Nvidia and the $1000 2080Ti outperforms the $3000 Titan V.

I wonder if we'll see an RTX Titan?

The Titan V card already has tensor cores so it is the RTX Titan. But yes, given Nvidia's previous Titan card releases we will likely see another Titan card using Turing this time.
 
The Titan V card already has tensor cores so it is the RTX Titan. But yes, given Nvidia's previous Titan card releases we will likely see another Titan card using Turing this time.

I don't think most people really consider the V to be in the "Titan" class the same way they do the others (the Titan, Titan X, etc.), simply because it was released as an over-the-top, halo-class product mostly aimed at the workstation market (since it cost *$3,000* retail).

In fact, I think part of the reason behind the less-than-stellar reception of the new cards is that they set expectations for performance and pricing by reusing the 70, 80, and 80 Ti suffixes. Had they released the RTX series as something "else" (like the Titan class) and not even named the cards xx70, xx80, and xx80 Ti, then I suspect they probably would have gotten less backlash overall.

A big part of the negative response was the fact that you're paying a 40-50% premium for a 30% performance gain, when Nvidia had accustomed gamers to each successive generation delivering not only a larger performance improvement but also price parity, or even a price drop. So if $500 bought me a 100 fps GPU this gen, then the new GPU should get me 130-150 fps for the same $500. Instead, what we got was parity at best, or an increase at worst. The gamer who couldn't afford 1080 Ti performance with Pascal *still* can't afford 1080 Ti performance, and the gamer who wanted the next step up from 1080 Ti performance in the same price bracket was surprised to find that the new next step up had been scaled up to Titan MSRPs.

Basically, TL;DR: performance per dollar hasn't changed between Pascal and Turing. And that sucks. If the future of discrete GPUs is that a doubling of performance is going to come with a doubling of cost, well, that's a bleak future for gaming.
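To make that performance-per-dollar point concrete, here is a tiny illustrative sketch using the hypothetical $500 / 100 fps figures from the comment above; the "Turing" line is just a made-up placeholder for "more performance at proportionally more money", not any real card's numbers:

```python
# Rough illustration of the performance-per-dollar argument above.
def fps_per_dollar(fps: float, price: float) -> float:
    return fps / price

pascal   = fps_per_dollar(100, 500)   # what $500 bought last generation: 0.20 fps/$
expected = fps_per_dollar(140, 500)   # what gamers hoped for (~130-150 fps at $500): ~0.28 fps/$
turing   = fps_per_dollar(130, 650)   # hypothetical: ~30% more performance at ~30% more money

print(f"Pascal:   {pascal:.3f} fps per dollar")
print(f"Expected: {expected:.3f} fps per dollar")
print(f"Turing:   {turing:.3f} fps per dollar")  # lands back at ~0.20, i.e. no real change
```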
 