Nvidia debuts the Titan RTX as the world's most powerful desktop GPU

You really think nVidia cares who is actually buying the cards? That would be a first. Remember when cryptominers were buying up all of the GPUs (both AMD & nVidia), & nVidia made sure to produce GPU models that reduced or eliminated their usefulness for cryptomining so that gamers could still buy cards? Oh, wait, that's right, they didn't do that. As long as they get their cash, nVidia doesn't care who buys the cards or whether they're "affordable" for the customers.

What I find strange about the specs is that the Titan RTX appears to be identical to the Quadro RTX 6000 (just a slight dip in TDP, & it trades the 4th DisplayPort & VirtualLink for an HDMI port & USB Type-C)...but the Titan RTX lists "510 GT/s" (GigaTexels/sec) as its performance figure, while the 6000 lists "10 GigaRays/sec". Not sure why they wouldn't use the same unit if the figures are meant to be compared against each other. But based on those figures, & the figures we have for the GeForce cards, I came to the following conclusions:
  • In terms of equivalency, you get the following:
    • GeForce RTX 2070 ~~ Quadro RTX 4000
    • GeForce RTX 2080 ~~ Quadro RTX 5000 (the latter has 48 Tensor cores vs. 46 for the former, but they otherwise seem identical)
    • GeForce Titan RTX ~~ Quadro RTX 6000 (RTX 2080TI only has 68 Tensors vs. the 72 these cards have, but is otherwise nearly identical in specs)
    • Quadro RTX 8000 is technically just an RTX 6000 with double the VRAM (i.e. already able to act like two 6000s joined over NVLink)
  • Currently, the maximum performance for Ray Tracing seems to be 10 GigaRays/sec. All 4 of the top cards -- RTX 2080TI, Titan RTX, Quadro RTX 6000, & Quadro RTX 8000 -- have identical performance (10 GR/s), despite the 2080TI having 4 fewer Tensor cores. And since there are big differences in VRAM sizes (11 GiB for the 2080TI, 24 GiB for the Titan & 6000, & 48 GiB for the 8000), past a certain point VRAM appears to have zero effect on ray-tracing performance.
  • If the Titan RTX is truly geared towards "content creators" vs. gamers, then the assumption is that it's the "budget" card. But it would make little sense for them to offer a $6,400 GPU (Quadro RTX 6000), then say they're going to offer a $2,500 GPU with identical performance...especially since the Titan cards of previous generations were just Quadro cards geared towards non-content creators (i.e. gamers that wanted bragging rights). And no, this isn't a "well, they've always done it this way" opinion; it's a "it makes no sense from a business perspective to offer your customers Product X at a given price, then 3 months later advertise Product Y with the same performance for roughly 1/3 the price" observation. That tends to make the customers who bought Product X feel like they got the short end of the stick, & think three times about buying from that manufacturer in the future.
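To keep the comparison straight, here is the same data in one place; a minimal sketch (Python) using only the figures quoted in this thread, so treat them as claims rather than verified benchmarks (the cores are labeled RT cores, per the core-count clarification made later in the thread):

```python
# Spec figures as quoted in this comment thread -- claims, not independently
# verified numbers. Both the Quadro RTX 6000 and the Titan RTX are also said
# to carry 576 Tensor cores alongside the RT cores counted here.
cards = {
    "RTX 2080 Ti":     {"rt_cores": 68, "vram_gib": 11, "gigarays": 10},
    "Titan RTX":       {"rt_cores": 72, "vram_gib": 24, "gigarays": 10},
    "Quadro RTX 6000": {"rt_cores": 72, "vram_gib": 24, "gigarays": 10},
    "Quadro RTX 8000": {"rt_cores": 72, "vram_gib": 48, "gigarays": 10},
}

# Despite a 4x spread in VRAM (11 GiB to 48 GiB), every card quotes the same
# ray-tracing figure -- the basis for the conclusion above that, past a
# certain point, VRAM quantity has no effect on the quoted GigaRays number.
assert {c["gigarays"] for c in cards.values()} == {10}
assert cards["Quadro RTX 8000"]["vram_gib"] == 2 * cards["Quadro RTX 6000"]["vram_gib"]
```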
I don't think I've ever seen someone this mad before...

Look, Nvidia likes money, and if they don't have products for customers to buy, then they lose market share. Losing market share means losing money; this isn't a difficult concept.
 
"...world's most powerful desktop GPU"

- for now.

Yeah, I was going to ask the over-cliched "Yeah, but can it run Crysis?" too... but I'm cliched myself...

Too bad there are no new games to take that infamous title from Crysis.
 
Too bad there are no new games to take that infamous title from Crysis.
Metro Last Light grinds my computer into the ground, but it may be overtaken in a few months' time with the release of Metro Exodus.
 
Literally no one needs 8K. Not even movie studios.
Maybe the Japanese market; they're trying to push out 8K broadcasts for the 2020 Olympics, so that market might be keen to find hardware to create content with.
 
If you'd read the article, you would know that:
1. The Titan RTX states 11 GigaRays, not 10.
2. Tensor cores do not calculate raytracing; that's the job of the RT cores.
3. VRAM quantity has nothing to do with raytracing speed, nor does system memory quantity, in any application. It may bottleneck the I/O, but the calculations are done by the processors, not the memory - and memory SPEED is not the same as QUANTITY.

Please read up on the topic next time.
  1. On nVidia's Titan RTX site, there is ZERO mention of "11 GigaRays". Not in the brief specs, not in the full specs. The number "11" doesn't even show up anywhere on the site, except in the press release...& we all know how reliable press releases are in comparison to real-world performance. But I'm sure we would all love to hear your experience-based, engineering perspective on how two GPUs can have identical specs, yet the cheaper card supposedly puts out 10% more performance.
  2. Yes, I made a small mistake there...but it was in the labeling of the cores. The Quadro RTX 6000 & Titan RTX both have 576 Tensor cores and 72 RT cores...I just mislabeled the RT cores as Tensors, but the cards still have identical core counts. Besides, per nVidia (see here), not only do you need both the Tensor & RT cores for real-time raytracing, but their current top limit is 10 GigaRays/second...not 11. This is not Spinal Tap, remember; this is the real world.
  3. I said NOTHING about memory speed. And if the VRAM isn't being used in the raytracing, then what, pray tell, is being used? PC RAM? Disk cache on the SSD/HDD? That'd be a heck of a thing, having your raytracing performance gimped because you only had 16GB of system RAM instead of the 128GB "suggested". But again, even if raytracing truly depended solely on the base system providing the data & not the VRAM, then why even bother with the NVLink bridges for the Titan or Quadro RTX cards? Heck, why even bother putting so much VRAM on them at all, if it has zero effect? They could save a whole lot of production money by just giving them 2GB of GDDR3 VRAM & still make a ton of money. It makes no sense to put so much VRAM on these GPUs, & then also advertise a feature that lets you combine them in parallel...unless VRAM does have an effect on performance.
And BTW...yes, I read the article...but I also did additional research to verify it, rather than simply relying on a single source of information. You might want to try it yourself next time.
 
So previously you needed to spend about the same money on a slower Quadro card to get a 24GB frame buffer, which makes this reasonable value if that's what you were looking for.

However I have no doubt that tech forums everywhere will be full of negative comments whining that they can’t afford it.

Reminder, just because you can’t afford it doesn’t mean others can’t and that it won’t sell.

Right now is a very good time to be buying a GPU, with tumbling prices on the 10 series; so what if it's not the latest and greatest, if value is what you're after?
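On the 24GB frame-buffer point above, a back-of-the-envelope sketch of why that capacity matters for the 8K editing use case the article mentions. This assumes uncompressed RGBA8 frames; real pipelines use compressed or higher-bit-depth formats, so the numbers are illustrative only:

```python
# Uncompressed 8K UHD frame: 7680 x 4320 pixels, 4 bytes/pixel (RGBA8).
FRAME_BYTES = 7680 * 4320 * 4            # 132,710,400 bytes (~126.6 MiB)

def frames_in_vram(vram_gib: int) -> int:
    """Whole uncompressed 8K frames fitting in a given VRAM budget,
    ignoring driver/application overhead."""
    return (vram_gib * 2**30) // FRAME_BYTES

print(frames_in_vram(11))   # 11 GiB card (e.g. RTX 2080 Ti) -> 88 frames
print(frames_in_vram(24))   # 24 GiB (Titan RTX / Quadro RTX 6000) -> 194 frames
```

Roughly doubling the frame buffer doubles how much 8K footage can be held on-card at once, which is where the Titan RTX undercuts the similarly specced Quadro.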

Clearly you haven't seen the prices of the 1080ti lately. It's actually cheaper to buy a 2080 right now.
 
I guess everyone predicting the RTX 2080 Ti was replacing the Titan was wrong. Whatever it takes to justify insane pricing, and now this card drops. Enjoy the monopoly; Nvidia just built a hotel!
 
"5 million content creators using PCs that could greatly benefit from real-time 8K video editing" - I refuse to believe that.
Anyway, the November Steam survey shows the top 3 GPUs are all sub-$250 cards; the $450 GTX 1070 is 4th, and the $550 GTX 1080 is 8th. RTX cards are nowhere to be seen.

Comparing market share of products on the market for a month with ones that have been available for well over 2 years is completely meaningless. Please check back in 2 years for some actually useful data.
 
Clearly you haven't seen the prices of the 1080ti lately. It's actually cheaper to buy a 2080 right now.
I have seen them. And I have also seen the 2080 selling for less than the 1080ti MSRP.

Of course it’s still a good time to buy a GPU, better than it’s been for over a year. Unless of course for some reason you feel you are entitled to a massive price cut despite no competition in the marketplace.
 
They CLAIM it's for content creators, researchers, etc... yet they decide to release it just in time for the holiday season.... hmmm....

Or budgets need to be used up before the year's end. It's not for gamers. If one GPU is faster and costs less than more cores for FEA or CFD, it has already paid for itself.
Most companies' year-end is June, not January... sorry for the necro, but only noticed your reply now after the Radeon VII review :)
 
The amount of VRAM is impressive; my PC has 32GB, lol. Maybe they could bring out a gaming RTX Titan with 12GB of VRAM for $1,300 and drop the prices of the rest of the stack :)

My first PC (IBM/x86) came with 1MB VRAM, 8MB RAM, and a 1.1GB HDD. It was 2,000 GBP in 199(6?); multiply by ~1.4 for USD.
 