Nvidia debuts the Titan RTX as the world's most powerful desktop GPU

Greg S

Bottom line: The Titan RTX brings new possibilities for AI researchers, content creators, and data scientists. The Turing architecture will help speed up development and reduce compute times for those solving difficult problems.

After teasing the Titan RTX on social media through well-known influencers, Nvidia has officially pulled the sheets off its most powerful desktop GPU yet. Built on the Turing architecture and nicknamed the T-Rex, the card boasts more than 130 teraflops of compute performance for deep learning applications and 11 GigaRays per second of ray-tracing throughput.

To churn out all of those calculations, 576 Turing Tensor Cores sit on the die, while an additional 72 Turing RT Cores handle all ray-tracing work. A cool 24GB of GDDR6 memory provides up to 672GB/s of bandwidth, and it's that combination of capacity and bandwidth that makes real-time 8K video editing possible.
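
For anyone curious where the 672GB/s figure comes from, it falls straight out of the card's 384-bit memory interface and 14Gbps GDDR6 modules. A quick back-of-the-envelope check in Python:

```python
# Back-of-the-envelope check of the Titan RTX's quoted memory bandwidth.
# Specs: 384-bit memory interface, 14 Gbps effective data rate per pin.
bus_width_bits = 384
data_rate_gbps = 14

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 672 GB/s
```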

It turns out that the Titan RTX is very similar to the Quadro RTX 6000 in terms of available features. Like the Quadro series, the Titan RTX runs FP16 math with FP32 accumulation at full rate, whereas the GeForce series is limited to half rate for FP16 with FP32 accumulation. The GeForce RTX 2080 Ti is also a powerful card, but this key difference sets the Titan RTX apart.
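
For the unfamiliar, here is a minimal sketch of the kind of workload where that distinction matters, assuming a recent PyTorch build and a CUDA-capable RTX card; PyTorch's autocast is used purely as a stand-in for any mixed-precision training loop, since under the hood it dispatches FP16 matrix multiplies that the hardware accumulates in FP32:

```python
import torch

# Mixed-precision sketch: inputs are cast to FP16 for the Tensor Cores,
# while partial products are accumulated in FP32 -- the mode the Titan RTX
# runs at full rate and GeForce cards run at half rate.
device = torch.device("cuda")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

with torch.autocast(device_type="cuda", dtype=torch.float16):
    c = a @ b   # FP16 multiply, FP32 accumulate on the Tensor Cores

print(c.dtype)  # torch.float16 -- results are stored back in half precision
```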

For those who need more performance than a single GPU can provide, a pair of Titan RTX cards can be linked at 100GB/s via NVLink. Gamers need not worry about buying two of these cards, though; AI researchers are the intended audience for multi-GPU setups. Doubling the memory of previous-generation offerings will allow significantly larger data sets to be handled efficiently.
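
As a rough illustration of what the link enables, here is a sketch assuming a machine with two cards visible to PyTorch; when peer access is available, device-to-device copies like this bypass host memory (and ride NVLink where the cards are bridged):

```python
import torch

# Sketch: direct GPU-to-GPU transfer between two linked cards.
assert torch.cuda.device_count() >= 2, "needs two GPUs"
print("peer access:", torch.cuda.can_device_access_peer(0, 1))

x = torch.randn(1024, 1024, device="cuda:0")
y = x.to("cuda:1")   # device-to-device copy, over NVLink when bridged
print(y.device)      # cuda:1
```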

Nvidia estimates that there are at least 5 million content creators using PCs who could greatly benefit from real-time 8K video editing. For those not yet ready to make the jump beyond 4K, ray-tracing can still produce some impressive animations and special effects that would otherwise take too long to render with CPU computation.
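
To put "real-time 8K" in memory terms: a single uncompressed 8K frame at 8 bits per channel already weighs in at over 130MB, so even a small working set of frames eats into a 24GB buffer quickly. The arithmetic:

```python
# Rough memory footprint of uncompressed 8K video frames (RGBA, 8-bit).
width, height, bytes_per_pixel = 7680, 4320, 4

frame_mb = width * height * bytes_per_pixel / 1e6
print(f"One 8K frame: {frame_mb:.0f} MB")                     # ~133 MB
print(f"One second at 30 fps: {frame_mb * 30 / 1e3:.1f} GB")  # ~4.0 GB
```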

Here is the part where your wallet will begin to cry. The Titan RTX will be available in the US and Europe later this month for the bargain price of $2,499. Considering that researchers and professional content creators are the main targets though, the price is not likely to be a problem.


 
And even though this is marketed for creators, we will still get the onslaught of "what a ridiculous price, you can get a GPU that has similar gaming performance for much less!!" comments in 3.....2.....1

( or in a thread where you show gaming benchmarks )
 
The TITAN is already available in nvidia's 20 series lineup? This is whack. This never happens. Usually these cards come out over the course of a year, not within a few months. This just screams to me that nvidia has a new lineup of cards coming in the near future (the next 1-1.5 years) that features HBM2.
 
The amount of VRAM is impressive; my PC has 32GB lol. Maybe they could bring out a gaming RTX Titan with 12GB of VRAM for $1,300 and drop the prices of the rest of the stack :)
 
And even though this is marketed for creators, we will still get the onslaught of "what a ridiculous price, you can get a GPU that has similar gaming performance for much less!!" comments in 3.....2.....1

( or in a thread where you show gaming benchmarks )
So because it's marketed towards creators, which is debatable, it's justified to cost an absorbent amount of money?
 
And even though this is marketed for creators, we will still get the onslaught of "what a ridiculous price, you can get a GPU that has similar gaming performance for much less!!" comments in 3.....2.....1

( or in a thread where you show gaming benchmarks )
So because it's marketed towards creators, which is debatable, it's justified to cost an absorbent amount of money?

Insane prices haven't hurt the Quadro line, so... :p
 
$/£2K Titan.....

And even though this is marketed for creators, we will still get the onslaught of "what a ridiculous price, you can get a GPU that has similar gaming performance for much less!!" comments in 3.....2.....1

( or in a thread where you show gaming benchmarks )
So because it's marketed towards creators, which is debatable, it's justified to cost an absorbent amount of money?

Insane prices haven't hurt the Quadro line, so... :p

Because they are targeted towards industry professionals who can justify the expense from a workflow/productivity point of view.
 
Literally no one needs 8K. Not even movie studios.

8K can be downsampled to 4K. Movie studios probably shoot in 8K and then downsample it to whatever they want.

Just because you don't need it doesn't mean others won't.
 
"Considering that researchers and professional content creators are the main targets though, the price is not likely to be a problem."

Because Pros and Researchers just love to waste money! And how do you think they claim back these costs ... by increasing their prices to end users.
 
High prices are not good for academic research in Canada. Post-doc or doctoral research relies on grants to fund the research. They get no money anywhere else and there is no one else to pass any costs onto. If you're an academic PhD (BS piled higher and deeper we used to say) and you are not doing research your career will go down the tubes. Time to look for a job in gov't research or private industry.
 
So previously you needed to spend about the same money on a slower Quadro card to get a 24GB frame buffer, which makes this reasonable value if that's what you were looking for.

However I have no doubt that tech forums everywhere will be full of negative comments whining that they can’t afford it.

Reminder: just because you can't afford it doesn't mean others can't, or that it won't sell.

Right now is a very good time to be buying a GPU, with tumbling prices on the 10 series; so what if it's not the latest and greatest, if value is what you're after?
 
What people don't understand about the price is that it's intended to keep the card out of the hands of those who don't actually need it. Look at the 2080 Ti: it's sold out nearly everywhere. Availability is a big issue for these cards, so the price locks out people who can't see the point in paying double for a 5-10% gain in gaming performance. However, the price is justified for people who get 10x the performance for double the price.

If Nvidia didn't figure out how to keep these cards on the shelf for professionals, those buyers would likely go to AMD, meaning Nvidia would lose market share. Losing market share is worse than having gamers mad that they can't afford it.
 
And even though this is marketed for creators, we will still get the onslaught of "what a ridiculous price, you can get a GPU that has similar gaming performance for much less!!" comments in 3.....2.....1

( or in a thread where you show gaming benchmarks )
So because it's marketed towards creators, which is debatable, it's justified to cost an absorbent amount of money?
That must be the old fashioned paper kind of money, but even that wasn't really very absorbent.
 
What people don't understand about the price is that it's intended to keep the card out of the hands of those who don't actually need it. Look at the 2080 Ti: it's sold out nearly everywhere. Availability is a big issue for these cards, so the price locks out people who can't see the point in paying double for a 5-10% gain in gaming performance. However, the price is justified for people who get 10x the performance for double the price.

If Nvidia didn't figure out how to keep these cards on the shelf for professionals, those buyers would likely go to AMD, meaning Nvidia would lose market share. Losing market share is worse than having gamers mad that they can't afford it.

You really think that nVidia cares who is actually buying the cards? That would be a first. Remember when cryptominers were buying up all of the GPUs (both AMD & nVidia), & nVidia made sure they specifically produced some GPU models that reduced or eliminated their usefulness for cryptomining so that gamers could buy cards from nVidia? Oh, wait, that's right, they didn't do that. As long as they get their cash, nVidia doesn't care who buys it or whether it's "affordable" for the customers.

What I find strange about the specs is that the Titan RTX appears to be identical to the Quadro RTX 6000 (just a slight dip in TDP, & loses the 4th DP spot & VirtualLink for an HDMI port & USB Type C)...but the Titan RTX lists "510 GT/s" (GigaTexels) as its performance figure, while the 6000 lists "10 GigaRays/sec". Not sure why they wouldn't use the same unit for that performance if they're meant to compare to each other. But... based on those figures, & the figures we have for the GeForce cards, I came to the following conclusions:
  • In terms of equivalency, you get the following:
    • GeForce RTX 2070 ~~ Quadro RTX 4000
    • GeForce RTX 2080 ~~ Quadro RTX 5000 (latter has 48 Tensor cores vs. 46 for the former, but otherwise seem identical)
    • GeForce Titan RTX ~~ Quadro RTX 6000 (RTX 2080TI only has 68 Tensors vs. the 72 these cards have, but is otherwise nearly identical in specs)
    • Quadro RTX 8000 technically is just an RTX 6000 with double the VRAM (i.e. able to already act like two 6000s using VirtualLink)
  • Currently, the maximum performance for Ray Tracing seems to be 10 GigaRays/sec. All 4 of the top cards -- RTX 2080TI, Titan RTX, Quadro RTX 6000, & Quadro RTX 8000 -- have identical performance (10 GR/s), despite the 2080TI having 4 fewer Tensor cores. And since there are big differences in VRAM sizes (11 GiB for the 2080TI, 24 GiB for the Titan & 6000, & 48 GiB for the 8000), past a certain point VRAM appears to have zero effect on ray-tracing performance.
  • If the Titan RTX is truly geared towards "content creators" vs. gamers, then the assumption is that it's the "budget" card. But it would make little sense for them to have a $6,400 GPU (Quadro RTX 6000), then say they're going to offer a $2,500 GPU with identical performance...especially since the Titan cards of previous generations were just Quadro cards geared towards non-content creators (i.e. gamers that wanted bragging rights). And no, this isn't a "well, they've always done it this way" opinion, this is a "it makes no sense from a business perspective to offer your customers Product X at a given price, then 3 months later advertise Product Y with the same performance for 1/3 the price". That tends to make the customers that bought Product X feel like they got the short end of the stick, & think three times about buying from that manufacturer in the future.
 
Once again, NVIDIA releases a card that pretends it isn't for gamers, yet appeals to them nonetheless...

They CLAIM it's for content creators, researchers, etc... yet they decide to release it just in time for the holiday season.... hmmm....

Basically, we take our Quadro line GPU that wasn't quite binned properly, slap on some cool stuff, cut the price in half and call it a Titan...

If you're not a gamer, it's actually a great deal... if you ARE a gamer, then you're spending a LOT of money for the privilege of boasting "best gaming rig ever".
 
Currently, the maximum performance for Ray Tracing seems to be 10 GigaRays/sec. All 4 of the top cards -- RTX 2080TI, Titan RTX, Quadro RTX 6000, & Quadro RTX 8000 -- have identical performance (10 GR/s), despite the 2080TI having 4 fewer Tensor cores. And since there are big differences in VRAM sizes (11 GiB for the 2080TI, 24 GiB for the Titan & 6000, & 48 GiB for the 8000), past a certain point VRAM appears to have zero effect on ray-tracing performance.

If you'd read the article, you would know that:
1. The Titan RTX states 11 GigaRays, not 10.
2. Tensor cores do not calculate raytracing, that's RT cores
3. VRAM quantity has nothing to do with raytracing speed, and neither does regular memory quantity in any application ever. It may bottleneck the I/O, but the calculations are done by the processors, not the memory, and memory SPEED is not the same as QUANTITY.

Please read up on the topic next time.
 
They CLAIM it's for content creators, researchers, etc... yet they decide to release it just in time for the holiday season.... hmmm....
".

Or budgets need to be used up before the year ends. It's not for gamers. If one GPU is faster and costs less than adding more CPU cores for FEA or CFD, it has already paid for itself.
 
"5 million content creators using PCs that could greatly benefit from real-time 8K video editing" - I refuse to believe that.
Anyway, the November Steam survey shows the top 3 GPUs to be sub-$250 cards; the $450 GTX 1070 is 4th and the $550 GTX 1080 is 8th. RTX is nowhere to be seen.
People will turn away from PCs to consoles rather than spend $1,200 on a "mid-tier 2080 Ti", which nVidia is trying to convince us is the new standard.
I still own a GTX 970 and am waiting for a $350 card with double the performance. The first victims of this lack of progress will be the game devs that refuse to optimise their games for cards in the $150-250 price range.
 