"I was gonna do the same."
mY 1080Ti CaN
cAn iT rUn cRySIs?
"You really think that nVidia cares who is actually buying the cards? That would be a first. Remember when cryptominers were buying up all of the GPUs (both AMD & nVidia), & nVidia made sure they specifically produced some GPU models that reduced or eliminated their usefulness for cryptomining so that gamers could buy cards from nVidia? Oh, wait, that's right, they didn't do that. As long as they get their cash, nVidia doesn't care who buys it or whether it's "affordable" for the customers."
I don't think I've ever seen someone this mad before...
What I find strange about the specs is that the Titan RTX appears to be identical to the Quadro RTX 6000 (just a slight dip in TDP, & it loses the 4th DisplayPort & VirtualLink in favor of an HDMI port & USB Type-C)...but the Titan RTX lists "510 GT/s" (gigatexels per second) as its performance figure, while the 6000 lists "10 GigaRays/sec". Not sure why they wouldn't use the same unit for that performance if they're meant to compare to each other. But... based on those figures, & the figures we have for the GeForce cards, I came to the following conclusions:
- In terms of equivalency, you get the following:
- GeForce RTX 2070 ~~ Quadro RTX 4000
- GeForce RTX 2080 ~~ Quadro RTX 5000 (latter has 48 Tensor cores vs. 46 for the former, but otherwise seem identical)
- GeForce Titan RTX ~~ Quadro RTX 6000 (RTX 2080TI only has 68 Tensors vs. the 72 these cards have, but is otherwise nearly identical in specs)
- Quadro RTX 8000 technically is just an RTX 6000 with double the VRAM (i.e. already able to act like two 6000s linked via NVLink)
- Currently, the maximum performance for Ray Tracing seems to be 10 GigaRays/sec. All 4 of the top cards -- RTX 2080TI, Titan RTX, Quadro RTX 6000, & Quadro RTX 8000 -- have identical performance (10 GR/s), despite the 2080TI having 4 fewer Tensor cores. And since there are big differences in VRAM sizes (11 GiB for the 2080TI, 24 GiB for the Titan & 6000, & 48 GiB for the 8000), past a certain point VRAM appears to have zero effect on ray-tracing performance.
- If the Titan RTX is truly geared towards "content creators" vs. gamers, then the assumption is that it's the "budget" card. But it would make little sense for them to have a $6,400 GPU (Quadro RTX 6000), then say they're going to offer a $2,500 GPU with identical performance...especially since the Titan cards of previous generations were just Quadro cards geared towards non-content creators (i.e. gamers that wanted bragging rights). And no, this isn't a "well, they've always done it this way" opinion, this is a "it makes no sense from a business perspective to offer your customers Product X at a given price, then 3 months later advertise Product Y with the same performance for 1/3 the price". That tends to make the customers that bought Product X feel like they got the short end of the stick, & think three times about buying from that manufacturer in the future.
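A quick sanity check of the figures in the comparison above. The prices and GigaRay rates are the ones quoted in this thread, and the texture-unit count and boost clock are the published full-TU102 figures, so treat all of them as assumptions rather than verified numbers:

```python
# Sketch: sanity-check the figures discussed above. All inputs are either
# quoted in this thread or assumed from public spec sheets -- not verified here.

# 1. Where "510 GT/s" plausibly comes from: texel rate = TMUs * boost clock.
#    Assumes the full-TU102 configuration (288 texture units, 1770 MHz boost).
texel_rate_gt_s = 288 * 1.770
print(round(texel_rate_gt_s, 2))  # 509.76 -> rounds to the quoted 510 GT/s

# 2. VRAM spans 11-48 GiB across the four top cards, yet the quoted
#    ray-tracing figure is flat at 10 GigaRays/s for all of them.
vram_gib = {"RTX 2080 Ti": 11, "Titan RTX": 24,
            "Quadro RTX 6000": 24, "Quadro RTX 8000": 48}
gigarays = {name: 10 for name in vram_gib}
assert len(set(gigarays.values())) == 1  # identical rate despite ~4x VRAM spread

# 3. Titan RTX vs. Quadro RTX 6000 pricing as quoted above:
ratio = 2500 / 6400
print(f"Titan RTX at {ratio:.0%} of a Quadro RTX 6000")  # 39%, a bit over 1/3
```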
"bUt cAn iT rUN miNEcRaFt?"
oNlY cHrIstMaS sOliTaiRe
"...world's most powerful desktop GPU"
- for now.
Metro Last Light, that grinds my computer into the ground; but that may be overtaken in a few months' time with the release of Metro Exodus.
Yeah, I was going to ask the over-cliched "Yeah, but can it run Crysis?" too... but I'm cliched myself...
Too bad there are no new games to take that infamous title from Crysis.
"Literally no one needs 8K. Not even movie studios."
Maybe the Japanese market, as they're trying to push out 8K broadcasts for the 2020 Olympics, so that market might be keen to find hardware to create content for?
Literally no one needs 8K. Not even movie studios.
If you'd read the article, you would know that:
1. The Titan RTX states 11 GigaRays, not 10.
2. Tensor cores do not calculate raytracing; that's the RT cores.
3. VRAM quantity has nothing to do with raytracing speed (nor does regular memory quantity, in any application). It may bottleneck the I/O, but the calculations are done by the processors, not the memory - and memory SPEED is not the same as QUANTITY.
Please read up on the topic next time.
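To illustrate the speed-vs-quantity point above: peak memory bandwidth falls out of bus width and per-pin data rate, and capacity never appears in the formula. The bus widths and 14 Gbps GDDR6 data rate below are the public figures for these two cards; treat them as assumptions:

```python
# Sketch of the point above: memory *bandwidth* is set by bus width and data
# rate; *capacity* never enters the calculation.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# RTX 2080 Ti: 352-bit bus, 14 Gbps GDDR6, 11 GiB of VRAM
# Titan RTX:   384-bit bus, 14 Gbps GDDR6, 24 GiB of VRAM
print(bandwidth_gb_s(352, 14))  # 616.0 GB/s
print(bandwidth_gb_s(384, 14))  # 672.0 GB/s
# Note the 11 GiB vs. 24 GiB capacities appear nowhere in the math.
```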
"Literally no one needs 8K. Not even movie studios."
"Maybe the Japanese market, as they're trying to push out 8K broadcasts for the 2020 Olympics, so that market might be keen to find hardware to create content for?"
can humans even see 8K?
So previously you needed to spend about the same money on a slower Quadro card to get a 24 GB frame buffer, which makes this reasonable value if that's what you were looking for.
However I have no doubt that tech forums everywhere will be full of negative comments whining that they can’t afford it.
Reminder: just because you can't afford it doesn't mean others can't, or that it won't sell.
Right now is a very good time to be buying a GPU with tumbling prices on the 10 series, so what if it’s not the latest and greatest if value is what you’re after?
"5 million content creators using PCs that could greatly benefit from real-time 8K video editing" - I refuse to believe that.
Anyway, the November Steam survey shows the top 3 GPUs are all sub-$250 cards; the $450 GTX 1070 is 4th, the $550 GTX 1080 is 8th. RTX cards are nowhere to be seen.
I guess everyone predicting that the RTX 2080 Ti was replacing the Titan was wrong. Whatever it takes to justify insane pricing, and now this card drops. Enjoy the monopoly; Nvidia just built a hotel!
"Clearly you haven't seen the prices of the 1080ti lately. It's actually cheaper to buy a 2080 right now."
I have seen them. And I have also seen the 2080 selling for less than the 1080ti MSRP.
"They CLAIM it's for content creators, researchers, etc... yet they decide to release it just in time for the holiday season.... hmmm...."
Most companies' year-end is June, not January... sorry for the necro, but I only noticed your reply now, after the Radeon VII review.
Or budgets need to be used up before the year end. It's not for gamers. If one GPU is faster and costs less than more CPU cores for FEA or CFD, it has already paid for itself.
The amount of VRAM is impressive; my PC only has 32GB of RAM, lol. Maybe they could bring out a gaming RTX Titan with 12GB of VRAM for $1,300 and drop the prices of the rest of the stack.