Upcoming GeForce GTX 1070 is faster than Titan X for $379

They are enthusiasts, and they do their research more than anyone.
TL;DR they knew Pascal was coming and are always looking for an excuse to upgrade.
I did my research when I decided to purchase a 780.

The Ti came out a few months later, with no warning.

Much more powerful than the 780, released for the same price. I was burned so hard. It was the first ever x80 class GPU I purchased; my first real enthusiast piece of hardware. And it took months of saving.

I'll stay salty over it my entire life.

LOL! That's terrible man! I feel for you, but it's a reality we must all live with when buying a graphics card.
It happens.
 
Next time get an EVGA, they offer a step-up for a while after you buy a new card you just pay the difference for the next one.
 
The link you provided shows the Titan X merely holding its own against GPUs $400 cheaper. Its compute performance isn't that much better than a 980 Ti's, and it doesn't have as strong FP processing as previous Titans have had.
You still don't get it.
1. The Titan X's GPGPU selling point is its 12GB memory capacity. GPGPU workloads include visualization and rendering tasks that require large onboard memory - basically any rendering at 4K and up, for example, and large parallelized data-set workloads. Name a single other card that offers at least 12GB of vRAM and retails for $1K.
2. When the Titan X launched there were no cards $400 cheaper offering the same performance. Everyone knew they would come, but the months when it ruled the roost were obviously worth it to a large percentage of buyers. If that were not the case, why would enthusiasts continue to do the same thing generation after generation?

You know why people don't put 8-16 980 Tis in a server rack for GPGPU on those same workloads? Because the vRAM cannot hold the data set, so you would have 8-16 cards pushing huge quantities of overflow data across the PCI-E bus, relying on system memory, segmenting the individual workloads and constantly shuttling them back and forth across the bus. It doesn't work. It introduces stalls, latency, broken animation and TDR issues.
I could post PDFs on the subject, but I suspect you'd just ignore them, just as you seem to be ignoring the fact that vRAM capacity is in an arms race for exactly the reasons I've outlined. It is why the M6000 is now available as a 24GB card as well as a 12GB card (to be able to undertake 5K-8K rendering), and why AMD increased the W9100 to 32GB.
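The bandwidth gap behind this argument is easy to quantify. A rough sketch using published spec-sheet figures (the numbers below are approximations taken from those sheets, not measurements):

```python
# Approximate peak bandwidths (spec-sheet figures, treated as assumptions)
PCIE3_X16_GBPS = 15.75        # PCIe 3.0 x16: ~985 MB/s per lane x 16 lanes
GTX_980TI_VRAM_GBPS = 336.5   # 384-bit bus at 7 Gbps effective GDDR5

# Every byte of a data set that overflows the 6GB of vRAM must cross the
# PCIe bus instead, at roughly 1/21 of the on-card memory bandwidth.
slowdown = GTX_980TI_VRAM_GBPS / PCIE3_X16_GBPS
print(round(slowdown, 1))  # 21.4
```

In other words, spilling a working set over the bus costs you about 21x the bandwidth of keeping it resident in vRAM, which is why the capacity arms race matters more than raw shader throughput for these workloads.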
Good for them, but looking purely at the facts, the Titan X doesn't provide a good performance/cost ratio except for a very, very small segment of the already small ultra-enthusiast class.
Newsflash! No $1,000 card has ever provided a good performance/cost ratio, but some people couldn't give a damn about perf/$ - these are the buyers. Some people need the vRAM capacity for GPGPU workloads - these are the buyers.

What I find laughable is that you are trying to prove that no market exists for these cards despite the fact that they have sold well, are popular with enthusiasts and benchmarkers, and are prevalent in GPGPU systems. According to you, everyone who doesn't share your view lacks common sense, despite the fact that you obviously know very little about the mindset of the actual users or the usage the cards are put to. You devote thousands of words to this subject, yet when a prime example of a card from a rival vendor appears - one with an even worse perf/$ ratio that will sell in minuscule numbers - you raise not a single word of criticism. Now why would that be? At first I would have thought it might be because its usage characteristics make it suitable for niche-market workloads which mitigate its pricing... but that is clearly wrong, because you believe no such argument is valid.
 
Why do you say it's faster than Titan X? Even Nvidia says it's "as powerful as Titan X" (https://twitter.com/nvidia/status/728773098661543936), not faster, and in raw TFLOPS it is equal or slower. It's supposed to be faster in VR, not in every use case.

Of course it might end up faster than the Titan X (especially after OC), but for now there is no evidence of that. Writing such a statement is just bending the truth - not something I would expect from TechSpot.

At their event, Nvidia's CEO specifically said it was faster than a Titan X. And looking at the TFLOPS data (all we have to go on right now), they are correct (6.5 TFLOPS versus 6.1 TFLOPS).
 
You can make different claims about the TFLOPS figure of the Titan X - it's either ~6.1 or ~6.5 TFLOPS, depending on whether you count using the base or the boost clock. The theoretical 9 TFLOPS of the GTX 1080 can only be achieved at its boost clock, so it's probably the same story with the GTX 1070. Looking at the TFLOPS data, it's still "we don't know for sure." It'll probably pass the Titan X after OC, but for now even Nvidia doesn't market it as faster.

But on the other hand, the CEO really did say it's faster. So you're right - they did announce it like that, and it wasn't wrong to write about it. Now we have to wait for some real-world data.
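For reference, the theoretical figures being compared here can be reproduced as shader count x clock x 2 FLOPs per cycle (one fused multiply-add). A quick sketch using spec-sheet core counts and clocks (those numbers are assumptions from public spec sheets, not measurements):

```python
def peak_tflops(cuda_cores, clock_mhz):
    """Theoretical single-precision peak: 2 FLOPs (FMA) per core per cycle."""
    return cuda_cores * clock_mhz * 2 / 1e6

# Titan X (Maxwell): 3072 cores at 1000 MHz base / 1075 MHz boost
print(round(peak_tflops(3072, 1000), 1))  # 6.1 TFLOPS at base clock
print(round(peak_tflops(3072, 1075), 1))  # 6.6 TFLOPS at boost clock

# GTX 1080: 2560 cores at 1733 MHz boost -> the ~9 TFLOPS headline number
print(round(peak_tflops(2560, 1733), 1))  # 8.9 TFLOPS
```

This is why the same card can honestly be quoted at either figure: it depends entirely on which clock you plug in, and real-world throughput depends on how long the boost clock is actually sustained.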
 
Holy cow, $350 for faster than a Titan X?!

Must feel pretty crappy for those guys who bought Titan X cards just a few months ago to hear they can get more for a third of the price now.

Not really. No one buys a Titan on a whim. They are enthusiasts, and they do their research more than anyone.
TL;DR they knew Pascal was coming and are always looking for an excuse to upgrade.

If they do their research more than anyone, they wouldn't buy a Titan X in the first place, especially when an overclocked GTX 980 Ti costs 50% less and is around 30% faster.
Essentially what I was thinking. There's an "if it is more expensive than everything else, it must be the absolute best" market out there, and nVidia knows this all too well if you ask me. As an example, I have a friend that I consider highly technical and does a lot of research when he buys things, however, he seems stuck in a "if it costs a lot of money, it must be either really good or the best" loop.

Personally, I've been thinking of getting two Titan Z cards to run in SLI BUT not until Pascal comes out and the people who bought them start dumping them on ebay for $300. However, depending on benchmarks, I might just go with two 1070's in SLI. BTW - I do a lot of GPGPU.
 