Meet the Pascal family: Nvidia reportedly working on GP102-based GTX Titan and GTX 1080 Ti

The option you're describing is a GTX 1080, so why would Nvidia bother releasing a lower-end GTX 1080 Ti if they already have a card more or less in that segment? I think they're doing a good job catering to many demographics across the price range already. When building a $2000+ PC, a difference of $50 is nothing. I quoted many PCs in my days working at a local PC shop, and generally, if people had the option of saving a little or getting better performance, they went with the better performance. In the $1000 range, fine, $50 could make or break the deal for some, but it's still not a considerable enough margin that people would be unwilling to dish out for the better product.
OEM cards, not reference; Nvidia doesn't have to release anything. And what I'm talking about is sacrificing 5% GPU performance and gaining much, much more somewhere else. This is not about saving money, but about allocating the money they have into getting a better overall PC (like upgrading from an i5 to an i7).
Not everybody can save more money to get something better, and even fewer have the time to waste on such things.
Your idea also ignores the people who buy the parts separately over a longer period of time. I know people who took half a year to finish their PC, but they managed to build something really good.

If the buyer can put $2k down on the table and has spare money, then yes, he should definitely get the best he can and not worry about small percentages. I would do the same.
 
People have to be crazier than s*** to buy anything for dead-end PC gaming. I kicked PC to the curb a year and a half ago for PS4 and haven't looked back. I might even get the new PS4.5 for 4K gaming. Yes, 4K on PS4. Wake up before you are all dead broke!

The #1 reason I left PC was that there were no gamers left on PC because of the lack of competition: one guy running a high-end card at 100+ FPS and the others running a $100 card at 15 FPS. When I gamed I always had the best card, or the second best, and SLAUGHTERED everyone. On PS4 everyone plays on the same page in FPS; no one has a hardware advantage.

Dumping PC for PS4 was the best thing I've ever done for my gaming.
Some things are indeed much better about gaming on consoles, but value is certainly not one of them. I paid about $250 on PC for games that would normally cost $3,000-5,000 on consoles (and let's not mention the f2p games, of which there are very few on consoles).
As for gaming at 15 FPS... nobody does that. My low-end laptop can play both CS:GO and BF3 at around 60 FPS; you don't need to set everything to high. There is no need for anyone to wake up.
 
They've more than doubled the amount of RAM the stock GTX 960 has, from 2 GB to 6 GB, but haven't even doubled the memory bandwidth or bus width. Whether this is a bottleneck or not will depend on whether the rest of the GPU can keep up and whether that memory compression is any good.
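For reference, the raw numbers behind that comparison can be sketched as follows (assuming the GTX 960's 128-bit, 7 Gbps GDDR5 and the rumored 1060 configuration of 192-bit, 8 Gbps GDDR5):

```python
def mem_bandwidth_GBps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth: bus width (bits) / 8 x per-pin data rate (Gbps)."""
    return bus_width_bits / 8 * data_rate_gbps

gtx960 = mem_bandwidth_GBps(128, 7)    # 112 GB/s
gtx1060 = mem_bandwidth_GBps(192, 8)   # 192 GB/s
print(gtx1060 / gtx960)                # ~1.71x: more bandwidth, but not double
```

So the VRAM tripled while bandwidth grew only about 1.7x, which is why the compression efficiency matters.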

One thing is for certain: having 3 cut-down versions of the same die means Nvidia has a lot of imperfections in their yields; if they didn't, they wouldn't be able to produce so many cut-down cards. The flip side is that Nvidia will only be able to get a small number of cards per wafer. The 1060's die is significantly larger than that of the GTX 960 (227 mm²).
Unfortunate, but at least they can do something with a lot of the wafers with imperfections. No point waiting for the process to get better before release if you have a product with great performance and customers willing to pay the extra $$$.
 
People have to be crazier than s*** to buy anything for dead-end PC gaming. I kicked PC to the curb a year and a half ago for PS4 and haven't looked back. I might even get the new PS4.5 for 4K gaming. Yes, 4K on PS4. Wake up before you are all dead broke!

The #1 reason I left PC was that there were no gamers left on PC because of the lack of competition: one guy running a high-end card at 100+ FPS and the others running a $100 card at 15 FPS. When I gamed I always had the best card, or the second best, and SLAUGHTERED everyone. On PS4 everyone plays on the same page in FPS; no one has a hardware advantage.

Dumping PC for PS4 was the best thing I've ever done for my gaming.
Um, ok. Well, you've got a sample size of 3 for your study, by the sounds of it. That must be representative of the whole market.

Some people like the convenience of console. Some prefer PC gaming.
 
LOL. How do you suppose you get 8GB with a 384-bit interface? Smart people aren't going to be looking for 8GB as an option.

384-bit means six 64-bit dual-channel IMCs, each with two 8 Gbit memory chips per controller. That means the options are 12 GB (or 24 GB in clamshell mode).
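The capacity arithmetic works out like this (a sketch assuming standard 32-bit-wide, 8 Gbit GDDR5/GDDR5X chips, one per 32-bit channel):

```python
def vram_options_GB(bus_width_bits, chip_width_bits=32, chip_density_gbit=8):
    """Capacities reachable on a given bus with uniform memory chips."""
    chips = bus_width_bits // chip_width_bits      # one chip per 32-bit channel
    normal = chips * chip_density_gbit / 8         # GB
    clamshell = 2 * normal                         # two chips share each channel
    return normal, clamshell

print(vram_options_GB(384))  # (12.0, 24.0) -> no clean 8 GB option on 384-bit
print(vram_options_GB(256))  # (8.0, 16.0)  -> 8 GB implies a 256-bit bus
```

So on a 384-bit bus with uniform chips, 8 GB simply isn't a reachable configuration.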

While I don't keep up with every nook and cranny of the GPU segment like I used to, I still like to believe I have a comprehensive understanding of all the computational tech used: memory controllers, bandwidth, etc.
Do you think now, in 2016, at 1600p, PCIe 2.0 could bottleneck a GTX 1080 on an X58 build?

I ask because we all know PCIe 2.0 uses the 8b/10b encoding scheme, delivering a per-lane max transfer rate of 4 Gbit/s from its 5 GT/s raw data rate. PCI Express 3.0's 8 GT/s bit rate, with its leaner 128b/130b encoding, effectively delivers 985 MB/s per lane, roughly doubling the lane bandwidth.
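Those per-lane figures follow directly from the encoding overhead; a quick check (PCIe 2.0 uses 8b/10b, PCIe 3.0 uses 128b/130b):

```python
def pcie_lane_MBps(transfer_rate_gts, payload_bits, line_bits):
    """Effective per-lane bandwidth in MB/s after line-encoding overhead."""
    return transfer_rate_gts * 1e9 * payload_bits / line_bits / 8 / 1e6

gen2 = pcie_lane_MBps(5, 8, 10)      # 500 MB/s per lane
gen3 = pcie_lane_MBps(8, 128, 130)   # ~985 MB/s per lane
print(gen2 * 16, gen3 * 16)          # x16 slot: 8000 vs ~15754 MB/s
```

An x16 gen 2 slot therefore still offers roughly 8 GB/s each way, which frames the bottleneck question below.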

However, older tests (Nov 2013) show that even at 4K there is no difference:
https://www.pugetsystems.com/labs/a...-E-Speed-on-Gaming-Performance-518/#4kResults

Do you think this will change?
Know of any newer tests with games?
I am upgrading my X58 from 6 GB 1600 MHz to 12 GB 2000 MHz Sector 7 (I know it's only a few FPS of difference, but newer games can show several FPS of difference, and I only paid $40 shipped), and I plan on pairing it with a 1080.
 
What a bullshit report. So we are supposed to believe that there is a "Titan" version that has over 1,300 more shaders, a 384-bit interface, and once again almost double the ROPs? Unless they've built TWO versions, one for the 1080 with the current die size of about 600 mm² and an even BIGGER one, then this report is bull.

I highly doubt they've actually built two versions, one with an even bigger die size; it's crazy. If there is a 1080 Ti, it's likely going to be a similar difference as between the 980 and the 980 Ti.
 
They've more than doubled the amount of RAM the stock GTX 960 has, from 2 GB to 6 GB, but haven't even doubled the memory bandwidth or bus width. Whether this is a bottleneck or not will depend on whether the rest of the GPU can keep up and whether that memory compression is any good.

One thing is for certain: having 3 cut-down versions of the same die means Nvidia has a lot of imperfections in their yields; if they didn't, they wouldn't be able to produce so many cut-down cards. The flip side is that Nvidia will only be able to get a small number of cards per wafer. The 1060's die is significantly larger than that of the GTX 960 (227 mm²).

A ton of modern games use more than 4GB of VRAM. Releasing a new card in 2016 with a 4GB option is as confusing and hamstringing as when they released 4GB cards at the end of 2014. The VRAM was the sole reason I bought a 980 Ti to replace my 970, and why my 570s in SLI stopped being useful in 2011: performance was fine, but physically running out of VRAM was not.
 
A ton of modern games use more than 4GB of VRAM. Releasing a new card in 2016 with a 4GB option is as confusing and hamstringing as when they released 4GB cards at the end of 2014. The VRAM was the sole reason I bought a 980 Ti to replace my 970, and why my 570s in SLI stopped being useful in 2011: performance was fine, but physically running out of VRAM was not.

I was more or less referring to the memory bandwidth rather than the amount of RAM. I think 6 GB is a really good spot to be in, VRAM-wise, but it seems Nvidia always bandwidth-limits its xx60 cards. Only time will tell whether it will actually affect performance or whether they hit the sweet spot this time.
 
The M10 is a monster for FP64 calculations.
Nope, far from it. The M10 is a virtualization (Nvidia GRID) product, not HPC. The M10 uses four GM107 GPUs, as found in the GTX 750 Ti. All Maxwell GPUs have a 1:32 FP64 rate. Four GTX 750 Tis are capable of 163.2 GFLOPs of FP64; that is less than one tenth of the Titan Black.
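The 163.2 GFLOPs figure falls out of standard peak-throughput math (a sketch using GM107's 640 CUDA cores at its 1.02 GHz base clock with Maxwell's 1:32 FP64 rate, and the Titan Black's 2880 cores at 889 MHz with GK110's 1:3 rate; all figures from public spec sheets):

```python
def peak_gflops(cuda_cores, clock_ghz, fp64_ratio=1.0):
    """Peak GFLOPs: cores x 2 ops/clock (FMA) x clock, scaled by the FP64 rate."""
    return cuda_cores * 2 * clock_ghz * fp64_ratio

m10_fp64 = 4 * peak_gflops(640, 1.02, 1 / 32)       # four GM107s -> 163.2 GFLOPs
titan_black_fp64 = peak_gflops(2880, 0.889, 1 / 3)  # ~1707 GFLOPs
print(m10_fp64, titan_black_fp64)
```

163.2 / ~1707 is under 10%, which is the "less than one tenth of the Titan Black" comparison.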
Nvidia sells variants like these, at least as I see it anyway, because people buy them for whatever reason, and they make a lot of money on them.
Very much so. The company has built its entire professional graphics business on creating markets and then creating products for every price tier to maximize visibility. What they've done is continue the ethos of SGI's pro graphics division (which sold Nvidia's first Quadros), which they acquired some sixteen years ago.
Nvidia is panicking with the announcement that AMD's Vega GPU was pulled from 2017 to Q4 2016. Seems dropping HBM2 was the only way Nvidia could match AMD's timeline now. It'll be interesting to see how this year ends up.
Yet these estimations of GP102 using GDDR5X predate the Vega pull-in rumour (which derives from a single post by an anonymous poster on a forum thread) by a month or more.
Nobody said anything about keeping the 384-bit interface. Slightly higher-clocked GDDR5X memory should be enough to get close to 400 GB/s.
You are dreaming if you think a ~4000 ALU GPU would use any less, and 512-bit is overkill: a waste of die space and an unnecessary addition to the power budget. There simply aren't any real-life gaming scenarios that require 640-700 GB/s of bandwidth.
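The bandwidth figures being argued over here all come from the same formula (a sketch assuming GDDR5X data rates in the 10-12.5 Gbps range; the exact clocks are illustrative):

```python
def mem_bandwidth_GBps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth: bus width (bits) / 8 x per-pin data rate (Gbps)."""
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_GBps(256, 12.5))  # 400 GB/s: ~400 GB/s on a narrower bus
print(mem_bandwidth_GBps(384, 10))    # 480 GB/s: 384-bit at GTX 1080 clocks
print(mem_bandwidth_GBps(512, 10.5))  # 672 GB/s: the "overkill" 512-bit range
```

This shows both claims in numbers: faster GDDR5X can reach ~400 GB/s without 384-bit, while 512-bit lands in the 640-700 GB/s range the reply calls unnecessary.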
The difference in performance at 4K should be only 1 or 2 FPS. (This might also drop the power-pin requirement from 6+8 to just 8, but I'm not sure, because of the larger GPU.)
Also not going to happen. Nvidia isn't going to hamstring its top gaming card with the same 225 W board power restriction as its second-tier GPU, even though the big-die GPU has ~50% more compute cores.
Do you think now, in 2016, at 1600p, PCIe 2.0 could bottleneck a GTX 1080 on an X58 build?
Short answer: No
Most high-res / HD texture / FSAA / DSR gaming relies upon internal bandwidth (GPU <-> VRAM). External bandwidth (over the PCI-E bus) is nowhere close to being saturated by gaming.
(chart: perfrel_3840_2160.png, relative performance at 3840×2160)

[Source]

I doubt even a dual-card setup of the next generation would tax PCI-E 2.0 x16.
 
People have to be crazier than s*** to buy anything for dead-end PC gaming. I kicked PC to the curb a year and a half ago for PS4 and haven't looked back. I might even get the new PS4.5 for 4K gaming. Yes, 4K on PS4. Wake up before you are all dead broke!

The #1 reason I left PC was that there were no gamers left on PC because of the lack of competition: one guy running a high-end card at 100+ FPS and the others running a $100 card at 15 FPS. When I gamed I always had the best card, or the second best, and SLAUGHTERED everyone. On PS4 everyone plays on the same page in FPS; no one has a hardware advantage.

Dumping PC for PS4 was the best thing I've ever done for my gaming.

If you think the PS4 Neo is going to play games at 4K, you're going to be sadly mistaken. They could double all current PS4 specs and still not hit 4K@30fps, much less 60 fps. It will most likely output 4K for streaming services, but that's going to be the extent of it.
 
A ton of modern games use more than 4GB, ....my 570's in SLI in 2011 stopped being useful: performance was fine, but physically running out of VRAM was not.
They made 2560MB GTX 570s.
I had a 1280 MB GTX 570; it felt quicker than my 6970 when the frame buffer was within limits.
 
OEM cards not reference, Nvidia doesn't have to release anything. And what I'm talking about is sacrificing 5% GPU performance and gaining much much more somewhere else. This is not about saving money, but about allocating the money they have into getting a better overall PC. (like upgrading from an i5 to an i7)

OEM cards are not available to enthusiast PC builders, so it's a moot point at best. If the likes of Dell or HP offer them as an option, it's because they're charging you more for assembly or by tacking on a warranty. Build it yourself, save that money, and use the warranty provided by the individual manufacturers; it's a much better way to go about building a high-end PC. Also, the benefits of going from an i5 to an i7 are almost nonexistent; a better GPU is generally the better option. We've all seen what a Pentium G3258 can do with a mild overclock, and again the i5 matches the i7; check out the latest Doom benchmark to see what I'm talking about.

Not everybody can save more money to get something better and even less have the time to waste on such things.
Your idea also ignores the people that buy the parts separately over a longer period of time. I know people that took half a year to finish their PC, but they managed to build something really good.

I ignore that idea because it's the single worst way to build a PC. You're better off saving your money instead of accumulating PC parts that will drop in price well before you have all the components ready for the final build. Heck, even better parts will be released within a 6-month time frame, making the parts you already have obsolete. Time and time again I hear people bring this up, and it's just plain wrong: learn to save money and you'll be able to spend that extra "$50" on a better part when the time comes.

If the buyer can put $2k down on the table and has spare money, then yes, he should definitely get the best he can and not worry about small percentages. I would do the same.

Then what are you debating about, exactly? Are you for the meaningless $50 savings or the 6+ month part-accumulation build strategy? Remember, this is about a GPU that will be in the ballpark of $800 when it eventually gets released; I've built perfectly capable whole gaming systems for that price. It's an enthusiast part and it comes with a matching price tag; there's no reason for Nvidia to cut it down.
 
People have to be crazier than s*** to buy anything for dead-end PC gaming. I kicked PC to the curb a year and a half ago for PS4 and haven't looked back. I might even get the new PS4.5 for 4K gaming. Yes, 4K on PS4. Wake up before you are all dead broke!

The insanity in this post is off the charts. Either you have no concept of what your PC was capable of, or you're just straight-up bat **** crazy with a really strange taste in games that's incredibly obscure on PC. But then, you believe the PS4.5 will be a capable 4K gaming machine, so bat **** crazy seems more appropriate.

The #1 reason I left PC was that there were no gamers left on PC because of the lack of competition: one guy running a high-end card at 100+ FPS and the others running a $100 card at 15 FPS. When I gamed I always had the best card, or the second best, and SLAUGHTERED everyone. On PS4 everyone plays on the same page in FPS; no one has a hardware advantage.

This smells heavily of BS. PC gaming is actually on a comeback, if you haven't noticed, and is likely to only get stronger now that the consoles can do cross-play; check out the latest Rocket League post, which is all about cross-play, and Microsoft is going to be doing that more and more with their Games for Windows marketplace too. I could list the competitive games available on PC, but it's a pointless endeavor; just look up DreamHack if you think PC gaming is not a competitive platform.

Next, you seem to believe hardware is what makes or breaks a PC player's ability. This is simply not true: skill and FPS are not correlated. It may take more skill to be a good player on PC, but hardware doesn't play much of a factor. Playing at 15 FPS might hamper one's ability, but this is why games have settings; people who can't figure that out are just SOL and probably couldn't care less about your high-end system and your ultra-pwnzor abilities.

Dumping PC for PS4 was the best thing I've ever done for my gaming.

It really sounds like quite the opposite to me. You start off by bragging about kicking your PC to the curb, but then follow up about your PC prowess and how you "SLAUGHTERED" everyone; it's easy to brag now that you're no longer playing on PC. I'm sure you're enjoying the longer loading times, lower-quality graphics, and numb controls on your PS4, but really it sounds more like you're regretting your decision than anything else.
 
I ignore that idea because it's the single worst way to build a PC. You're better off saving your money instead of accumulating PC parts that will drop in price well before you have all the components ready for the final build. Heck, even better parts will be released within a 6-month time frame, making the parts you already have obsolete. Time and time again I hear people bring this up, and it's just plain wrong: learn to save money and you'll be able to spend that extra "$50" on a better part when the time comes.
Another problem with this build method is that by the time you finish the PC, the warranty on some of the parts is already running out. If any part fails or develops problems a few months after the actual build, there might not be any warranty left. Granted, most of the time nothing will go wrong, but it's a needless risk.
 
People have to be crazier than s*** to buy anything for dead-end PC gaming. I kicked PC to the curb a year and a half ago for PS4 and haven't looked back. I might even get the new PS4.5 for 4K gaming. Yes, 4K on PS4. Wake up before you are all dead broke!

The #1 reason I left PC was that there were no gamers left on PC because of the lack of competition: one guy running a high-end card at 100+ FPS and the others running a $100 card at 15 FPS. When I gamed I always had the best card, or the second best, and SLAUGHTERED everyone. On PS4 everyone plays on the same page in FPS; no one has a hardware advantage.

Dumping PC for PS4 was the best thing I've ever done for my gaming.
Too many people here bit the troll bait.

He said there were no gamers left on PC; that's literal rubbish spewing out.
You'll find he was a troll in games and has finally been banned from most games he played on PC, but the poor PlayStation crowd now has to put up with him.
 
This is most likely true. I remember when a similar story surfaced about the Titan X. The only point that was off was the price: it retailed for $999, not $1,350. We'll know more by the fall (or the end of the year).
 
What would be the reason for making GP102s without HBM2? Did HBM2 production have problems and fail to catch up with the chips' mass fabrication? Or maybe Nvidia wasn't impressed by the performance (or cost, or power efficiency) HBM2 provided vs G5X?
Some of the comments here evince such infantile stupidity it's amazing, and so pompously proclaimed yet still 100% wrong.
"Without HBM2 no one will buy it, duh."
I think Nvidia is well aware of the marketplace and the performance targets they need to hit to have a successful product. They've been doing very well for years in a challenging environment. What tech they ultimately use is irrelevant as long as the performance is there, and I am of course including power consumption, heat, noise, and reliability in that.
 
Nvidia is panicking with the announcement that AMD's Vega GPU was pulled from 2017 to Q4 2016. Seems dropping HBM2 was the only way Nvidia could match AMD's timeline now. It'll be interesting to see how this year ends up.
There was no announcement. There was a rumor on a blog.
 
No HBM, even on the Titan class?

Well, that's disappointing.

Hopefully this early leak of the Ti/Titan means they're coming much sooner than previous releases; I'd rather wait for the big one than cave in and get a regular 1080, but not if it's over half a year out like before.
 
What a bullshit report. So we are supposed to believe that there is a "Titan" version that has over 1,300 more shaders, a 384-bit interface, and once again almost double the ROPs? Unless they've built TWO versions, one for the 1080 with the current die size of about 600 mm² and an even BIGGER one, then this report is bull.

I highly doubt they've actually built two versions, one with an even bigger die size; it's crazy. If there is a 1080 Ti, it's likely going to be a similar difference as between the 980 and the 980 Ti.

Er, right, numpty: the 980 vs the 980 Ti is a massive difference.
 
LOL. How do you suppose you get 8GB with a 384-bit interface? Smart people aren't going to be looking for 8GB as an option.

384-bit means six 64-bit dual-channel IMCs, each with two 8 Gbit memory chips per controller. That means the options are 12 GB (or 24 GB in clamshell mode).
Nobody said anything about keeping the 384-bit interface. Slightly higher-clocked GDDR5X memory should be enough to get close to 400 GB/s. The difference in performance at 4K should be only 1 or 2 FPS. (This might also drop the power-pin requirement from 6+8 to just 8, but I'm not sure, because of the larger GPU.)

Er, no. How do you even come up with this nonsense?

4K@60 is going to require ~1.5x a GTX 1080, including bandwidth.
 
A ton of modern games use more than 4GB of VRAM. Releasing a new card in 2016 with a 4GB option is as confusing and hamstringing as when they released 4GB cards at the end of 2014. The VRAM was the sole reason I bought a 980 Ti to replace my 970, and why my 570s in SLI stopped being useful in 2011: performance was fine, but physically running out of VRAM was not.

I was more or less referring to the memory bandwidth rather than the amount of RAM. I think 6 GB is a really good spot to be in, VRAM-wise, but it seems Nvidia always bandwidth-limits its xx60 cards. Only time will tell whether it will actually affect performance or whether they hit the sweet spot this time.

Er, no.

Just because you do doesn't mean everyone else does! Consoles already have 8 GB, so cards need that as a minimum.
 
And nobody was surprised.

And how often does a GPU come with LESS memory? More is common, but less?

And besides, with a GPU as powerful as the 1080 Ti, "cheap" has no place here. You don't cheap out on a halo product. Considering the settings dual 1080 Tis could push, you'd need that much VRAM.
Like it or not, min-maxing a PC build will always involve a budget. Not everybody has the money to buy the best components; even $50 can help a lot when building a PC.

If you can't afford a decent card, go buy a console.
 