Rumor: Nvidia GeForce GTX 880 specs revealed

Scorpus


The latest rumor from Tyden.cz has allegedly detailed the Nvidia GeForce GTX 880, which is set to be the company's flagship Maxwell-based graphics card when it launches later this year. Of course, it's always worth taking rumors with a grain of salt, but the following specifications could well be accurate.

The GTX 880 will reportedly utilize a GM204 GPU based on Nvidia's 'Maxwell' architecture that we first saw in the GTX 750 Ti, but as the naming scheme suggests, it won't be a fully enabled part. Manufactured on a 20nm process, we should be expecting 7.9 billion transistors with more graphics processing clusters (GPCs) than the 750 Ti's GM107 GPU.

Rumored specifications include a huge 3,200 CUDA cores, plus 200 TMUs and 32 ROPs, which fits with Maxwell's use of relatively fewer TMUs and ROPs compared to 'Kepler' cards. The core will reportedly run at a clock speed of 900 MHz, boosting to 950 MHz where necessary, delivering around 5.7 TFLOPs of single-precision compute power.
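For reference, the single-precision figure follows from cores × 2 FLOPs per clock (one fused multiply-add) × clock speed. A quick sketch using the rumored numbers:

```python
# Rough single-precision throughput estimate for the rumored GTX 880 specs.
# Each CUDA core performs up to 2 FLOPs per clock (one fused multiply-add).
cuda_cores = 3200
base_clock_ghz = 0.900  # 900 MHz base clock

tflops = cuda_cores * 2 * base_clock_ghz / 1000
print(f"{tflops:.2f} TFLOPs")  # 5.76 TFLOPs at base clock, matching the ~5.7 figure
```

At the 950 MHz boost clock the same arithmetic gives about 6.1 TFLOPs, so the quoted 5.7 TFLOPs appears to be a base-clock number.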

As for memory, we can reportedly look forward to 4 GB of GDDR5 at 7400 MHz on a 256-bit bus for 238 GB/s of bandwidth. This is less memory bandwidth than the GeForce GTX 780, but differences in the Maxwell architecture should still mean the GTX 880 is faster, assuming the rumored information is correct.
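The bandwidth figure follows directly from the effective data rate and bus width (divide by 8 to convert bits to bytes):

```python
# Peak memory bandwidth = effective data rate (MHz) x bus width (bits) / 8 bits per byte.
effective_mhz = 7400    # 7.4 GHz effective GDDR5
bus_width_bits = 256

bandwidth_gbs = effective_mhz * bus_width_bits / 8 / 1000
print(f"{bandwidth_gbs:.1f} GB/s")  # 236.8 GB/s, close to the quoted ~238 GB/s
```

By the same formula, the GTX 780's 6 GHz GDDR5 on a 384-bit bus works out to 288 GB/s, which is why the rumored GTX 880 comes in lower on raw bandwidth.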

Maxwell's inherent power efficiency sees the TDP for this flagship card sit at 230W, the same as the GTX 770 and less than the GTX 780, despite the card allegedly being faster. Only time will tell if these specifications are, in fact, true, with the card set to launch mid-year.


 
These specs seem odd in general, because the Maxwell architecture is supposed to be much more efficient (and more powerful core for core) than past generations. The 750 Ti has fewer CUDA cores than its 650 Ti counterpart while delivering more performance, so cramming that many cores onto the flagship card seems odd: it could essentially blow the 780 Ti and Titan away with ease, given how much better the 750 Ti's compute was than its counterparts.

Plus, a 256-bit bus is pretty small and a step back from the 780 Ti... so I'm getting curious about this.
 
Sweet! The trigger for my new rebuild is these 800 series cards. Seems like I've been waiting forever for them.
 
I have a 7 series GPU so I'm sure I can get away with skipping the next two or three generations. On average I replace my GFX card every four years.
 
I'm using nVidia GTX 780, and my next video card will have to satisfy the following requirements:

1. Use HDMI 2.0 and DisplayPort 1.3 for outputs;
2. Perform superbly in 4K
3. Support DirectX 12 or later
4. Be at least as ergonomic as my current GTX 780 (size + power consumption).
5. Use GDDR6 (expected in 2014)

Until a product is released that satisfies all that, I won't be upgrading. I think it will take a bit longer than 1 year for such products to appear, which is fine by me :)
 
The specs seem off to me as well. The core count would be right if Nvidia is truly aiming for 4K gaming; a Maxwell with 3,200 CUDA cores would handle that. 4GB VRAM sounds right, although I bet the Titan version will have 8GB, which helps them keep their market segments segregated. 256-bit @ 7.4 GHz seems stupid. 5 GHz GDDR5 @ 512-bit (like AMD is using) is smarter in so many ways: cost is equivalent, and you get more bandwidth. Jumping to 6 GHz GDDR5 @ 512-bit would make perfect sense, as it's faster than AMD's solution for only a small price increase (6 GHz GDDR5 is much cheaper and more common than 7 GHz GDDR5, and uses less power). The only reason to use a 256-bit bus would be to put the 512-bit bus on the Titan version as another incentive to shell out a few hundred dollars more. Bad move in my opinion.
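The trade-off in the comment above can be checked with the same bandwidth arithmetic; the configurations below are hypotheticals from the discussion, not confirmed specs:

```python
def bandwidth_gbs(effective_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s from effective data rate and bus width."""
    return effective_mhz * bus_width_bits / 8 / 1000

# Hypothetical configurations discussed in the thread.
configs = {
    "256-bit @ 7.4 GHz (rumored GTX 880)":  (7400, 256),
    "512-bit @ 5.0 GHz (AMD-style)":        (5000, 512),
    "512-bit @ 6.0 GHz (suggested above)":  (6000, 512),
}
for name, (mhz, bits) in configs.items():
    print(f"{name}: {bandwidth_gbs(mhz, bits):.0f} GB/s")
# 256-bit @ 7.4 GHz -> 237 GB/s
# 512-bit @ 5.0 GHz -> 320 GB/s
# 512-bit @ 6.0 GHz -> 384 GB/s
```

So a wider bus at lower clocks does indeed win on raw bandwidth, which is the crux of the commenter's argument; whether the cost and power figures work out as claimed is another matter.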
 
I updated my list above to include GDDR6, which has been in the works for a while; the first products using it are expected this year, so I would definitely wait.
 
er·go·nom·ics(ûr′gə-nŏm′ĭks)
n.
1. (used with a sing. verb) The applied science of equipment design, as for the workplace, intended to maximize productivity by reducing operator fatigue and discomfort. Also called biotechnology, human engineering, human factors engineering.
2. (used with a pl. verb) Design factors, as for the workplace, intended to maximize productivity by minimizing operator fatigue and discomfort: The ergonomics of the new office were felt to be optimal.
 
er·go·nom·ics(ûr′gə-nŏm′ĭks)
n.
1. (used with a sing. verb)

Ergonomic refers to the efficiency of using an object, with emphasis on maximizing that efficiency and minimizing the discomfort of using the object. With reference to a video card, that implies keeping the same size and power consumption.

P.S. Don't do that again, please...
 
I just got my 780s two months ago. No need to upgrade again (besides, my wife would kill me if I did).
 
The specs seem off to me as well. The core count would be right if Nvidia is truly aiming for 4K gaming
If the GPU was aimed at 4K gaming, then the back end would have been beefed up. 32 ROPs servicing 3,200 cores? No thanks.

Just bear in mind that it is silly season. People realize that the current process node is just playing out the string, with new models (295X2, Titan Z) on aging architectures brute forcing performance at the expense of common sense. People also know that a new process is inbound, and are starting to get impatient, so it becomes a perfect environment for other people to start putting their guesses and estimates up on sites for an instant page hit deluge.

How likely is it that a card that is in all probability still in the design stage has been given a retail price? That's aside from the fact that TSMC's CLN20SOC (20nm) process isn't by all accounts geared for high-power ICs, so the GM204 will need to wait for the 16nm FinFET process to arrive (a 20nm BEOL + 16nm FEOL hybrid node), which isn't slated to be ready for primetime until very late 2014/early 2015 at best.

BTW: That die represents an 11% increase in core count over the GK110, plus a sizeable reduction in uncore (memory I/O, controllers, probable cache and SFU reduction) on a ~30% smaller node (uncore excepted), yet it is slated to cost the same as the 780 Ti*. New wafer prices must be exorbitant (they are rumoured to be 50% higher). As an aside, that would make the Pirate Islands GPU around the same size as the GM204 (460-470mm²), if these weird numbers from the well-renowned bad-guessers at WCCF are to be believed.

* The GM204 part should be an analogue in the product hierarchy of the GF104/GF114 and GK104 (second tier performance) not the GK110- in which case the core count seems hugely optimistic.
 
Now if these figures are true, it will be interesting to see what AMD's TDP response will be to keep up in performance.
 
Will this new 880 make a significant performance improvement over the 780?

PS: I'm totally noob here
 
I'm using nVidia GTX 780, and my next video card will have to satisfy the following requirements:

1. Use HDMI 2.0 and DisplayPort 1.3 for outputs;
2. Perform superbly in 4K
3. Support DirectX 12 or later
4. Be at least as ergonomic as my current GTX 780 (size + power consumption).
5. Use GDDR6 (expected in 2014)

Until a product is released that satisfies all that, I won't be upgrading. I think it will take a bit longer than 1 year for such products to appear, which is fine by me :)
If all the hype surrounding Maxwell is to be believed, then its main party trick is a significant reduction in power consumption, or as they like to put it, better performance per watt.
Personally, I feel replacing a graphics card (or any component, for that matter) just for the sake of the latest model and tech is a false economy.
Methinks your GTX 780 will be sufficient for all games for a while to come.
 
Will this new 880 make a significant performance improvement over the 780?

PS: I'm totally noob here


On paper, yes.

But we will not know until the card is actually confirmed, released, and benchmarked.

I just got an R9 270X from AMD's offerings and am quite happy with it, especially as the main game I am playing at the moment, Planetside 2, was completely broken by the latest Nvidia drivers (PS2 players have to roll back for the game to run).
 
I'm using nVidia GTX 780, and my next video card will have to satisfy the following requirements:

1. Use HDMI 2.0 and DisplayPort 1.3 for outputs;
2. Perform superbly in 4K
3. Support DirectX 12 or later
4. Be at least as ergonomic as my current GTX 780 (size + power consumption).
5. Use GDDR6 (expected in 2014)

Until a product is released that satisfies all that, I won't be upgrading. I think it will take a bit longer than 1 year for such products to appear, which is fine by me :)
This ^^ this is me xD

I also have a 780 and am waiting for the exact same things. In the meantime, though, I'm going to get some G-Sync in my life, as the 780 tends to struggle at 1440p in a surprising number of games :)
 
Most English speakers understand "ergonomic" to describe efficiency in relation to human effort, not machine effort. The semantic range of the word does not include your usage (perhaps you are starting, or are part of, a movement that is expanding the semantic range of the word, but that usage has not made it into any dictionaries; language is alive and constantly changing, but we still want to be understood with as little distraction as possible, so it is typically best to avoid malapropisms).

I am just a random reader who happened to see your response and felt sorry for the guy you told to "not do that again." I now have a lowered opinion of this site, unfortunately. Perhaps you should edit your original comment, replacing "ergonomic" with "efficient", and delete the comments relating to it. That would be much better, don't you think?
 
I'm using nVidia GTX 780, and my next video card will have to satisfy the following requirements:
1. Use HDMI 2.0 and DisplayPort 1.3 for outputs
Works for me, but I'm guessing a true (non-tiled) 3840x2160, (at least) 60Hz, 10-bit (or preferably 12-bit) colour panel that also supports 4:3 pixel ratios won't be overly cheap... these being my parameters for a 4K screen.
2. Perform superbly in 4K
That might have to wait for a generation of cards geared towards 8K gaming. It's the nature of the business to shoehorn in image quality and render enhancements when and wherever possible, to keep hardware perpetually playing catch-up with gaming software. The next major addition to image quality seems to be shaping up to be global illumination (and possibly path/ray tracing). Somehow I can't see Maxwell or Pirate Islands running 4K with image quality settings that have been commonplace for years, such as transparency supersampled antialiasing (or even widespread use of 4x multisampling) and compute shader options such as depth of field, ambient occlusion, and particle/water/smoke effects.
3. Support DirectX 12 or later
Probably a given at this stage. Some DX12 features are already available to DX11 level hardware.
4. Be at least as ergonomic as my current GTX 780 (size + power consumption).
Maximum board length is set at 312mm by the ATX specification. Power consumption is pretty much governed by OEM demands so that shouldn't change. Under 75W for entry level, 75-150W lower mainstream, 150-200W mainstream/low end performance, 200-250W performance (second tier), 250-300W top tier single GPUs.
5. Use GDDR6 (expected in 2014)
There's a good chance that both Nvidia and AMD will use Hynix's High Bandwidth Memory (HBM). Stacking DRAM chips offers much more bandwidth than GDDR6, and the latter, being based on DDR4, might end up quite expensive since production is tied to fairly limited sales: basically Haswell-E based server buyers and a smattering of desktop buyers.
Until a product is released that satisfies all that, I won't be upgrading. I think it will take a bit longer than 1 year for such products to appear, which is fine by me :)
Yeah, I'm figuring late 2015/early 2016 at the earliest for a lot of this to come together, although the gaming graphics requirements (read IHV sponsorship) should always ensure that there is a market (need?) for multiple GPU setups.
 
These specs are bull all the way to the core count. Some universities have already seen engineering samples, including the Nvidia Institute at Carnegie Mellon. The memory bus is 512 bits wide. There are 288 ROPs and 94 TMUs. Also, the core count is 3,584 and the base clock rate is 972 MHz.

The wattage is about 250W, on par with the 780 Ti. 2 DVI-D ports, 1 DisplayPort 1.3 port, and 1 HDMI 2.0 port.

This is what you have to look forward to courtesy of my blabbermouth friend at CMU. Merry Christmas in June everyone.
 