Op-ed: The future will be measured in performance per watt

Jos

Earlier this year Nvidia launched their GeForce GTX 750 Ti and GTX 750 graphics cards. Based on the latest Maxwell microarchitecture, the release garnered a surprising amount of attention. Normally, cards in this price and performance bracket are unexciting workhorses; they’re the mainstream budget cards that people buy because they either can’t afford or simply aren’t interested in the more monstrous cards north of the $200 line.

What caught everyone’s attention wasn’t how fast the card was so much as its performance per watt. Nvidia’s Maxwell effectively doubled performance per watt against Kepler, and it thoroughly trounces AMD’s GCN architecture. It does this without the benefit of smaller process technology, an advantage Intel aggressively leverages over AMD. With the GM107 chip that powers the GTX 750 Ti and GTX 750, Nvidia has essentially produced the Pentium M of GPUs: architected to extract the most performance possible from a constrained power envelope.
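As a rough illustration of the metric itself, here is a minimal sketch of how performance per watt is typically computed; all of the numbers below are hypothetical, not benchmark results:

    # Performance per watt is just measured performance divided by measured
    # board power. All figures below are made up for illustration.
    def perf_per_watt(avg_fps: float, board_power_w: float) -> float:
        return avg_fps / board_power_w

    older_card = perf_per_watt(avg_fps=45, board_power_w=110)  # ~0.41 fps/W
    newer_card = perf_per_watt(avg_fps=50, board_power_w=60)   # ~0.83 fps/W

    print(f"improvement: {newer_card / older_card:.1f}x")  # roughly 2x

The point is that a card only slightly faster than its predecessor can still double performance per watt if it draws far less power doing it.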

Editor’s Note:
Guest author Dustin Sklavos is a Technical Marketing Specialist at Corsair and has been writing in the industry since 2005. This article was originally published on the Corsair blog.

Of course, in a way, Nvidia is playing catch up. Intel has been aggressively pursuing increased efficiency and reduced power consumption with their processors for a long time, having worn plenty of egg on their face for almost the entire run of the Pentium 4 and Pentium D families.

Haswell is arguably a bust from a performance standpoint: it draws as much power as Ivy Bridge or more for the same work, trading some load power consumption for its increased instructions per clock.

Yet the integration of voltage regulation circuitry onto the die, aggressive use of smaller manufacturing processes for the chipset, and even system-on-chip versions for mobile and all-in-ones tell a different story. Haswell’s load consumption is up, but its idle consumption can be as much as ten watts lower. Intel architects their chips to spend as much time idle as possible; the faster the chip finishes its active work, the more time it spends idle and sipping power. It works out as a net gain.
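A back-of-the-envelope sketch shows why that trade works out; the wattages and the 10% duty cycle below are assumptions for illustration, not measured Haswell figures:

    # Average power for a chip that spends most of its time idle.
    # All wattages and the 10% active fraction are assumed, not measured.
    def avg_power(load_w: float, idle_w: float, active_fraction: float) -> float:
        return active_fraction * load_w + (1 - active_fraction) * idle_w

    previous_gen = avg_power(load_w=60, idle_w=15, active_fraction=0.10)  # 19.5 W
    newer_gen = avg_power(load_w=65, idle_w=5, active_fraction=0.10)      # 11.0 W

    # Higher load power, but the lower idle floor dominates: a net gain.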

AMD’s Kaveri architecture is another move in this direction. Their GCN graphics core architecture is arguably more efficient than their VLIW4 and VLIW5 architectures of yesteryear, and Kaveri itself was architected to reduce power consumption over Trinity and Richland while offering similar or better performance on both the CPU and GPU sides. A minor process transition from 32nm to 28nm completes the package.

Many of these changes are driven by the mobile sector. Originally the goal was simply better performance in notebooks, but now hardware is being architected with tablets and smartphones in mind; Nvidia’s Maxwell was designed essentially to be ported to a smartphone SoC.

Still, better performance per watt helps us all. The benefit of the desktop PC is physics: power consumption and your thermal ceiling are far less constrained than they are in notebooks or tablets. Increased efficiency allows us to make powerful, silent machines for the living room or go the opposite direction and maximize performance in our full towers.

The run for efficiency doesn't stop there. Low-voltage DDR3L has supplanted conventional DDR3 in notebooks and ultrabooks, and power supplies are getting more and more efficient. The Corsair AX1500i is 80 Plus Titanium compliant, meaning that at most loads it never drops below 90% efficiency; even many entry-level models are pushing 80 Plus Gold now. The AX1500i is specced to a mighty 1500W, but with how efficient hardware is becoming, that 1500W can be better utilized to power a staggering amount of performance.
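To put those efficiency ratings in perspective, here is a minimal sketch of what efficiency means at the wall; the 750W load and the efficiency values are assumptions for illustration, not Corsair's published test data:

    # Wall draw = DC load / efficiency; everything above the DC load is waste heat.
    # The 750 W load and both efficiency values are assumed for illustration.
    def wall_draw(dc_load_w: float, efficiency: float) -> float:
        return dc_load_w / efficiency

    titanium_class = wall_draw(dc_load_w=750, efficiency=0.94)  # ~798 W at the wall
    bronze_class = wall_draw(dc_load_w=750, efficiency=0.85)    # ~882 W at the wall

    # Roughly 84 W less heat dumped into the room for the same 750 W of components.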

If you think about it, this is really the only direction we can go.

There was a period of time when brute force was a perfectly reasonable way to improve performance: increase power consumption, increase performance, call it a day. Or just throw more and more resources onto a chip, power consumption be damned. At some point, though, you’re just going to smash headfirst into a thermal/power wall like Intel’s NetBurst architecture did, and that’s pretty much where we’re at.

Designs need to be scalable in multiple directions; that’s what necessitates designs like Maxwell, Haswell, and Kaveri, and that’s what makes chips like Nvidia’s old GF100 absolutely horrendous for use anywhere outside of the desktop (see the GeForce GTX 480M). GK110 (GeForce GTX 780, 780 Ti, Titan, and Titan Black) at least has the high performance computing market to fall back on, but GK104 continues to do the heavy lifting for Nvidia in mobile.

Performance per watt is fast becoming the most important metric we’re judging hardware by, and it’s evident there are still big gains to be made in this department on the GPU side, at least if Nvidia’s Maxwell is any indication. The positive reception to the GeForce GTX 750 Ti is proof enough of that. Anyone who ignores it, be they as insignificant as a single builder or as massive as a semiconductor company, does so at their own peril.

Permalink to story.

 
An nvidia fanboy troll could have written a better article than this.

"Nvidia’s Maxwell effectively doubled performance per watt against Kepler, and it thoroughly trounces AMD’s GCN architecture."

Where is the data that shows Maxwell "trouncing" GCN?

"Their GCN graphics core architecture is arguably more efficient than their VLIW4 and VLIW5 architectures of yesteryear,"

Where is the data that shows GCN being "arguably" (instead of clearly) more efficient than VLIW?

This article is written by a Technical Marketing Specialist of Corsair. Is this the official position of Corsair?
 
So much about performance per watt - and where are the figures?

Not just that but I dunno about you, I would love a GPU that can handle next-next-next-gen games on max, seeing as the "Next-Gen" stuff isn't next-gen but last-last-last-gen, except Star Citizen that at least looks to be a real next-gen game!
 
Another interesting note, is that the cost of energy continues to increase much faster than inflation. In 15 years, it is easily possible that the cost of powering a laptop over a 3 year period will actually exceed the cost of the laptop itself - making power efficiency even more important.
 
This article kinda reminds me of the old American philosophy, "if you want more performance, you just add more cubes". That still holds true to a certain extent today, but to quote Bob Dylan's famous song, "the times they are a-changin'", and fast.
No gamer or even enthusiast NEEDS a 1500W+ PSU unless they are just arrogant braggers. Hell, even the old Apollo spacecraft with its enormous Saturn V booster required just 28 amps to drive all its electrical systems, and that was 50 years ago.
 
Of course, in a way, Nvidia is playing catch up. Intel has been aggressively pursuing increased efficiency and reduced power consumption with their processors for a long time...
A rather tortured comparison, I would have thought, given that GPU efficiency and CPU efficiency aren't directly comparable because of the disparate workloads.
Interestingly enough, the only product line where both vendors have comparable products is math co-processors, and Nvidia's older Kepler-architecture Tesla (235W K20X) stands up pretty well to Intel's current KNC Xeon Phi (300W SE10P). The current Green500 list, the general "go to" for GPGPU efficiency, tends to support this view.
"Nvidia’s Maxwell effectively doubled performance per watt against Kepler, and it thoroughly trounces AMD’s GCN architecture."
Where is the data that shows Maxwell "trouncing" GCN?
On the net?
It's not particularly difficult to find a perf/watt comparison. Many tech review sites now carry them. Here's a recent one from TPU for example - it only includes aggregated data from seventeen games, so YMMV
[Attached chart: perfwatt_1920.gif, performance per watt at 1920x1080]
 
Nonsense. This article could have been written with a much better tone, but the author decided to let the fanboy inside him get the best of him.
That's because it's not an article. It's an op-ed.
 
apart from the 750 Ti (and even that one is really only about 15% more efficient than the 265), performance/watt is very similar across the board.
You mean, except for Maxwell things are pretty even? But wasn't Maxwell the architecture being addressed?
"Nvidia’s Maxwell effectively doubled performance per watt against Kepler, and it thoroughly trounces AMD’s GCN architecture."
Where is the data that shows Maxwell "trouncing" GCN?
As for Maxwell being 15% more efficient than the 265, the latest reviews say it's easily double that
[Attached chart: perfwatt_1920.gif, performance per watt at 1920x1080]
 