TechSpot

Maxwell meets notebooks: Nvidia launches GeForce GTX 980M, 970M mobile GPUs

By Shawn Knight
Oct 7, 2014
  1. Nvidia's new desktop GPUs, the GeForce GTX 970 and 980, are now available in a mobile flavor. The chip maker took the wraps off the GeForce GTX 970M and 980M today and if you're a gamer on the go, there's plenty to...

     
  2. amstech

    amstech TechSpot Enthusiast Posts: 1,455   +606

    My 144 Cuda Core GT550M is officially prehistoric.
     
  3. ikesmasher

    ikesmasher TS Evangelist Posts: 2,550   +852

    Great now I have to wait for a GTX 960m to get a laptop so it wont be obsolete right away
     
  4. VitalyT

    VitalyT Russ-Puss Posts: 3,149   +1,424

    Is 8xxM closer to 7xxM or 9xxM?
     
  5. Adhmuz

    Adhmuz TechSpot Paladin Posts: 1,648   +521

    So the claim is that a 180-watt GTX 980 (average GPU power draw, not system load) can run at 80% in a laptop, so roughly 145 watts at full load... Seriously, you have to be joking; the power adapter for such a laptop would need to be 250 watts, and the laptop itself would be too hot to handle. And to top that, you can have two 970Ms in SLI, which again, 165 watts x 2 x 0.80 is 264 watts... And that would be just the GPUs, ignoring the rest of the laptop. Needless to say, I'm a tad skeptical about how benchmarks are going to score these; 80% of the desktop equivalent is just not believable at this stage. And none of this takes into account the way these new GPUs can spike in current draw from the power supply at any given moment.
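    For what it's worth, the arithmetic in this post works out as a quick sketch (the 180 W and 165 W figures are the poster's average-draw estimates, not official TDPs):

```python
# Rough check of the power figures quoted in the post above.
# 180 W and 165 W are the poster's average-draw estimates, not official TDPs.
gtx980_draw = 180            # claimed average GTX 980 power draw, watts
gtx970_draw = 165            # claimed average GTX 970 power draw, watts
mobile_fraction = 0.80       # the "80% of desktop" performance claim

single_980m = gtx980_draw * mobile_fraction
print(single_980m)           # 144.0 -> the "roughly 145 watts" figure

sli_970m = gtx970_draw * 2 * mobile_fraction
print(sli_970m)              # 264.0 -> two 970Ms in SLI
```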
     
    Arris likes this.
  6. ikesmasher

    ikesmasher TS Evangelist Posts: 2,550   +852

    Power consumption and performance are never directly proportionate.
     
  7. GhostRyder

    GhostRyder This guy again... Posts: 2,191   +590

    Wow 80%...That is very tempting, heck I might wanna put one in my MSI GT70!

    If true this is going to be a seriously awesome card (Or cards) on the mobile levels!

    Basic specs listed leave a lot to be desired and do not explain much yet...
     
  8. Adhmuz

    Adhmuz TechSpot Paladin Posts: 1,648   +521

    Actually, when comparing a Maxwell GPU to, say, another Maxwell GPU using the same technology and fabrication, you can, unless there are some dramatic changes in the silicon that are yet to be detailed. Also worth noting: I just used average consumption for both cards; they have been shown to spike as much as 100 watts over TDP under extreme loads. I guess it depends how you calculate your performance-per-watt ratio. What you're saying would be true if I were comparing a Haswell-series GPU to a Tonga-series GPU, but I was not, so you must be confused.
     
  9. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    Depends upon what the "xx"'s are.
    700M and 800M are the same Kepler architecture, just a silicon revision.

    The 980M is basically a GTX 970 with one shader module deactivated and lower clocks. For comparison the desktop GTX 970 has a ~12-14% performance deficit with three shader modules deactivated and a lower GPU and memory clock in relation to the GTX 980.

    Since the 780M / 880M is basically a downclocked GTX 680 / 770 and the 980M is a downclocked GTX 970, the difference should be reasonably apparent. For a numbers comparison I'd suggest Notebookcheck.net - they are the "go to" source for mobile GPUs
     
  10. ikesmasher

    ikesmasher TS Evangelist Posts: 2,550   +852

    I feel like there have to be dramatic changes in the silicon to make it work for mobile chips. But I suppose you're right.
     
  11. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    Almost guaranteed that the 980M module is a 100W TDP module since that is the MXM-B max power specification.
    @Adhmuz Your mathematics are also based upon a false assumption. From the press release (and also noted in the TS article above):
    The GTX 980M's desktop equivalent is the GTX 970 - same core/SMM count, not the GTX 980.
    Some back of the hand mathematics:
    GTX 970 official TDP: 145W * 80% = 116W * 0.92 (difference between base clocks) = 107W

    Now we all know that base clocks with Kepler and Maxwell are basically meaningless, so the obvious factor is boost clock - which if previous examples are anything to go by, will be set by the OEMs depending upon their own design requirements (performance vs cooling usually). Maxwell gobbles power once the core voltage exceeds its nominal 1.20V. Even the max standard boost of 1.218V will show a large power usage hike, while adding another voltage bin like Gigabyte does with the GTX 970 G1 Gaming (1.225V the same as the reference GTX 980) will see a much higher usage still. Incidentally, this higher voltage is the reason why Steve's review saw little difference on power consumption between the 970 and 980 - both are pegged at Nvidia's 1.225V stock 980 voltage.
    Drop the boost in the mobile parts relative to the desktop parts and you'll see the 900Ms at the usual 100W limit.
    That said, the 80% performance number is from a vendor's press release... now, I ask you, how often does a press release overstate a product's real-world capabilities?
    Gun to my head, I'd peg the actual number at around 65-70%, which is a little better than previous mobile-to-desktop comparisons but not greatly so. The bigger point of interest is actually the performance you get for your 100W budget.
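    The back-of-the-hand mathematics in this post can be written out as a quick sketch (Python, using only the figures quoted above):

```python
# dividebyzero's estimate: scale the GTX 970's official TDP by the claimed
# performance fraction and the base-clock ratio. Figures from the post above.
desktop_tdp = 145        # GTX 970 official TDP, watts
perf_fraction = 0.80     # claimed fraction of desktop performance
clock_ratio = 0.92       # mobile vs desktop base-clock ratio

estimate = desktop_tdp * perf_fraction * clock_ratio
print(round(estimate))   # 107 -> close to the 100 W MXM-B module limit
```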
     
    Last edited: Oct 7, 2014
    ikesmasher likes this.
  12. Not every GPU that comes out of the production line has the same power efficiency. Just because full-featured desktop GM204 chips are sold with a TDP rating of 165W (the official TDP from Nvidia is 165W for the GTX 980 and 145W for the GTX 970; not sure where you got your numbers) doesn't mean every single chip produced will perform like that. Nvidia can very well save the best-performing chips (in terms of efficiency) for use in laptops. And given that we're talking about a chip that runs at 145W to 165W TDP on desktop, that Nvidia's flagship mobile chips are usually around 100W, and that the GTX 980M is a harvested chip (with only 3/4 of its stream processors enabled) with reduced frequencies, likely binned from some of the most efficient chips out of the production line, this is all perfectly plausible.
    Also, remember GM204 is not Big Maxwell. There is still going to be a larger chip (GM200 or GM210) that will be exclusive to desktops and workstations. The GTX 980 will not be Nvidia's desktop flagship.
     
  13. Adhmuz

    Adhmuz TechSpot Paladin Posts: 1,648   +521

    For reference purposes, this is where I am getting the GPU power consumption figures from, as I stated. Never believe what Nvidia says; actually, that goes for both parties. The stated TDP and actual consumption are not the same, and if you read the article I linked to you will see what I am referring to. I'll take DZ's formula and add the "actual" tested consumption from Tom's; unfortunately it wasn't a reference 970 they used, but it should be closer to what the card is really using, consumption-wise, than Nvidia's claimed 145W.

    GTX 970 Tom's Tested Consumption: 177W * 80% = 141W * 0.92 (difference between base clocks) = 130W

    Which still puts them well over spec without factoring the observed peaks in power consumption under extreme load.

    Also, my math is more of a rough estimate based on provided information and estimates of performance, which, again, we should all know means nothing until they are tested in the real world. And based on the current press release I find it very hard to believe that the GTX 980M will have 80% of the performance of its desktop counterpart, in this case the 980. Had they gone along with the claim that it would be 80% of the performance of a 970, it would be a different story, but then Nvidia wouldn't be feeding us all lies, and that's just not expected.
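    Substituting Tom's measured figure into the same formula gives the number quoted above (a sketch; 177 W is the tested draw cited in the post):

```python
# Same formula as before, but with Tom's Hardware tested consumption (177 W)
# in place of Nvidia's official 145 W TDP for the GTX 970.
measured_draw = 177      # Tom's tested GTX 970 consumption, watts
perf_fraction = 0.80     # claimed fraction of desktop performance
clock_ratio = 0.92       # mobile vs desktop base-clock ratio

estimate = measured_draw * perf_fraction * clock_ratio
print(round(estimate))   # 130 -> well over a 100 W mobile module budget
```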
     
  14. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    Not really an apples-to-apples comparison. The only acceptable method of benchmarking a reference card is to either benchmark the actual reference SKU, or flash the reference BIOS onto a board that features the reference PCB and voltage circuitry. Tom's did neither. For a site banging on about their state-of-the-art power metering to:
    1. produce an obvious outlier result and publish it rather than consult the hardware vendor,
    2. not bother to monitor the voltages, nor check the BIOS, when the card produced an anomalous result, and
    3. not bother to source a reference BIOS (as Anandtech, amongst others, did) from the vendor they got their samples from, or any other vendor/Nvidia for that matter,
    defies rational explanation. As Tom's partial mea culpa, added on the page after your link, intimates.
    Well, a couple of observations:
    As I alluded to, the 80% of desktop will be some odd outlier that can be confirmed but likely isn't indicative of actual performance. PR bumpf never is, so grain of salt time with that number. I'm picking ~70% might be more "real world".
    Secondly, since base clock means nothing, you'll need boost states to make a comparison. A quick check of Anandtech's in-game clock frequency shows the card running consistently around 1200MHz.
    The boost for the 900Ms is unknown. What is known is that a hike in boost requires a commensurate raising of voltage, and thus the opposite is also true. A 100MHz drop in achieved boost frequency might drop the GPU input voltage closer to 1.1-1.15V, which would cut power consumption (and performance to a degree, which is why I think 70% of desktop is a more reasonable expectation).
    Another back of the hand calculation
    160W * 70% = 112W * 0.9 (picked-at-random difference in boost state) = 100.8W
    Thirdly, I don't trust Tom's. Never have, probably never will. Their benching leaves a lot to be desired in general. More to the point HT4U have an impeccable record for benchmarking, testing and metering procedure. As such I'd be more inclined to trust their 160W figure as a basis. They certainly didn't have their GTX 970 using more power than a higher boosting, higher base clocked, fully enabled GTX 980 as Tom's did.
    Lastly, as I mentioned, the MXM-B specification has a maximum module board power of 100W, and whereas the desktop cards carry 4GB of GDDR5, the 970M is specced for 6GB - which requires a further ~10-15W from the 100W power budget.

    Desktop parts have fudged TDPs most of the time; it's par for the course. But they don't flout international component and electrical standards (R9 295X2 excepted). Mobile parts are certainly more constrained (no aux power, heatsink module constraints), and somehow I doubt Nvidia would suddenly break an MXM standard they introduced themselves while they're pushing efficiency.
     
    Last edited: Oct 7, 2014
  15. Arris

    Arris TS Evangelist Posts: 4,606   +287

    My Clevo based laptop with a GTX680M in it has a power supply approximately the size of a house brick, I'd imagine these would require power supplies the same size or even bigger!
     
  16. Many of you are making the assumption that performance, frequency, and power all scale linearly with respect to one another when, in fact, they do not.

    For CMOS, dynamic power scales with voltage squared times frequency, and since voltage can usually be lowered along with frequency, power falls faster than linearly while performance is roughly a linear function of frequency. Under the rough square-law assumption used here, dialing the frequency to 0.8x the desktop counterpart uses only 64% of the power, which gives some additional breathing room to fit this into a laptop. Additionally, if the device does not need to run at its highest frequency, its operating voltage can be further reduced. I wouldn't be surprised if, for the same 28nm technology at TSMC, there are high-power and low-power variants to meet different customers' needs. Of course, all this assumes that TSMC is delivering perfect devices and Nvidia is delivering the perfect architecture.

    /2c
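    The scaling argument in this post can be sketched numerically. Dynamic CMOS power is approximately P = C·V²·f; at fixed voltage, power falls only linearly with frequency, while if voltage is scaled down in step with frequency, power falls roughly with the cube. The square rule of thumb above (0.8x frequency → 64% power) sits between these two bounds, corresponding to a partial voltage reduction. Illustrative numbers only, not measurements of any real GPU:

```python
# Dynamic CMOS power model: P = C * V^2 * f.
# Illustrative values only - not measured figures for any real chip.
def dynamic_power(c, v, f):
    return c * v ** 2 * f

base = dynamic_power(c=1.0, v=1.225, f=1.0)

# Frequency to 80% at unchanged voltage: power scales linearly with f.
fixed_voltage = dynamic_power(c=1.0, v=1.225, f=0.8)

# Voltage scaled down in step with frequency (V proportional to f):
# power scales with the cube of frequency (0.8**3 = 0.512).
scaled_voltage = dynamic_power(c=1.0, v=1.225 * 0.8, f=0.8)

print(fixed_voltage / base)   # ~0.80
print(scaled_voltage / base)  # ~0.51
```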
     
