For reference purposes, here is where I am getting power consumption for the GPU, as I stated. Never believe what Nvidia says - actually, that goes for both parties. The stated TDP and actual consumption are not the same, and if you read the article I linked you will see what I am referring to. I'll take DZ's formula and add the "actual" tested consumption from Tom's. Unfortunately it wasn't a reference 970 they used, but it should be closer to what the card is drawing, consumption-wise, than Nvidia's claimed 145W.
Not really an apples-to-apples comparison. The only acceptable method of benchmarking a reference card is to either benchmark the actual reference SKU, or flash the reference BIOS onto a board that features the reference PCB and voltage circuitry. Tom's did neither. For a site banging on about their state-of-the-art power metering to:
1. Produce an obvious outlier result* and publish it rather than consult the hardware vendor,
2. Not bother to monitor the voltages, nor check the BIOS, when the card produced an anomalous result, and
3. Not bother to source a reference BIOS (as Anandtech, amongst others, did) from the vendor they got their samples from - or from any other vendor, or Nvidia for that matter -
defies rational explanation. As Tom's partial mea culpa*, added on the page after your link, intimates.
GTX 970, Tom's tested consumption: 177W * 80% = 141.6W; 141.6W * 0.92 (difference between base clocks) ≈ 130W
Well, a couple of observations:
As I alluded to, the 80%-of-desktop figure will be some odd outlier that can be confirmed but likely isn't indicative of actual performance. PR bumpf never is, so grain-of-salt time with that number. I'd pick ~70% as more "real world".
Secondly, since base clock means nothing, you'll need boost states to make a comparison. A quick check of Anand's in-game clock frequencies shows the card running consistently at around 1200MHz.
The boost for the 900Ms is unknown. What is known is that a hike in boost requires a commensurate raising of voltage - and thus the opposite is also true. A 100MHz drop in achieved boost frequency might bring the GPU input voltage down closer to 1.1-1.15V, which would cut power consumption (and performance to a degree, which is why I think 70% of desktop is a more reasonable expectation).
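To put rough numbers on that voltage/frequency point: treating dynamic power as scaling with f·V², and plugging in the ~1200MHz desktop boost against a 100MHz lower mobile boost at a guessed lower voltage (the exact voltages here are my illustrative assumptions, not measured values), the cut looks like this:

```python
# Rough dynamic-power scaling: P ~ f * V^2 (switched capacitance assumed constant).
# The voltages below are illustrative guesses, not measured values.

def power_ratio(f_new, f_old, v_new, v_old):
    """Fraction of the old dynamic power at the new frequency/voltage."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# ~1200 MHz at ~1.21 V desktop boost vs a 100 MHz lower boost at ~1.125 V
ratio = power_ratio(1100, 1200, 1.125, 1.21)
print(f"{ratio:.2f}")  # roughly 0.79 - a ~20% cut from clock + voltage alone
```

So even a modest boost/voltage reduction plausibly shaves a fifth off the dynamic power, which is why the desktop figures can't be carried over directly.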
Another back-of-the-hand calculation:
160W * 70% = 112W; 112W * 0.9 (picked-at-random difference in boost state) = 100.8W
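The two competing back-of-the-hand estimates - Tom's 177W basis versus HT4U's 160W basis, each scaled by a performance fraction and a clock ratio - can be laid out side by side. All the fractions are the guesses stated in the post, not measurements:

```python
# Scale a measured desktop board power down to a mobile estimate.
# perf_fraction and clock_ratio are the guessed values from the post.

def mobile_estimate(desktop_watts, perf_fraction, clock_ratio):
    return desktop_watts * perf_fraction * clock_ratio

toms = mobile_estimate(177, 0.80, 0.92)  # Tom's figure, 80% PR claim, base-clock ratio
ht4u = mobile_estimate(160, 0.70, 0.90)  # HT4U figure, ~70% guess, boost-state guess
print(f"Tom's basis: {toms:.1f} W, HT4U basis: {ht4u:.1f} W")
# Tom's basis: 130.3 W, HT4U basis: 100.8 W
```

The spread between ~130W and ~101W is exactly why the choice of source figure matters so much here.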
Thirdly, I don't trust Tom's. Never have, probably never will. Their benching leaves a lot to be desired in general. More to the point:
HT4U have an impeccable record for benchmarking, testing and metering procedure. As such I'd be more inclined to trust their 160W figure as a basis. They certainly didn't have their GTX 970 using more power than a higher boosting, higher base clocked, fully enabled GTX 980 as Tom's did.
Lastly, as I mentioned, the MXM-B specification has a maximum module board power of 100W, and whereas the desktop cards carry 4GB of GDDR5, the 970M is specced for 6GB - which takes a further ~10-15W out of that 100W power budget.
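As a sanity check against the MXM-B cap, subtracting that extra memory draw from the 100W module limit shows how little is left for the GPU itself (the 10-15W memory figure is the rough range quoted above):

```python
# MXM-B module power budget check, using the figures from the post.
MODULE_CAP_W = 100        # MXM-B maximum module board power
EXTRA_VRAM_W = (10, 15)   # rough extra draw for 6GB vs 4GB of GDDR5

headroom = [MODULE_CAP_W - w for w in EXTRA_VRAM_W]
print(f"Budget left after extra VRAM: {headroom[1]}-{headroom[0]} W")
# 85-90 W left - nowhere near the ~130 W Tom's-based estimate
```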
Desktop parts have fudged TDPs most of the time - it's par for the course. But they don't flout international component and electrical standards (R9 295X2 excepted). Mobile parts are certainly more constrained (no aux power, heatsink module constraints), and somehow I doubt Nvidia would suddenly break an MXM standard they introduced when they're pushing efficiency.