Nvidia's Jensen Huang once again claims Moore's Law is dead

Jimmy2x

Posts: 142   +11
Staff
Why it matters: Earlier this week, Nvidia pulled back the curtain on the much-anticipated RTX 40-series graphics card lineup. To no one's surprise, the new additions to the RTX family bring consumers significant increases in capability, power draw, and, unfortunately, overall cost. According to Nvidia CEO Jensen Huang, the trend of chips and other components going down in price "is a story of the past."

"Moore's Law is dead," Huang declared during a Q&A session with the media when asked about the rising GPU prices.

According to Huang, a 12-inch wafer is "a lot more expensive today," and he insisted that the price increases are justified when compared to the performance levels offered by previous generations. The statements reinforce Nvidia's already uncomfortable pricing narrative, which has so far been poorly received and widely viewed as a justification for inflated prices.

"The idea that the chip is going to go down in price is a story of the past." — Jensen Huang

It's not the first time Nvidia's leather-clad leader has called Moore's Law dead. In 2017, Huang challenged the validity of the long-standing technical observation during a speech at the GPU Technology Conference in Beijing. During the speech, he also said that continued advances in GPU technology would someday lead to GPUs replacing some CPUs entirely.

Fast forward to 2018, and Nvidia and Huang went as far as to posit a new observation of their own, stating that advancements in GPU technology far exceeded traditional CPU advancements. The outlook, aptly named Huang's Law, holds that where Moore's Law no longer applies to CPUs, GPU performance will more than double every two years.

Nvidia's views on Moore's Law and similar technological observations don't stop there. In 2019, Huang continued to pound nails into Moore's coffin at that year's Consumer Electronics Show in Las Vegas. The Nvidia CEO elaborated on the slowing rate of technological advancement, stating that "Moore's Law isn't possible anymore."

Industry experts, including the man himself, Gordon Moore, have long acknowledged that the law cannot persist forever. In 2005, Moore stated that the industry could not sustain the projection indefinitely and that transistors would eventually reach the limits of miniaturization.

He later clarified his forecast, predicting the law could remain a viable observation through 2025. With the current string of architectural advancements, such as Intel's big.LITTLE-style hybrid core design and AMD's 3D V-Cache, there's always the possibility of CPU manufacturers breathing new life into Moore's 57-year-old law.


 

envirovore

Posts: 533   +982
TechSpot Elite
Man, people are in for a surprise when AMD GPUs are more expensive than they thought they'd be, due to increased wafer and other component costs being passed on to the consumer through them as well.

This is simple business practice. Neither Nvidia, AMD, nor Intel is going to just eat the cost of these products; we are, and it's not going to be exclusive to Nvidia either.

 

terzaerian

Posts: 1,517   +2,259
Man, people are in for a surprise when AMD GPUs are more expensive than they thought they'd be, due to increased wafer and other component costs being passed on to the consumer through them as well.

This is simple business practice. Neither Nvidia, AMD, nor Intel is going to just eat the cost of these products; we are, and it's not going to be exclusive to Nvidia either.
Of course, inflation is a *****. As long as AMD's next gen isn't using power delivery standards that are still basically prototypes, and isn't drawing appliance-level loads, that's a win in my book.
 

envirovore

Posts: 533   +982
TechSpot Elite
Of course, inflation is a *****. As long as AMD's next gen isn't using power delivery standards that are still basically prototypes, and isn't drawing appliance-level loads, that's a win in my book.

I'm quite interested in seeing how the move to a chiplet design will affect their power consumption.
I *really* want to see them on par with NV as far as additional features (specifically ray tracing) and performance go.

That said, as far as power use goes, I'm already pulling 400W under load on the GPU as it is, so *for me*, as long as whatever I decide to upgrade to (if I choose to do so) is at or below that, it's fine.
 

takaozo

Posts: 421   +641
I'm not expecting any miracle from AMD's prices, and I'm not looking for a new-gen card, just for the 6700-6800 to drop enough to make sense for me.
In the last few days I've seen some 6700s below $300 where I live. It shouldn't be long before they hit $250.
And btw, I'm not a beta tester for AMD's chiplet GPU design either.
 

R00sT3R

Posts: 744   +2,304
Imagine the scene at Nvidia HQ during the meeting to set the prices of the 40xx cards...

Jensen: 'Guys, I've come up with a genius solution to beating the scalpers: let's just set the MSRP at scalper prices!'

Rest of the room: 'Jensen, you're so smart, we love you'
 

DSirius

Posts: 366   +760
TechSpot Elite
I'm not expecting any miracle from AMD's prices, and I'm not looking for a new-gen card, just for the 6700-6800 to drop enough to make sense for me.
In the last few days I've seen some 6700s below $300 where I live. It shouldn't be long before they hit $250.
And btw, I'm not a beta tester for AMD's chiplet GPU design either.
I may become a beta tester; I'll keep you updated.
 

yRaz

Posts: 4,813   +5,995
I'm quite interested in seeing how the move to a chiplet design will affect their power consumption.
I *really* want to see them on par with NV as far as additional features (specifically ray tracing) and performance go.

That said, as far as power use goes, I'm already pulling 400W under load on the GPU as it is, so *for me*, as long as whatever I decide to upgrade to (if I choose to do so) is at or below that, it's fine.
Frankly, I don't see the cost of silicon being the biggest driving factor in the increased cost of these cards. They are using absolutely RIDICULOUS coolers; I wouldn't be surprised if the cost on nVidia's side for the 4080 cooler is well over $100 a unit. The other part of this is that since they are setting absolutely crazy power limits, they need to put the electrical backbone on the boards to provide proper, clean power to these chips. nVidia's solution to performance is just to pump more power into the chip; there is very little innovation on their part.

nVidia is catering to the deep learning crowd and putting all this AI stuff on their chips, taking up valuable silicon. These don't seem like consumer cards; they seem like enterprise cards, with consumers as an afterthought. Then they find software solutions to use up the extra silicon they wasted on the chip. Going on power limits alone, these cards look more like something that belongs in a server rack, not a computer case. And I would cite the MASSIVE coolers and video card support beams as evidence of that.

And now, if we want to use these cards, not only do we have to go out and buy a new power supply, we have to buy a VERY EXPENSIVE power supply, something in the 1000W+ range. I'd say if you want a 4080 you'd likely need something in the 1200W+ range. That is getting very close to what you find in server racks.
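Here's a rough back-of-the-envelope way to sanity-check that kind of number (every figure below is an assumption for illustration, not a measured spec):

```python
# Rough PSU sizing sketch -- all numbers are illustrative assumptions,
# not measured figures. Idea: pad the GPU's rated power for transient
# spikes, add the rest of the system, then keep the PSU below full load.

def recommend_psu_watts(gpu_tdp, cpu_tdp, other=100,
                        transient_factor=1.3, target_load=0.8):
    """Return a suggested PSU rating in watts.

    gpu_tdp          -- advertised GPU board power
    cpu_tdp          -- CPU package power under load
    other            -- rough allowance for fans, drives, RAM, motherboard
    transient_factor -- assumed padding for short GPU power spikes (30%)
    target_load      -- keep estimated peak draw at ~80% of the PSU rating
    """
    peak_estimate = gpu_tdp * transient_factor + cpu_tdp + other
    return int(round(peak_estimate / target_load, -1))

# Hypothetical build: a 450W-class GPU paired with a 250W CPU
print(recommend_psu_watts(450, 250))  # ~1170 with these assumptions
```

With those made-up numbers you land right around that 1000-1200W range anyway, which is the point.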

It just seems like laziness, and like nVidia doesn't want to develop cards for consumers. They want to keep development costs low and pass the secondary costs of using their cards onto the consumer (i.e., coolers, power supplies, video card support beams, etc.).

Like, what is actually going on with this crap? It was crazy watching power consumption numbers like the GTX 280's and Vega 64's, but these cards are DOUBLE those. These numbers are DOUBLE what people already thought was absurd, and now they're just saying, "shut up, buy it, Moore's Law is dead."

We all complained about the 280 and Vega 64 cards, but the industry accepted it. In my 25 years of following the tech industry I have never seen anything this absurd. This is craziness, and I don't think the market will accept it like it did in the past. These cards are too expensive and there are too many downsides to owning them. They want me to pay $1,200 for a card that's going to make it uncomfortable to sit in my room? I mean, I guess I could run some cables and keep it in the garage or something, but I like the aesthetics of having the computer I built in my room. It's fun choosing coolers and picking out a case I like. That's a major part of the hobby for me: building something I think is cool and showing it off in my gaming room.

I guess what I'm getting at is that I don't really see where these products fit into the market. It isn't just a question of price anymore; there are too many things surrounding these cards that make them unattractive to consumers. I have never seen such strong pushback from consumers in all my years in the hobby.

Sorry for the long post
 

neeyik

Posts: 2,255   +2,718
Staff member
nVidia's solution to performance is just to pump more power into the chip; there is very little innovation on their part.
Without any independent testing of power consumption, one simply can't make that criticism of a chip that has 2.7 times more transistors than its predecessor (and packed into a smaller die, too). Nvidia are claiming a peak consumption of 450W for the 4090 - the same amount they claimed for the 3090 Ti. TechPowerUp recorded the latter hitting 445W during gaming, with a maximum of 478W, so it was a reasonably accurate claim.
 

yRaz

Posts: 4,813   +5,995
Without any independent testing of power consumption, one simply can't make that criticism of a chip that has 2.7 times more transistors than its predecessor (and packed into a smaller die, too). Nvidia are claiming a peak consumption of 450W for the 4090 - the same amount they claimed for the 3090 Ti. TechPowerUp recorded the latter hitting 445W during gaming, with a maximum of 478W, so it was a reasonably accurate claim.
nVidia is claiming 450W on reference cards. Just like with the 3090 Ti, board partners cranked up the power past nVidia's specifications. I also don't understand why nVidia would openly say they're allowing a 600-watt power limit if they're expecting board partners to stay within the 450-watt limit. Then look at the 4-slot coolers; they are massive compared to the 3090 Ti's. It is my opinion that nVidia fully expects board partners to go beyond the 450-watt power limit so they can point to the reference card and say, "look, we said 450, they're the ones who made 600-watt cards."

I would also like to point to Gamers Nexus' testing of transient electrical spikes. We don't know if nVidia has found a solution to this on the 40 series. I would argue that a 450-watt card that has transient spikes of 600 watts is a 600-watt card.
 

yRaz

Posts: 4,813   +5,995
Nvidia do not "expect"; they dictate to AIBs what to do. Just look at EVGA.
These high-end cards have support for up to 675 watts of power draw between the PCIe slot and the 2x12-pin connectors. Why would they put that in there if they didn't expect people to use it or need it in the future? And as scummy a company as they are, I wouldn't doubt they hide behind their board partners. From what I could see of the EVGA situation, they were sick of being told what to do and taking the heat for what nVidia was telling them to do: "the reference cards are 450, we're giving you 675 watts to play with. We want you to go wild."
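If the 675-watt figure seems oddly specific, it plausibly breaks down as the slot limit plus the connector limit; the two values below are the published spec maximums as I understand them, not something stated in this thread:

```python
# Assumed breakdown of the 675W ceiling (spec maximums, not thread data)
PCIE_SLOT_W = 75    # max power the PCIe x16 slot itself can deliver
CONNECTOR_W = 600   # max rating of the 16-pin (12VHPWR) power input
print(PCIE_SLOT_W + CONNECTOR_W)  # 675
```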
 

neeyik

Posts: 2,255   +2,718
Staff member
I would also like to point to Gamers Nexus' testing of transient electrical spikes. We don't know if nVidia has found a solution to this on the 40 series. I would argue that a 450-watt card that has transient spikes of 600 watts is a 600-watt card.
TechPowerUp noticed this too: 528W in 20-millisecond intervals. Not quite as bad as the RX 6900 XT, which was hitting 619W (same time interval) in their testing.
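For a sense of scale, here's a quick sketch of how far those 20 ms readings sit above each card's rated board power; the 300W rating assumed for the reference RX 6900 XT is from memory, not from this thread:

```python
# Transient spike vs. rated board power, using the TechPowerUp figures
# quoted above. The 300W rating for the RX 6900 XT is an assumption.
readings = [
    # (label,                rated board power W, 20 ms transient W)
    ("450W-rated card",      450, 528),
    ("RX 6900 XT (assumed)", 300, 619),
]

for label, rated, transient in readings:
    print(f"{label}: {transient}W spike = {transient / rated:.0%} of rated power")
```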
 

takaozo

Posts: 421   +641
I guess just by looking at the Gigabyte Aorus 4090, you may be right.
That thing is not going to fit in 90% of current cases.
 

yRaz

Posts: 4,813   +5,995
TechPowerUp noticed this too: 528W in 20-millisecond intervals. Not quite as bad as the RX 6900 XT, which was hitting 619W (same time interval) in their testing.
It's interesting you brought up the 6900. I was thinking about why that might be, and my thought is that maybe one of the reasons AMD cards are slightly cheaper is that they use fewer or lower-quality capacitors on the board. That would lead to larger transient spikes hitting the power supply. However, capacitors are so cheap that it doesn't really make sense as a cost-cutting measure. Maybe it requires increased board complexity and therefore cost?

I don't have an answer, just an observation