Moore's Law isn't dead according to AMD, but it has changed

AlphaX

Staff
The big picture: On multiple occasions in recent years, Nvidia CEO Jensen Huang has stated that "Moore's Law is dead" to defend GPU price increases. During a summit, AMD's chief technology officer, Mark Papermaster, disputed these repeated claims and detailed the true reasoning behind higher costs.

Since 2017, Jensen Huang has declared Moore's Law dead several times, typically in response to questions about the steadily increasing prices of Nvidia's graphics cards.

While Nvidia graphics cards are usually good products -- the GTX 1630 being an exemplary instance -- buyers are justified in questioning the ever-growing prices. For example, in 2013, Nvidia launched the GTX 780 for a retail price of $649. Meanwhile, last month, the RTX 4080 started at a staggering $1,199.

That's a price increase of nearly 85%, and while the performance gains have certainly been larger than that, inflation over the last nine years comes nowhere close to 85%. Following the reveal of the RTX 4090 and both RTX 4080 models, Huang once again claimed that "Moore's Law is dead" during a Q&A with reporters.
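For anyone who wants to sanity-check that figure, here is a minimal sketch; the cumulative inflation value is a rough assumption for illustration, not an official CPI number.

# Quick check of the launch-price jump cited above (GTX 780 vs. RTX 4080).
gtx_780_msrp = 649       # USD, 2013 launch price
rtx_4080_msrp = 1199     # USD, 2022 launch price

increase = (rtx_4080_msrp - gtx_780_msrp) / gtx_780_msrp
print(f"Nominal price increase: {increase:.1%}")   # -> ~84.7%

# Assumed cumulative US inflation over 2013-2022 (rough figure, illustration only).
assumed_inflation = 0.25
print(f"GTX 780 in 2022 dollars (assumed): ~${gtx_780_msrp * (1 + assumed_inflation):.0f}")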

As is tradition lately with the graphics card market, AMD has shown up in an attempt to one-up Nvidia. Team Red's chief technology officer, Mark Papermaster, spoke at a recent summit and pushed back on Huang's claim, emphasizing that Moore's Law is still alive and well.

"It's not that there's not going to be exciting new transistor technologies... it's very, very clear to me the advances that we're going to make to keep improving the transistor technology, but they're more expensive," said the CTO. "So you're going to have to use accelerators, GPU acceleration, specialized function…" he added to explain how AMD manages manufacturing costs.

According to Papermaster, AMD had expected these cost increases and described them as a leading factor in the company's recent switch to chiplet designs in its processors and graphics cards.

The conflicting reports from AMD and Nvidia are puzzling once you remember that both companies receive processor wafers from TSMC. Team Red seems to be pushing to circumvent the supposed "death" of Moore's Law, while Team Green has seemingly decided to fully embrace the "Moore's Law is dead" philosophy.

Overall, in the words of the great Mark Twain, the reports of Moore's Law's death were greatly exaggerated. (Okay, he didn't say that.) Nonetheless, the law isn't dead, but it is becoming more complex and expensive to keep it aloft. It's just a matter of how each company approaches the situation.


 
We need to push into their blockheads that we WON'T pay those prices, whether Moore's Law is dead or alive and kicking. So deal with your costs, Nvidia, in a way that doesn't crush our wallets, because we WON'T pay those exorbitant prices for graphics cards just to game. The message MUST be clear enough.
 
Yeah, well, we all know Nvidia are blatant liars without reminders, you know.
AMD was more "addressing the industry" for the technically inclined rather than justifying absurd pricing like nVidia tried. AMD didn't just say "things are more expensive, deal with it." They said "here are the technical challenges the industry as a whole is facing and this is what we, as a company, are doing about it"
 
AMD was more "addressing the industry" for the technically inclined rather than justifying absurd pricing like nVidia tried. AMD didn't just say "things are more expensive, deal with it." They said "here are the technical challenges the industry as a whole is facing and this is what we, as a company, are doing about it"
They’ve had to, though. Despite good revenues for past few years, AMD aren’t generating particularly large net income figures. That in itself isn’t necessarily a bad thing but cutting edge chip R&D sucks up lots of money and AMD is competing in pretty much every major processor sector. Their graphics division is especially tight, income-wise, and they’re being sensible about saving as much as possible. That said, the 7900 XTX is still $999 which is a fair bit more than some of their previous top end cards.
 
They’ve had to, though. Despite good revenues for past few years, AMD aren’t generating particularly large net income figures. That in itself isn’t necessarily a bad thing but cutting edge chip R&D sucks up lots of money and AMD is competing in pretty much every major processor sector. Their graphics division is especially tight, income-wise, and they’re being sensible about saving as much as possible. That said, the 7900 XTX is still $999 which is a fair bit more than some of their previous top end cards.
I agree, I just don't want people unfairly comparing this to nVidia saying, "Moore's law is dead," which is what I immediately associated his original post with. It's also interesting reading about the technical challenges and what options we have to solve them.

I'd like to cite Intel as an example for a moment. People kept criticizing them for staying on 14nm for so long. Well, it wasn't that simple. While not a leaps-and-bounds increase in performance, they made several changes to their 14nm gate design to increase transistor density. So while a node shrink is a very good shotgun approach to increasing performance, it isn't the only answer.

AMD is now showing us that the shotgun approach is actually a poor one. Many parts of a chip show almost no performance increase on smaller nodes, while the smaller nodes drastically decrease yields.

I find what AMD is doing interesting in the same way I find what Intel did with 14nm interesting.
 
They’ve had to, though. Despite good revenues for past few years, AMD aren’t generating particularly large net income figures. That in itself isn’t necessarily a bad thing but cutting edge chip R&D sucks up lots of money and AMD is competing in pretty much every major processor sector. Their graphics division is especially tight, income-wise, and they’re being sensible about saving as much as possible. That said, the 7900 XTX is still $999 which is a fair bit more than some of their previous top end cards.
What are you talking about brah!? 6900xt was 999 at launch. Same fracking price and the 7900xtx will be 70 percent faster. WTF Brah!?
 
I agree, I just don't want people unfairly comparing this to nVidia saying, "Moore's law is dead," which is what I immediately associated his original post with. It's also interesting reading about the technical challenges and what options we have to solve them.

I'd like to cite Intel as an example for a moment. People kept criticizing them for staying on 14nm for so long. Well, it wasn't that simple. While not a leaps-and-bounds increase in performance, they made several changes to their 14nm gate design to increase transistor density. So while a node shrink is a very good shotgun approach to increasing performance, it isn't the only answer.

AMD is now showing us that the shotgun approach is actually a poor one. Many parts of a chip show almost no performance increase on smaller nodes, while the smaller nodes drastically decrease yields.

I find what AMD is doing interesting in the same way I find what Intel did with 14nm interesting.
Intel 14nm+... showed Intel's laziness.
They underestimated AMD and didn't think that AMD plus TSMC EUV could challenge them.

Who really wants to buy Xeon now?
The only buyers are server customers who can't wait for EPYC deliveries, and that's no longer a problem now that the chip shortage has ended.
 
What are you talking about brah!? 6900xt was 999 at launch. Same fracking price and the 7900xtx will be 70 percent faster. WTF Brah!?
The 6900 XT was released during the height of Covid, when all graphics cards were highly priced. Besides, I said ‘some of their previous top-end cards’ not all of them - the Radeon VII, for example, was $699. The point I was making is that AMD can’t absorb all of the costs or reduce them through new manufacturing techniques to keep prices to those levels.
 
Intel 14nm+... showed Intel's laziness.
They underestimated AMD and didn't think that AMD plus TSMC EUV could challenge them.

Who really wants to buy Xeon now?
The only buyers are server customers who can't wait for EPYC deliveries, and that's no longer a problem now that the chip shortage has ended.
Transistor improvements are now more about gate design and have basically nothing to do with transistor size. Intel started that with their 14nm+/++/+++.


You also can't compare nodes from different fabs. Intel's 10nm specs are nearly identical to TSMC's and Samsung's 7nm or GlobalFoundries' 12nm.
 
Matt Frusher said:
While Nvidia graphics cards are usually good products -- the GTX 1630 being an exemplary instance
And this is an exemplary instance of low level shitposting that I didn't expect to find in a serious and unbiased tech news site. But maybe I had the wrong expectation.
 
Transistor improvements are now more about gate design and have basically nothing to do with transistor size. Intel started that with their 14nm+/++/+++.


You also can't compare nodes from different fabs. Intel's 10nm specs are nearly identical to TSMC's and Samsung's 7nm or GlobalFoundries' 12nm.
I know nm naming is not physically relevant anymore.
If Intel 10nm/Intel 7 DUV is comparable to TSMC N7 EUV, then why is Tiger Lake and Alder Lake's performance per watt so much worse than Zen 3/4's?
Qualcomm significantly improved Snapdragon 8 Gen 1 performance and efficiency simply by switching from Samsung to TSMC in the same 5nm class.
 
I know nm naming is not physically relevant anymore.
If Intel 10nm/Intel 7 DUV is comparable to TSMC N7 EUV, then why is Tiger Lake and Alder Lake's performance per watt so much worse than Zen 3/4's?
Qualcomm significantly improved Snapdragon 8 Gen 1 performance and efficiency simply by switching from Samsung to TSMC in the same 5nm class.
I really have no idea. There are many people who are successful at undervolting and getting significantly better performance per watt. I guess since manufacturers have gone the "performance at any cost" route, they overclock the high-end chips as much as they can out of the box. One of the 1800Xs in my server room likes to boost to 4.3 all-core all the time, and it's only supposed to have a 4.0 single-core boost.

I read a lot about people seeing how far they can drop the stock voltage while keeping the stock clocks. There are some cases where I read about people achieving higher clocks on lower voltages than what comes out of the box. I wish I had an answer for you, but if you can tinker around you can apparently get your efficiency up pretty high.
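As a rough sketch of why that works: dynamic switching power scales roughly with voltage squared at a fixed clock, so even a modest undervolt pays off. The voltages below are arbitrary placeholders, not figures for any particular chip.

# Toy illustration: dynamic power ~ C * V^2 * f, so holding the clock (f) steady
# and dropping the voltage cuts power roughly with the square of the voltage ratio.
# (Leakage power is ignored here, so real-world gains will differ.)
stock_voltage = 1.35   # volts, assumed
undervolted = 1.20     # volts, assumed

relative_power = (undervolted / stock_voltage) ** 2
print(f"Dynamic power at the same clock: ~{relative_power:.0%} of stock")
print(f"Perf-per-watt gain: ~{1 / relative_power - 1:.0%}")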

I did a lot of overclocking back in the Opteron/FX-60 days and even got some good ones on my 3770K, but after moving back to AMD on the AM3 platform I never cared much. I guess I find it more interesting having four 1800Xs running in another room than cranking out the highest clocks for gaming these days.
 
I agree, I just don't want people unfairly comparing this to nVidia saying, "Moore's law is dead," which is what I immediately associated his original post with. It's also interesting reading about the technical challenges and what options we have to solve them.

I'd like to cite Intel as an example for a moment. People kept criticizing them for staying on 14nm for so long. Well, it wasn't that simple. While not a leaps-and-bounds increase in performance, they made several changes to their 14nm gate design to increase transistor density. So while a node shrink is a very good shotgun approach to increasing performance, it isn't the only answer.

AMD is now showing us that the shotgun approach is actually a poor one. Many parts of a chip show almost no performance increase on smaller nodes, while the smaller nodes drastically decrease yields.

I find what AMD is doing interesting in the same way I find what Intel did with 14nm interesting.

However, AMD uses different node sizes on a chip - they mainly shrink the things that give the most bang for the buck. That's why their GPUs are much cheaper to produce than Nvidia's - it also means they can keep the 14nm or 6nm parts nearly identical from one generation to the next. They shrink the logic part, so no space is wasted on the top node.

How AMD is Fighting NVIDIA with RDNA3 - Chiplet Engineering Explained
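A toy way to see why keeping most of the design on cheaper nodes and testing small dies individually can pay off is the simple Poisson defect-yield model - all numbers below are made-up assumptions, not actual TSMC or AMD figures.

import math

# Poisson defect model: fraction of defect-free dies = exp(-defect_density * die_area).
defect_density = 0.1    # defects per cm^2 (assumed)
monolithic_area = 6.0   # cm^2, one big die (assumed)
chiplet_area = 1.0      # cm^2, one of several small dies (assumed)

def defect_free(area_cm2, d0=defect_density):
    return math.exp(-d0 * area_cm2)

print(f"Monolithic die yield: {defect_free(monolithic_area):.1%}")   # ~54.9%
print(f"Single chiplet yield: {defect_free(chiplet_area):.1%}")      # ~90.5%
# Chiplets are tested before packaging, so a defect scraps ~1 cm^2 of silicon
# instead of the entire ~6 cm^2 design.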
 
If Intel 10nm/Intel 7 DUV is comparable to TSMC N7 EUV, then why is Tiger Lake and Alder Lake's performance per watt so much worse than Zen 3/4's?
That's partly down to Intel chasing performance uplift figures at any cost. They also no longer state transistor counts for their CPUs so any estimates of die density figures are very open to interpretation - how close Intel 7 is to N7 is really anyone's guess.

The biggest aspect behind perf-per-watt, though, is the fundamental architectural layout of everything inside the die. Take Intel's Pentium 4 EE 965 - one of the last of the Netburst chips, made on their 65nm process node. In the same year that it was released, Intel also shipped the Core 2 Duo E6700, which was also made on the same node. The perf-per-watt (see Anandtech's review of the E6700) was far superior to the 965's.
 
However, AMD uses different node sizes on a chip - they mainly shrink the things that give the most bang for the buck. That's why their GPUs are much cheaper to produce than Nvidia's - it also means they can keep the 14nm or 6nm parts nearly identical from one generation to the next. They shrink the logic part, so no space is wasted on the top node.

How AMD is Fighting NVIDIA with RDNA3 - Chiplet Engineering Explained
I know, that was exactly what I was talking about.
 
I know, that was exactly what I was talking about.
Fair enough.
"AMD is now showing us that the shotgun approach is actually a poor one now"

I wasn't sure what you meant by the shotgun approach - so I now take it you mean AMD is showing that Nvidia's approach is poor.

Re-read your comment and realized I skimmed too fast - you mentioned the shotgun approach in the previous paragraph - my bad.
 
The 6900 XT was released during the height of Covid, when all graphics cards were highly priced. Besides, I said ‘some of their previous top-end cards’ not all of them - the Radeon VII, for example, was $699. The point I was making is that AMD can’t absorb all of the costs or reduce them through new manufacturing techniques to keep prices to those levels.
Well, I personally do not think the pandemic had anything to do with the actual MSRP, because the real price was way higher than $999 due to scalpers and miners. I would also say the 6900 XT is the first true enthusiast high-end card AMD has made in over 10 years, because it could actually compete with Nvidia's enthusiast high-end card. Before the 6900 XT, AMD could not come close to Nvidia's high-end offerings. As for your $699 price, the Radeon VII was not a super high-end card because it could not come close to Nvidia's high end. The Radeon VII was an RTX 2080 competitor and priced in that category, which is not an enthusiast-class card like the 7900 XTX is. The 2080 Ti was well above it and priced over $999 depending on the card. So what you are saying is that the reason the 2080 Ti was over $999 was because of the pandemic? Better yet, let's go back to the GTX 690 at $999. I guess the excuse for that card costing so much was the pandemic too. Ok... So yeah, I think my point about the $999 price point being true enthusiast high-end pricing is very valid.
 
Well, I personally do not think the pandemic had anything to do with the actual MSRP, because the real price was way higher than $999 due to scalpers and miners. I would also say the 6900 XT is the first true enthusiast high-end card AMD has made in over 10 years, because it could actually compete with Nvidia's enthusiast high-end card. Before the 6900 XT, AMD could not come close to Nvidia's high-end offerings. As for your $699 price, the Radeon VII was not a super high-end card because it could not come close to Nvidia's high end. The Radeon VII was an RTX 2080 competitor and priced in that category, which is not an enthusiast-class card like the 7900 XTX is. The 2080 Ti was well above it and priced over $999 depending on the card. So what you are saying is that the reason the 2080 Ti was over $999 was because of the pandemic? Better yet, let's go back to the GTX 690 at $999. I guess the excuse for that card costing so much was the pandemic too. Ok... So yeah, I think my point about the $999 price point being true enthusiast high-end pricing is very valid.
One cannot say that the pandemic didn't have anything to do with the MSRP of graphics cards launched between 2020 and 2021 -- no vendor was going to completely absorb the increase in costs due to affected supply and distribution chains, and the higher production fees from TSMC and packaging companies.

As for the remark about $999 and top-end cards, I said this:
the 7900 XTX is still $999 which is a fair bit more than some of their previous top end cards.
Emphasis added to point out that I was referring to AMD's top-end graphics cards, i.e. the ones that have the highest specifications out of any of the consumer models they release. And again, I said "some" -- they've had more expensive top-end cards in the past, such as the Radeon R9 295X2, which was $1499; the R3 390 XT was $1399, as was the R9 290 X2. They were, of course, dual-GPU cards, just like Nvidia's GTX 690, and the majority of those types of models were hugely overpriced, partly due to them being extremely low-volume sellers.

Whatever you feel about the Radeon VII, AMD specifically marketed it as a top-end card when they launched it, and it certainly out-performed their previous 'best' graphics card, the Vega 64 LC (that particular card was released at $699 in the same year that Nvidia launched the GTX 1080 Ti at $699.)

====

Anyway, keeping to the news item's topic, Gordon Moore made his observation ("The complexity for minimum component costs has increased at a rate of roughly a factor of two per year") on the basis that semiconductor manufacturing advances would ensure that the increase in production costs would be offset by the ability to make chips twice as complex in terms of component count - i.e. newer chips would be so much better than their predecessors that they'd sell in sufficient numbers to pay for the advances, due to the economies of larger scales.
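Taken at face value, that compounding is dramatic; here's a toy sketch with arbitrary starting figures (the quoted 1965 cadence was one year, later revised to roughly two):

# Illustrative only: component counts compounding at the quoted yearly doubling rate.
base_components = 64          # arbitrary starting component count
doubling_period_years = 1     # per Moore's 1965 observation (revised to ~2 in 1975)

def projected(years, base=base_components, period=doubling_period_years):
    return base * 2 ** (years / period)

for years in (1, 2, 5, 10):
    print(f"After {years:>2} years: ~{projected(years):,.0f} components")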

Nvidia's Jensen Huang is basically saying that this is no longer the case and if one is expecting the same continued rate of GPU complexity, everyone is going to have to pay more for it. AMD's Mark Papermaster has essentially said that the traditional methods of chip manufacturing do result in higher costs, which is why they've gone down the chiplet route.

Ironically, they're actually agreeing with each other -- they both agree that it's becoming increasingly more expensive to fabricate new GPUs (and to a certain degree, associated global memory) and it's no longer simply a case that making a greater number of smaller and faster processors will pay for it.

It's how they are going about addressing it that's different. Nvidia = significantly increase prices and keep on doing everything the same way as they have for years. AMD = increase prices a bit but explore alternate designs and systems to keep price rises under control.
 
TBH, gamers in general, remind me of three particular segments of American society, children, labor unions, and welfare recipients.

What they share in common is; no matter how much you give them, they always want more.

The more habitual of gaming aficionados, tipped their hand by paying double what cards were worth from scalpers. "Want", becomes "need", in the minds of the badly afflicted. Together they "have a baby", and name it "greed".

Couple that up with the fact that, whatever shiny new electronic trinket the industry dangles in front of your faces, you can't live without, and you have the perfect recipe for "ripoff stew".

Once upon a time, the premium Pentium 4 "Ultimate" (or whatever it was called) listed for $1,000. By how many multiples do you think a lowly $125.00 Alder Lake i3-12100 would outperform it? I'm thinking 5 times at minimum.

So, many of you need to realize you've brought some of this on yourselves, and deal with it.

In related news, the RTX 3090 Ti blows AMD's flagship out of the water. So if you feel the need for 500 (or so) FPS, suck it up and pay the piper.
 
And this is an exemplary instance of low level shitposting that I didn't expect to find in a serious and unbiased tech news site. But maybe I had the wrong expectation.
Well, Mr. Bear, if you think this article was a "sh!tpost", it seems to me more likely you simply forgot to light a match when you farted in the bathroom.

The x30 Nvidia series cards have always been serviceable cards, for the purpose for which they were intended. Which BTW, isn't to make you king of the hill in a pro gamers contest.

The GTX 1630 is more capable than the GT 1030, which of course you'd likely consider beneath your dignity as well. It also introduces 4 GB of GDDR6. Its utility could be as a mild hardware accelerator for photo editing, low-res gaming (agreed, it's not a "console killer"), and a workaday card to "take a load off" an older IGP.

The price at present is outrageous. However, I saw them at Newegg @ $150, so they may come down as volume sales increase. (That's a big "maybe", though.) The GT 1030, at the end of its run, went for about $100.

Were you to need a card for those purposes today, you would be better off buying one of AMD's lower-end cards, which are being blown out ATM. But make up your mind quickly, I doubt they'll be around forever.

Still, the GTX 1630 is not a bad card, nor is the article a "sh!tpost".
 
Moore's law isn't dead.

But, then again, neither is Moore's second law.
"Rock's law or Moore's second law, named for Arthur Rock or Gordon Moore, says that the cost of a semiconductor chip fabrication plant doubles every four years."
 