Intel might be catching up to AMD's discrete GPU market share

"bandwidth here between the MCDs and GCD is 5.3TB/s"
That's the total bandwidth between all of the MCDs and the GCD — an individual MCD connection is 883 GB/s. That's very good, but for the application you're suggesting, it's not enough. AMD would need to adopt a tile system like the one Intel is going to use with Meteor Lake, and that would only add to the packaging costs. One can price a high-end model high enough to absorb that, but at the low and mid-range it would be harder to achieve. Not impossible, just harder (and more costly).
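(A quick back-of-the-envelope sketch of that per-MCD figure, assuming the quoted 5.3 TB/s aggregate is split evenly across Navi 31's six MCDs — the six-MCD count is my addition, not part of the quote:)

```python
# Per-MCD link bandwidth, assuming the quoted 5.3 TB/s aggregate is
# shared evenly by Navi 31's six MCDs (the MCD count is an assumption
# added here for illustration).
total_tb_s = 5.3   # GCD <-> all MCDs combined
num_mcds = 6       # Navi 31 memory cache dies

per_mcd_gb_s = total_tb_s * 1000 / num_mcds
print(f"~{per_mcd_gb_s:.0f} GB/s per MCD link")  # ~883 GB/s
```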

Server/datacenter applications of multi-GPU systems either use a really expensive interconnect or get used in situations where inter-GPU communication doesn't need low latency and high bandwidth.

It's also worth noting that AMD doesn't use chiplets for all its products — its mobile CPUs are still monolithic.
 
Inflation makes people complain about everything, even without understanding the reality. Analyzing the cost of production, development, and long-term software support, it is very difficult to produce cheap low-end GPUs and make a profit. AMD has to sell a few million GPUs just to pay the design cost.

In my view, AMD could further reduce production costs if it adopted a chiplet strategy similar to its CPU line, where one piece (the CCD) is developed and used from entry-level to high-end. For example, AMD would develop a low-end base GPU with 32 CUs, then put two chips together for a mid-range product (64 CUs), and four together (128 CUs) for the high-end.
MCM solutions and dual-GPU setups on video cards have proven very difficult to get working without problems. That's why AMD and Nvidia both essentially abandoned SLI/CrossFire. With RDNA 3, AMD made some sort of MCM solution, but only the cache chips are separate.
Those two points aren't mutually exclusive -- one can criticize a product as being awful, even if the competition offers nothing in the same price category. Take a look at this monstrosity:


That's a dual-slot, 11.1" long card with nothing more than a 3% overclock. Sure, one can buy it on Amazon for $162 but nothing about that appeals. Gigabyte's Eagle version of the 6500 XT isn't overclocked but at least the card is a lot shorter -- it's around $190 on Amazon, though.

For the price and size, one may as well go with an RX 580, which goes for $120 on Amazon. Sure, it won't be quite as fast as the 6500 XT in the latest games, but it won't be far off either, and it's 26% cheaper.
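(For what it's worth, a quick sketch checking that 26% figure against the Amazon prices above:)

```python
# Relative saving of the $120 RX 580 versus the $162 RX 6500 XT
# (both prices taken from the comment above).
rx_6500_xt = 162
rx_580 = 120

saving = (rx_6500_xt - rx_580) / rx_6500_xt
print(f"RX 580 is about {saving:.0%} cheaper")  # ~26%
```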


Arguably, it did nothing wrong -- at the time, there was scope for a product in the market and it filled it.
Right now the 6500 XT surely looks pretty bad, but when it launched there was not much wrong with it, as the last paragraph says. That's why such a low score really made me wonder what was supposed to be so wrong with it.
However, they could have used the dregs of the Navi 23 die bins to furnish that model. AMD could have disabled an entire Shader Engine and half the L3 cache (halving the number of memory controllers at the same time) to make a "Navi 24" die. It still would have been a better choice -- no complaints about the PCIe bus, no complaints over the number of video outputs, and no complaints about the lack of an encoder.

Given that there's no chip shortage now and AMD doesn't have any RDNA 3 models in this sector yet, it could easily update the RX 6500 XT with castrated Navi 23 dies to address matters.
I agree with this partially. Right now AMD could do quite a lot to replace the 6500 XT. There were rumours (facts?) about a Navi 3x design flaw that was supposed to take some months to correct. Perhaps AMD would rather wait for those fixed versions than make much more 7nm stuff. Today it's hard to keep track of what is just rumour and what is at least a partially confirmed fact. Anyway, the mid-range GPU market is still pretty dead, and I'm waiting for new models to arrive before buying anything new.
 
Intel should just shut up and work. Despite being a gigantic company, they are not doing well and keep failing on many fronts.

Intel GPUs only exist because AMD allows them through its IP licence, so no... less arrogance, Intel. It helps.

It's an investor meeting. They're legally required to "talk". Not to mention it would be stupid not to tell your investors when the media is saying incorrect things (and if they outright lie when they release that financial data and those predictions, they can and do go to jail).

If Intel can get a process within a node of TSMC's, they will likely severely threaten AMD, because Intel has such a huge IP chest and some amazing packaging technologies. We have already seen that if you could make any of the recent Intel CPUs on the same node as AMD's, they would be faster and at least as power efficient.

People forget that AMD's biggest advantage was process node superiority. Their chiplet approach is mainly a great way to save money more than it is a big performance improvement. If Intel gets access to the same node, or even one node behind, at a similar cost, they have a number of packaging tricks and other techniques that will really make life difficult for AMD (since AMD is totally reliant on TSMC and that node superiority).

I think AMD made a mistake not developing multiple proprietary graphics techniques to put into the consoles and then offering those on their PC GPUs. That way they would have something no one else could duplicate (easily) and built-in developer support through the consoles.

I'm glad they didn't, but I still think it was a mistake (it's possible Microsoft and Sony demanded the more compatible specs they have, to make opening up the PC gaming market easier, though).
 
If Intel can get a process within a node of TSMC's, they will likely severely threaten AMD, because Intel has such a huge IP chest and some amazing packaging technologies. We have already seen that if you could make any of the recent Intel CPUs on the same node as AMD's, they would be faster and at least as power efficient.

People forget that AMD's biggest advantage was process node superiority. Their chiplet approach is mainly a great way to save money more than it is a big performance improvement. If Intel gets access to the same node, or even one node behind, at a similar cost, they have a number of packaging tricks and other techniques that will really make life difficult for AMD (since AMD is totally reliant on TSMC and that node superiority).
That is total BS. The Intel 7 node is pretty much comparable to TSMC's 7nm node. At the very least, neither is clearly superior to the other.

Then it comes down to what the companies achieve with somewhat similar nodes. AMD's desktop Ryzens are basically cut-down server chips, so focusing on the server lineup makes sense. AMD released the Zen 2 Epycs ("Rome") on TSMC's 7nm node around three and a half years ago. Intel's latest Sapphire Rapids (Intel 7/10nm) cannot even match that over-three-year-old AMD server chip on efficiency and/or core count. Too bad; since then AMD has released two more generations of server chips (Milan, Zen 3, 7nm, and Genoa, Zen 4, 5nm) that are even further ahead.

So saying Intel is behind because AMD uses TSMC's superior node is total BS. Right now Intel would need to be at least four nodes ahead of AMD to be even remotely competitive.
I think AMD made a mistake not developing multiple proprietary graphics techniques to put into the consoles and then offering those on their PC GPUs. That way they would have something no one else could duplicate (easily) and built-in developer support through the consoles.

I'm glad they didn't, but I still think it was a mistake (it's possible Microsoft and Sony demanded the more compatible specs they have, to make opening up the PC gaming market easier, though).
That would be pretty hard, since Microsoft and Sony cooperated when AMD designed the console APUs. Basically both use the same chips, but both also wanted something of their own in there.
 
That is total BS. The Intel 7 node is pretty much comparable to TSMC's 7nm node. At the very least, neither is clearly superior to the other.

Then it comes down to what the companies achieve with somewhat similar nodes. AMD's desktop Ryzens are basically cut-down server chips, so focusing on the server lineup makes sense. AMD released the Zen 2 Epycs ("Rome") on TSMC's 7nm node around three and a half years ago. Intel's latest Sapphire Rapids (Intel 7/10nm) cannot even match that over-three-year-old AMD server chip on efficiency and/or core count. Too bad; since then AMD has released two more generations of server chips (Milan, Zen 3, 7nm, and Genoa, Zen 4, 5nm) that are even further ahead.

So saying Intel is behind because AMD uses TSMC's superior node is total BS. Right now Intel would need to be at least four nodes ahead of AMD to be even remotely competitive.

That would be pretty hard, since Microsoft and Sony cooperated when AMD designed the console APUs. Basically both use the same chips, but both also wanted something of their own in there.

It's been too long since I've looked over the differences between the process nodes, but last I checked Intel 7 was really their 10nm process slightly improved, which barely matched TSMC's 7nm node.

That said, I was really comparing consumer-level chips, where core count isn't nearly as important as single-threaded performance with a moderate core count (and Intel's E-cores seem to handle background tasks quite well, freeing up the P-cores for gaming, etc.).

At the server level AMD has a pretty commanding lead thanks to that chiplet design, and there chip-to-chip latency isn't nearly as big a deal (it's something server software has had to contend with for decades due to multi-CPU systems anyway).

So I'd say we are both right, in a way. But since the big money is in servers, that lead AMD has is something Intel will need to fix, and quickly. And the real monkey wrench in the whole system is the fact that more and more large companies are just rolling their own ARM servers (or someday maybe RISC-V). It's certainly a much more interesting time to be watching the field!
 
It's been too long since I've looked over the differences between the process nodes, but last I checked Intel 7 was really their 10nm process slightly improved, which barely matched TSMC's 7nm node.

That said, I was really comparing consumer-level chips, where core count isn't nearly as important as single-threaded performance with a moderate core count (and Intel's E-cores seem to handle background tasks quite well, freeing up the P-cores for gaming, etc.).

At the server level AMD has a pretty commanding lead thanks to that chiplet design, and there chip-to-chip latency isn't nearly as big a deal (it's something server software has had to contend with for decades due to multi-CPU systems anyway).

So I'd say we are both right, in a way. But since the big money is in servers, that lead AMD has is something Intel will need to fix, and quickly. And the real monkey wrench in the whole system is the fact that more and more large companies are just rolling their own ARM servers (or someday maybe RISC-V). It's certainly a much more interesting time to be watching the field!

It's an investor meeting. They're legally required to "talk". Not to mention it would be stupid not to tell your investors when the media is saying incorrect things (and if they outright lie when they release that financial data and those predictions, they can and do go to jail).

If Intel can get a process within a node of TSMC's, they will likely severely threaten AMD, because Intel has such a huge IP chest and some amazing packaging technologies. We have already seen that if you could make any of the recent Intel CPUs on the same node as AMD's, they would be faster and at least as power efficient.

People forget that AMD's biggest advantage was process node superiority. Their chiplet approach is mainly a great way to save money more than it is a big performance improvement. If Intel gets access to the same node, or even one node behind, at a similar cost, they have a number of packaging tricks and other techniques that will really make life difficult for AMD (since AMD is totally reliant on TSMC and that node superiority).

I think AMD made a mistake not developing multiple proprietary graphics techniques to put into the consoles and then offering those on their PC GPUs. That way they would have something no one else could duplicate (easily) and built-in developer support through the consoles.

I'm glad they didn't, but I still think it was a mistake (it's possible Microsoft and Sony demanded the more compatible specs they have, to make opening up the PC gaming market easier, though).
Are you kidding? Intel lies, scams, and cheats, deliberately or obliviously, all the time, on top of continually failing and stumbling in several areas: memory, NAND, lithography, internet chips, servers. And now it has decided to embarrass itself thinking it can compete with AMD and Nvidia, who own enough IP to nullify any competition. Just review the last 10 years.

Having superior lithography is only part of the story. The modular architecture and chiplet strategy that lets AMD create one design that scales from a Ryzen 5 to an EPYC, saving billions in development/design costs, was the main factor that finally allowed AMD to surpass Intel, even though it is vastly more limited financially than Intel.
 
It's been too long since I've looked over the differences between the process nodes, but last I checked Intel 7 was really their 10nm process slightly improved, which barely matched TSMC's 7nm node.

That said, I was really comparing consumer-level chips, where core count isn't nearly as important as single-threaded performance with a moderate core count (and Intel's E-cores seem to handle background tasks quite well, freeing up the P-cores for gaming, etc.).

At the server level AMD has a pretty commanding lead thanks to that chiplet design, and there chip-to-chip latency isn't nearly as big a deal (it's something server software has had to contend with for decades due to multi-CPU systems anyway).

So I'd say we are both right, in a way. But since the big money is in servers, that lead AMD has is something Intel will need to fix, and quickly. And the real monkey wrench in the whole system is the fact that more and more large companies are just rolling their own ARM servers (or someday maybe RISC-V). It's certainly a much more interesting time to be watching the field!
Intel's first 10nm tech was around 3.5 years late (won't even bother to check, but anyway) and it was quite bad. Intel's (probably reworked) latest 10nm Enhanced SuperFin is renamed Intel 7. That's not the original one: https://www.intel.com/content/dam/w...ccelerating-process-innovation-fact-sheet.pdf

That one is somewhat comparable to TSMC 7nm. At the very least we can conclude that if AMD on 7nm surpasses Intel 7, it's mostly down to something other than the process.

On the consumer level we can hardly say Intel has any real chance. Yes, they can do an 8-core chip (P-cores) with tons of crap cores (E-cores), but when it comes to gaming, the problem is that the E-cores are basically disabled and totally useless. In a game that uses more than 8 cores (I played one in 2008...), Intel is essentially an 8-core chip while AMD (a 7950X, say) is a 16-core chip. And that's before considering that AMD has a huge lead in power consumption, despite AMD's chips being more cut-down server chips than purpose-built desktop chips like Intel's. For desktop, I can safely say Intel needs at least a two-node advantage over AMD to be on the same level right now, if we also take power consumption into consideration.

ARM has very few advantages over AMD's chips; the main one is that the architecture is somewhat free for tweaking. Apart from that, ARM is still pretty far away, except in "gazillions of low-power cores" scenarios, which AMD will largely address with C-cores.
 
Are you kidding? Intel lies, scams, and cheats, deliberately or obliviously, all the time, on top of continually failing and stumbling in several areas: memory, NAND, lithography, internet chips, servers. And now it has decided to embarrass itself thinking it can compete with AMD and Nvidia, who own enough IP to nullify any competition. Just review the last 10 years.

Having superior lithography is only part of the story. The modular architecture and chiplet strategy that lets AMD create one design that scales from a Ryzen 5 to an EPYC, saving billions in development/design costs, was the main factor that finally allowed AMD to surpass Intel, even though it is vastly more limited financially than Intel.
They might have better hardware, but they still have quite a way to go to hold more market share than Intel, either in servers or consumer gear. Fab output is one area where that can be seen. AMD (via TSMC) simply doesn't have the capacity to take a majority share of the server market even if the buyers were there. A big part of their allocation was going to the consoles (that's likely to lessen as servers and consumer-grade stuff all move to 5nm and below).

Intel still holds the majority of the laptop market as well.

Don't get me wrong, AMD is great; I have a Ryzen machine (as well as an Intel one). But AMD is definitely still playing catch-up, and Intel is working hard and fast to recapture lost market share. Intel's Foveros die packaging is quite a bit different from AMD's approach and likely to work much better in mobile segments and consumer-level systems (I'm not sure how they plan to leverage it in servers, but it won't have the same obvious advantages there, because chip-to-chip latency is usually less of a problem).

A good example of what I mean can be seen by comparing game performance between a 7950X and a 7800X. As soon as a game has to go off-die to another chiplet, it takes a significant latency penalty, and it shows in performance. Intel likely won't see the same issue, since their packaging technique is vertical die stacking, so it's still all one stack rather than multiple dies/chiplets spread around a single processor that have to communicate over the much slower Infinity Fabric.
Intel's first 10nm tech was around 3.5 years late (won't even bother to check, but anyway) and it was quite bad. Intel's (probably reworked) latest 10nm Enhanced SuperFin is renamed Intel 7. That's not the original one: https://www.intel.com/content/dam/w...ccelerating-process-innovation-fact-sheet.pdf

That one is somewhat comparable to TSMC 7nm. At the very least we can conclude that if AMD on 7nm surpasses Intel 7, it's mostly down to something other than the process.

On the consumer level we can hardly say Intel has any real chance. Yes, they can do an 8-core chip (P-cores) with tons of crap cores (E-cores), but when it comes to gaming, the problem is that the E-cores are basically disabled and totally useless. In a game that uses more than 8 cores (I played one in 2008...), Intel is essentially an 8-core chip while AMD (a 7950X, say) is a 16-core chip. And that's before considering that AMD has a huge lead in power consumption, despite AMD's chips being more cut-down server chips than purpose-built desktop chips like Intel's. For desktop, I can safely say Intel needs at least a two-node advantage over AMD to be on the same level right now, if we also take power consumption into consideration.

ARM has very few advantages over AMD's chips; the main one is that the architecture is somewhat free for tweaking. Apart from that, ARM is still pretty far away, except in "gazillions of low-power cores" scenarios, which AMD will largely address with C-cores.
I need to get to work, but I wanted to point out your 7950X comment. Even this site has a recent review pointing out that the 7950X is a poor choice for gaming and that the 7800X or 7800X3D are far better choices. The latency incurred when crossing dies to another chiplet isn't an issue for server software designed to deal with that, but it can seriously impact games. It's why Intel's CPUs currently lead most gaming benchmarks: power hungry and hot, but fast and all on a single die. For gaming you don't want a CPU spread across multiple chiplets (a 12-16-core 7900X or 7950X). Quoting Tom's Hardware: "The inter-core latencies within the L3 cache range from between 15 ns and 19 ns. The inter-core latencies between different cores within different parts of the CCD show a larger latency penalty of up to 79.5 ns."
That's a significant latency jump and something they need to work on.
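(A trivial sketch of the penalty those quoted numbers imply — taking the midpoint of the 15-19 ns same-CCD range is my own assumption:)

```python
# Rough cross-CCD penalty implied by the Tom's Hardware figures quoted
# above; the midpoint of the 15-19 ns same-CCD range is an assumption
# made here for illustration.
same_ccd_ns = (15 + 19) / 2   # L3 inter-core latency, same CCD
cross_ccd_ns = 79.5           # worst-case latency across CCDs

print(f"Cross-CCD hop is ~{cross_ccd_ns / same_ccd_ns:.1f}x slower")  # ~4.7x
```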

Regardless, for a primary gaming machine I'd go for an 8-core Zen or an Intel 13000 series.
 
They might have better hardware, but they still have quite a way to go to hold more market share than Intel, either in servers or consumer gear. Fab output is one area where that can be seen. AMD (via TSMC) simply doesn't have the capacity to take a majority share of the server market even if the buyers were there. A big part of their allocation was going to the consoles (that's likely to lessen as servers and consumer-grade stuff all move to 5nm and below).

Intel still holds the majority of the laptop market as well.

Don't get me wrong, AMD is great; I have a Ryzen machine (as well as an Intel one). But AMD is definitely still playing catch-up, and Intel is working hard and fast to recapture lost market share. Intel's Foveros die packaging is quite a bit different from AMD's approach and likely to work much better in mobile segments and consumer-level systems (I'm not sure how they plan to leverage it in servers, but it won't have the same obvious advantages there, because chip-to-chip latency is usually less of a problem).

A good example of what I mean can be seen by comparing game performance between a 7950X and a 7800X. As soon as a game has to go off-die to another chiplet, it takes a significant latency penalty, and it shows in performance. Intel likely won't see the same issue, since their packaging technique is vertical die stacking, so it's still all one stack rather than multiple dies/chiplets spread around a single processor that have to communicate over the much slower Infinity Fabric.
Intel has the majority of the laptop market, but not because Intel has better products. It's purely a mindset thing, nothing else. If Intel loses that mindset advantage and is forced to compete with AMD just by making better products, AMD drives off over the horizon very quickly.

We'll see about Foveros. So far it has not proven to be anything revolutionary, and it has suffered delays just like almost everything else Intel has.

Exactly. AMD's desktop solution is meant to be cheap and/or to save valuable die space, not to be fast. There are certain scenarios where MCM loses to monolithic on speed.
I need to get to work, but I wanted to point out your 7950X comment. Even this site has a recent review pointing out that the 7950X is a poor choice for gaming and that the 7800X or 7800X3D are far better choices. The latency incurred when crossing dies to another chiplet isn't an issue for server software designed to deal with that, but it can seriously impact games. It's why Intel's CPUs currently lead most gaming benchmarks: power hungry and hot, but fast and all on a single die. For gaming you don't want a CPU spread across multiple chiplets (a 12-16-core 7900X or 7950X). Quoting Tom's Hardware: "The inter-core latencies within the L3 cache range from between 15 ns and 19 ns. The inter-core latencies between different cores within different parts of the CCD show a larger latency penalty of up to 79.5 ns."
That's a significant latency jump and something they need to work on.

Regardless, for a primary gaming machine I'd go for an 8-core Zen or an Intel 13000 series.
Intel leads in some games because the Golden Cove core is simply stronger than the Zen 4 core when it comes to pure speed, and many games still want strong single-thread performance. However, that performance advantage comes at a price: Golden Cove has simply pathetic power efficiency, and Intel can only put 8 of them on a single desktop CPU.

Inter-core latency doesn't have a lot to do with it, since not many games utilize more than 6 or 8 cores. The problem is much more single-thread performance. After all, Zen 4 was designed quite heavily with servers in mind. AMD knew they would be so far ahead of Intel on servers that sacrificing some efficiency for higher clock speeds made sense. While Zen 2 was designed purely for servers, Zen 4 is still designed mostly for servers.
 
Inflation makes people complain about everything, even without understanding the reality. Analyzing the cost of production, development, and long-term software support, it is very difficult to produce cheap low-end GPUs and make a profit. AMD has to sell a few million GPUs just to pay the design cost.
My main gripe about it is that it shouldn't have been the price it was; just because they could get away with it doesn't mean they should. It was such a heavily cut-down chip that the die is only 107mm². The RX 480 was 232mm² and had the option of coming with more RAM.
AMD knows full well that 4GB of RAM isn't enough - they're the ones that wrote an article about it, after all. An article they deleted shortly after releasing the RX 6500 XT.

No media encoding: forgivable.
The PCIe 4.0 x4 link bottlenecking on PCIe 3.0: unfortunate, especially with their own value offerings at the time, but understandable; redesigning that would likely be a bit of an undertaking.

The performance of basically a 4GB RX 480, at the same price as that card, but released 6 years later. Even accounting for 20% inflation and a more expensive node (heavily offset by a much smaller die), there's no way that card should have cost that much. Six years of so-called progress in a 'fast-moving market', and the product in the same price class is arguably worse.
It should have cost less and come with more RAM.
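(A rough sketch of what the 20% inflation point works out to, assuming a hypothetical $199 launch price — that price is my assumption, not stated above:)

```python
# What 20% inflation (the figure used above) does to a launch price.
# The $199 starting price is a hypothetical assumption added here.
launch_price_2016 = 199
inflation = 0.20

adjusted = launch_price_2016 * (1 + inflation)
print(f"Inflation-adjusted price: ~${adjusted:.0f}")  # ~$239
```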

They saw an easy opportunity to make money in a market that was desperate for anything affordable. I do wonder how much effort/cost it would have taken to add more PCIe lanes and more RAM to the RX 6500 XT versus simply shrinking the RX 480 (580) from 14nm down to the 7nm node. Or just restarting RX 580 production; pretty sure Nvidia was selling plenty of newly made 1600-series cards.
 
My main gripe about it is that it shouldn't have been the price it was; just because they could get away with it doesn't mean they should. It was such a heavily cut-down chip that the die is only 107mm². The RX 480 was 232mm² and had the option of coming with more RAM.
AMD knows full well that 4GB of RAM isn't enough - they're the ones that wrote an article about it, after all. An article they deleted shortly after releasing the RX 6500 XT.

No media encoding: forgivable.
The PCIe 4.0 x4 link bottlenecking on PCIe 3.0: unfortunate, especially with their own value offerings at the time, but understandable; redesigning that would likely be a bit of an undertaking.

The performance of basically a 4GB RX 480, at the same price as that card, but released 6 years later. Even accounting for 20% inflation and a more expensive node (heavily offset by a much smaller die), there's no way that card should have cost that much. Six years of so-called progress in a 'fast-moving market', and the product in the same price class is arguably worse.
It should have cost less and come with more RAM.

They saw an easy opportunity to make money in a market that was desperate for anything affordable. I do wonder how much effort/cost it would have taken to add more PCIe lanes and more RAM to the RX 6500 XT versus simply shrinking the RX 480 (580) from 14nm down to the 7nm node. Or just restarting RX 580 production; pretty sure Nvidia was selling plenty of newly made 1600-series cards.
The RX 480 won't even launch some recent games because it doesn't have full DX12 support... I recommend you read these articles to understand for yourself how expensive chip design can be:

"according to an online marketplace called Digi-Key, GDDR6 chips from Micron are now selling for around US$13-16 per GB"


"On a foundry process node, at 90nm to 45nm, mask sets cost on the order of hundreds of thousands of dollars. At 28nm it moves beyond $1M. With 7nm, the cost increases beyond $10M, and now, as we cross the 3nm barrier, mask sets will begin to push into the $40M range."

https://www.semianalysis.com/p/the-dark-side-of-the-semiconductor

"While a 7nm chip design is predicted to cost $223.3 million, a 5nm design represents $463.3 million in expenses. That number skyrockets to $650 million for a next-generation 3nm chip design."

 
My main gripe about it is that it shouldn't have been the price it was; just because they could get away with it doesn't mean they should. It was such a heavily cut-down chip that the die is only 107mm². The RX 480 was 232mm² and had the option of coming with more RAM.
AMD knows full well that 4GB of RAM isn't enough - they're the ones that wrote an article about it, after all. An article they deleted shortly after releasing the RX 6500 XT.

No media encoding: forgivable.
The PCIe 4.0 x4 link bottlenecking on PCIe 3.0: unfortunate, especially with their own value offerings at the time, but understandable; redesigning that would likely be a bit of an undertaking.

The performance of basically a 4GB RX 480, at the same price as that card, but released 6 years later. Even accounting for 20% inflation and a more expensive node (heavily offset by a much smaller die), there's no way that card should have cost that much. Six years of so-called progress in a 'fast-moving market', and the product in the same price class is arguably worse.
It should have cost less and come with more RAM.

They saw an easy opportunity to make money in a market that was desperate for anything affordable. I do wonder how much effort/cost it would have taken to add more PCIe lanes and more RAM to the RX 6500 XT versus simply shrinking the RX 480 (580) from 14nm down to the 7nm node. Or just restarting RX 580 production; pretty sure Nvidia was selling plenty of newly made 1600-series cards.
Some additions to WhiteLeaff's reply:

The 6500 XT is a mobile chip and was designed years before release. That explains the missing features and the cut-down PCIe link, as it was only ever supposed to be paired with a PCIe 4.0 APU. Redesigning it to have more PCIe lanes would take at least a year. Also, in mid-2020 AMD had no idea what the situation would be at the end of 2021, so that redesign might have been a total waste, both design-wise and die-size-wise.

The RX 480/RX 580 was made at GlobalFoundries, where AMD will end all production at the end of next year. It's unknown exactly what the WSA says about wafer prices at GF, but considering the die size too, an RX 480/RX 580 on 14nm was not going to be cheap - if there even was spare capacity at GF.
 