AMD patent potentially describes future GPU chiplet design

Daniel Sims

Why it matters: Chiplets are gaining prominence in processor design. AMD has embraced them for its CPUs, and Intel will soon follow suit. Although GPUs have been slow to migrate to chiplets, AMD kickstarted the shift with its RDNA 3 lineup, and a new patent suggests the company plans to push onward with future Radeon generations.

AMD recently submitted a patent describing a complex chiplet-based design that may (or may not) drive future graphics card lineups. It could signify the beginning of a radical change in GPU design similar to what's currently occurring with CPUs.

A chiplet-based design splits a processor into multiple smaller dies, each possibly specializing in tasks like logic, graphics, memory, or something else. This has multiple advantages compared to traditional monolithic products that pack everything into one large chip.

Also read: What Are Chiplets and Why They Are So Important for the Future of Processors

Chiplets can lower manufacturing costs by allowing companies to mix different process nodes into the same product, using older and cheaper components in places where cutting-edge transistors aren't necessary. Also, smaller dies result in higher yields and fewer defective chips. Moreover, splitting up the chips enables greater flexibility in processor design, possibly creating room for additional dies.
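
To put a rough number on the yield point, here is a back-of-the-envelope sketch using the simple Poisson defect model (yield ≈ e^(-D·A)); the defect density is an assumed figure chosen only for illustration, not a real foundry number.

```python
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson defect model."""
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.001  # assumed defect density (defects per mm^2), illustrative only

print(f"600 mm^2 monolithic die: {poisson_yield(600, D):.1%} defect-free")  # ~54.9%
print(f"150 mm^2 chiplet:        {poisson_yield(150, D):.1%} defect-free")  # ~86.1%
```

A single defect scraps an entire large monolithic die but only one small chiplet, so far less good silicon is thrown away per defect.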


AMD fully committed to chiplets beginning with Zen 2 CPUs, and the reorganization enabled the rise of Ryzen 9-class products. Intel will introduce its take on chiplets (dubbed "tiles") when the Meteor Lake CPU series debuts later this month.

While graphics cards might not benefit as much from chiplets at first, AMD took some initial steps with the Radeon 7000 series. The chiplet-based models in that lineup pair a large graphics compute die with several memory cache dies.

The patent, titled "Distributed Geometry," describes a system that splits graphics tasks across multiple dies without a central unit directing the others. Questions remain about how the dies would stay in sync. Of course, patents don't always become actual products, but the document hints at AMD's commitment to a chiplet-based future. If AMD pursues the design outlined in the patent, it would likely first emerge in the RDNA 5 lineup, expected to appear in 2025.
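
The patent text isn't reproduced here, so purely as an illustration of the general idea (and emphatically not AMD's actual scheme), the sketch below shows one naive way geometry work could be divided among chiplets without a central dispatcher: each chiplet takes a strided share of the draw calls, transforms that geometry, and bins the resulting triangles by the screen-space tile that owns them. All names, the tile size, and the partitioning rule are hypothetical.

```python
# Hypothetical illustration only -- not taken from AMD's patent.
NUM_CHIPLETS = 4
TILE_SIZE = 64  # pixels per screen-space tile (assumed)

def owning_chiplet(screen_x: int, screen_y: int) -> int:
    """Static tile ownership means no central unit has to hand out work."""
    tile_x, tile_y = screen_x // TILE_SIZE, screen_y // TILE_SIZE
    return (tile_x + tile_y) % NUM_CHIPLETS

def process_geometry(chiplet_id: int, draw_calls: list) -> dict:
    """Each chiplet transforms its share of draws and bins triangles by owner."""
    bins = {i: [] for i in range(NUM_CHIPLETS)}
    for i, triangles in enumerate(draw_calls):
        if i % NUM_CHIPLETS != chiplet_id:
            continue  # another chiplet handles this draw call's geometry
        for tri in triangles:  # tri: dict with projected screen coordinates
            bins[owning_chiplet(tri["screen_x"], tri["screen_y"])].append(tri)
    return bins  # bins destined for other chiplets must cross the interconnect
```

The last step is where the synchronization question comes in: triangles binned for another chiplet have to cross the die-to-die interconnect before rasterization can finish.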

RDNA 4 is expected to launch sometime next year, but if rumors prove true, the GPU family will only consist of mid-range cards headlined by a model that could outperform the $900 Radeon RX 7900 XT (and Nvidia's GeForce RTX 4080) for about half the price.

Meanwhile, Nvidia is rumored to be pursuing chiplets for the compute GPUs in its upcoming GeForce RTX 5000 series, but consumer products will likely remain monolithic.


 
I am curious how the fabric/ring bus/chiplet interconnect will affect frame time. Once you go to chiplets, you incur latency no matter what you do...
 
Jesus, AMD.

All you need to do to get back in the race is to achieve parity or better with Nvidia in RT performance.

Do that and the rest will follow.
 
I am curious how the fabric/ring bus/chiplet interconnect will affect frame time. Once you go to chiplets, you incur latency no matter what you do...
It'll be an HBM per chiplet arrangement. RAM will be all in-package. Take a look at the MI300X for the big daddy version.
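
For a rough sense of scale on the latency question (every figure below is an assumption, purely for illustration, not a measurement of any real interconnect):

```python
# Illustrative arithmetic only; the latency and hop counts are assumed values.
hop_latency_ns = 150               # assumed one-way die-to-die hop latency
dependent_hops_per_frame = 10_000  # assumed serialized hops on the critical path

added_ms = hop_latency_ns * dependent_hops_per_frame / 1e6
for fps in (60, 120, 240):
    budget_ms = 1000.0 / fps
    print(f"{fps:3d} fps: {budget_ms:5.2f} ms budget, "
          f"assumed interconnect cost {added_ms:.2f} ms "
          f"({added_ms / budget_ms:.0%} of the frame)")
```

The takeaway is that a single hop is tiny next to a frame budget; what matters is how many hops end up serialized on the critical path, which is exactly what the design has to hide with batching and overlap.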
 
Jesus, AMD.

All you need to do to get back in the race is to achieve parity or better with Nvidia in RT performance.

Do that and the rest will follow.
They can, but most ray tracing is developed specifically for nVidia hardware. AMD can't make hardware optimizations without violating nVidia patents
 
They can, but most ray tracing is developed specifically for nVidia hardware. AMD can't make hardware optimizations without violating nVidia patents

And why is that? Correct me if I'm wrong, but from what I know, Nvidia basically just added fixed-function hardware to their GPUs and AMD did not. And that basically makes all the difference.

If AMD just wanted the RT performance crown, that would be effortless to achieve. However, that would also mean bigger chips. And that is why AMD won't do it.
 
And why is that? Correct me if I'm wrong, but from what I know, Nvidia basically just added fixed-function hardware to their GPUs and AMD did not. And that basically makes all the difference.

If AMD just wanted the RT performance crown, that would be effortless to achieve. However, that would also mean bigger chips. And that is why AMD won't do it.
Because it makes financial sense for any developer to optimize for nVidia hardware, because they have the market share. I understand that people see me as an nVidia hater and AMD apologist, or at least that's my take from the vocal minority rebutting my posts.

The thing is that AMD has its own hardware-optimized platform and people don't use it because it doesn't make financial sense. nVidia could open up things like CUDA, but that would mean giving up market share. What that means is it doesn't make financial sense for nVidia to let AMD optimize for their hardware.

I don't agree with how copyright law is enforced, but this is just the order of things and businesses have to navigate around it.
 
Because it makes financial sense for any developer to optimize for nVidia hardware, because they have the market share. I understand that people see me as an nVidia hater and AMD apologist, or at least that's my take from the vocal minority rebutting my posts.

The thing is that AMD has its own hardware-optimized platform and people don't use it because it doesn't make financial sense. nVidia could open up things like CUDA, but that would mean giving up market share. What that means is it doesn't make financial sense for nVidia to let AMD optimize for their hardware.

I don't agree with how copyright law is enforced, but this is just the order of things and businesses have to navigate around it.
From a developer's perspective, all RT on Sony and MS consoles runs on AMD hardware, not Nvidia's. That makes it pretty stupid to optimize for Nvidia unless the game is going to be PC-only or Nvidia pays for the optimization work.

Also, could someone say whether there is much difference between "Nvidia RT" and "AMD RT" from an optimization perspective? I doubt there is a massive difference.
 
From a developer's perspective, all RT on Sony and MS consoles runs on AMD hardware, not Nvidia's. That makes it pretty stupid to optimize for Nvidia unless the game is going to be PC-only or Nvidia pays for the optimization work.

Also, could someone say whether there is much difference between "Nvidia RT" and "AMD RT" from an optimization perspective? I doubt there is a massive difference.
Consoles aren't using ray tracing
 
Consoles aren't using ray tracing
What? There actually are multiple console games that use RT. Not to the same extent that PC titles do, but still. The Xbox Series X/S and PS5 all use the RDNA2 architecture, so that's not really surprising. It also partially explains why AMD didn't use a dedicated RT unit.
 
What? There actually are multiple console games that use RT. Not to the same extent that PC titles do, but still. The Xbox Series X/S and PS5 all use the RDNA2 architecture, so that's not really surprising. It also partially explains why AMD didn't use a dedicated RT unit.
They aren't using it en masse. Console hardware just can't handle it the way developers intended.
 
Of course not. They are consoles. Still, consoles use RT, and that is only because RDNA2 supports it. If RDNA2 lacked RT support, current-gen consoles would be totally without RT.
But consoles are the dominant market. They exceed PC sales nearly tenfold.
 
Software patents, algorithm patents, mathematical patents, whatever, in the USA are holding tech back.
None of that has any standing in NZ.
Patents should be for manufacturing processes and products.

Other laws can protect products like software.

Imagine if someone patented hitboxes in games, or took a fastest/best-route algorithm and patented it on a computer, on a cloud device, using BT, in the bathroom, down the sewer pipe.

Anyway, most RT research is done in universities etc., combining statistical models, AI, whoopee do.

Standards should always be open, with nominal fees at best.
 
They can, but most ray tracing is developed specifically for nVidia hardware. AMD can't make hardware optimizations without violating nVidia patents
AMD can absolutely make optimizations, they just can't use Nvidia IP to do so. Given that Intel managed to do it just fine, I don't buy that argument.

I think it's much more likely that AMD severely underestimated RT and its impact on sales. RDNA1 didn't have RT and was somewhat outdated on feature support vs Nvidia's 2000 series. When they added RT in RDNA2, it hit them like a truck that they were a full gen+ behind Nvidia. RDNA3 made effectively no changes; they doubled shader count, but CU for CU it is no faster than RDNA2.

Hopefully RDNA4 is a major adjustment. Given AMD does the groundwork for architectures years in advance, RDNA4 would be the first one designed with Nvidia's major lead in mind.
 
AMD's GPU division (RTG) is the underdog; they don't have the mindshare and they don't have the market share. Even within AMD, they're second by far to the CPU division, where AMD makes its big bucks selling 75 mm² chiplets for $300 (even an RX 7600 is 200 mm²). Their whole philosophy is to do more with less.

They built RT so that some of their shader units have some extra oomph to accelerate RT calculations. It was the safe bet: support RT, but don't put R&D and dev effort into fixed-function units. Keep die sizes small and make sure there is no wasted silicon, because that silicon could be sold as a high-margin CPU part instead. When raster workloads run, that RT silicon doesn't just sit idle, but when RT is used, the drop from raster performance is larger than on NV.

AMD has some very competent engineers, but the fiscal realities at AMD mean they have to design parts with a cost-first approach rather than NV's performance-first approach.

RDNA3 is really a very clever design. As end users we can ***** and moan about idle power consumption and clock speeds and whatnot, but it's the start of an incredible revolution in GPU tech and I salute AMD for it.
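
For a concrete sense of what "extra oomph in the shader units" versus dedicated fixed-function hardware means in practice: the hot loop of ray tracing is an enormous number of ray-box and ray-triangle intersection tests during BVH traversal. A minimal ray-vs-AABB slab test is sketched below (illustrative only, not any vendor's implementation); broadly, RDNA 2/3 accelerates the intersection test itself in hardware while the traversal loop around it runs as shader code, whereas fully fixed-function RT cores also handle the traversal.

```python
# Illustrative slab test only -- not vendor code. Assumes inv_dir = 1/direction
# is precomputed and has no zero components.
def ray_hits_aabb(origin, inv_dir, box_min, box_max) -> bool:
    """Return True if a ray intersects an axis-aligned bounding box."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))  # latest entry across the slabs
        t_far = min(t_far, max(t1, t2))    # earliest exit across the slabs
    return t_near <= t_far
```

The architectural question is how much of that loop lives in dedicated silicon versus general-purpose shader ALUs, which is exactly the cost-versus-area trade-off described above.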
 
Finally, someone will dare to rebel and innovate. Can someone explain why a die regulating workload distribution and synchronization among GCDs wouldn't be beneficial?
 
Yeah, maybe Radeon's disappointing next architecture number 6 will be competitive?
It's more about development costs and market segmentation, not architecture. AMD and Nvidia both use the same TSMC 5nm process on current-gen cards. Considering the 7900 XTX GCD is only 304.35 mm² and it's trading blows with the RTX 4090's whopping 609 mm², AMD could easily have beaten the RTX 4090 on every performance metric if they had just wanted to.

Cost is another question, but if AMD really wanted the performance crown this generation, they could have had it. In other words, an AMD GCD double the size of the current 7900 XTX's would be around 1.8-1.9 times faster than the 7900 XTX, something Nvidia would have no chance against.
 
It's more about development costs and market segmentation, not architecture. AMD and Nvidia both use the same TSMC 5nm process on current-gen cards. Considering the 7900 XTX GCD is only 304.35 mm² and it's trading blows with the RTX 4090's whopping 609 mm², AMD could easily have beaten the RTX 4090 on every performance metric if they had just wanted to.

Cost is another question, but if AMD really wanted the performance crown this generation, they could have had it. In other words, an AMD GCD double the size of the current 7900 XTX's would be around 1.8-1.9 times faster than the 7900 XTX, something Nvidia would have no chance against.
Doubling that die would also require doubling the memory controller area, leading to a total die area of over 1000 mm². That would be insanely expensive, even with chiplets, and would set a record for GPU size. It would also make the main GCD over 600 mm².

Now granted, AMD could have scaled the GCD up by 50% or so and still maintained decent yields. Given the 7900 series is selling well, it would have been nice to see.

Their rasterization performance is great now; it would have been nice to see that at launch to embarrass Nvidia's 4080. The 7900 would have sold a lot better in that case. Their RT performance, however, doesn't hold up as well. Some say it's due to their compiler being bugged; MW2 shows that its RT performance CAN be significantly better than RDNA2's, but the vast majority of games show no difference. Given they doubled shader count to achieve almost nothing, that lends credibility to something going wrong.

Hopefully RDNA4 fixes this.
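
A quick back-of-the-envelope check of that total, using the commonly cited Navi 31 figures (a roughly 304 mm² GCD plus six MCDs of about 37.5 mm² each; treat the numbers as approximate):

```python
# Rough area arithmetic for a hypothetical "doubled" Navi 31; figures approximate.
gcd_mm2 = 304.35   # Navi 31 graphics compute die (approx.)
mcd_mm2 = 37.5     # one memory cache die (approx.)
num_mcds = 6

navi31_total = gcd_mm2 + num_mcds * mcd_mm2           # ~529 mm^2
doubled_total = 2 * gcd_mm2 + 2 * num_mcds * mcd_mm2  # GCD and MCDs both doubled

print(f"Navi 31 total silicon:  ~{navi31_total:.0f} mm^2")
print(f"Doubled configuration:  ~{doubled_total:.0f} mm^2")  # ~1059 mm^2
```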
 
Doubling that die would also require doubling the memory controller area, leading to a total die area of over 1000 mm². That would be insanely expensive, even with chiplets, and would set a record for GPU size. It would also make the main GCD over 600 mm².

Now granted, AMD could have scaled the GCD up by 50% or so and still maintained decent yields. Given the 7900 series is selling well, it would have been nice to see.

Their rasterization performance is great now; it would have been nice to see that at launch to embarrass Nvidia's 4080. The 7900 would have sold a lot better in that case. Their RT performance, however, doesn't hold up as well. Some say it's due to their compiler being bugged; MW2 shows that its RT performance CAN be significantly better than RDNA2's, but the vast majority of games show no difference. Given they doubled shader count to achieve almost nothing, that lends credibility to something going wrong.

Hopefully RDNA4 fixes this.
Of course. That was just about the "could AMD get the performance crown if they just wanted to" scenario. And the answer is quite clearly yes, because AMD could always match Nvidia's GCD size, and at the same die size AMD would be much faster.

Price is another question entirely, and previously AMD has decided not to invest too much in a top-performance chip.

RT performance on RDNA 4 mostly depends on whether AMD will add a dedicated RT unit and/or fixed-function RT hardware. There's no information, so it's just guessing at this point.
 