AMD patent unveils multi-chiplet GPU designs for unrivaled flexibility

zohaibahd

Staff
Forward-looking: Chipmakers are getting serious about splitting up complex chip designs across multiple smaller "chiplets." AMD is leading the charge, having already implemented multi-chiplet architectures for its CPUs and data center GPUs. The company's latest RDNA 3 PC graphics cards even incorporate basic chiplets. However, a new patent filing shows these designs have not realized their full potential.

The December 2022 patent describes a GPU design split across multiple GPU chiplet sets. Each set pairs a front-end die with several shader-engine dies. The clever part is that these chiplet sets can be flexibly combined in various configurations.

For instance, the chiplet sets can work together as a single, unified GPU, operating like a traditional monolithic design. Alternatively, AMD can split the sets into distinct groups, each functioning as an independent GPU. There's even a hybrid mode where some chiplet sets act as a unified GPU while others operate independently.
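Purely as a mental model, the three operating modes can be sketched in a few lines of Python. None of this is AMD's actual implementation; the class and function names, and the choice of four sets with three shader engines each, are invented for illustration.

```python
# Illustrative sketch of the patent's three operating modes; all names
# and numbers here are invented, not taken from AMD's design.
from dataclasses import dataclass

@dataclass
class ChipletSet:
    """One front-end die plus its attached shader-engine dies."""
    set_id: int
    shader_engines: int

def configure(sets, mode, groups=None):
    """Return a list of logical GPUs, each a list of ChipletSets.

    mode: 'unified'     -> all sets act as one monolithic-style GPU
          'independent' -> every set is its own GPU
          'hybrid'      -> caller supplies explicit groupings
    """
    if mode == "unified":
        return [list(sets)]
    if mode == "independent":
        return [[s] for s in sets]
    if mode == "hybrid":
        return [[sets[i] for i in group] for group in groups]
    raise ValueError(f"unknown mode: {mode}")

sets = [ChipletSet(i, shader_engines=3) for i in range(4)]
print(len(configure(sets, "unified")))                            # 1
print(len(configure(sets, "independent")))                        # 4
print(len(configure(sets, "hybrid", groups=[[0, 1], [2], [3]])))  # 3
```

The same four chiplet sets yield one logical GPU, four, or (here) three, depending only on how they are grouped, which is essentially the flexibility the patent claims.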

This modular GPU design has several benefits. For one, it allows scaling up or down GPU resources and performance based on product needs or operating modes.

"By dividing the GPU into multiple GPU chiplets, the processing system flexibly and cost-effectively configures an amount of active GPU physical resources based on an operating mode. In addition, a configurable number of GPU chiplets are assembled into a single GPU, such that multiple different GPUs having different numbers of GPU chiplets can be assembled using a small number of tape-outs and a multiple-die GPU can be constructed out of GPU chiplets that implement varying generations of technology."

This chiplet design could also offer cost optimizations by using different dies produced on a mix of process nodes. Only the most critical components, like shader cores, need to be fabbed on expensive leading-edge process technologies. The supporting frontend logic could live on cheaper, older silicon.
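The cost argument is easy to see with some toy arithmetic. Every number below is invented purely for illustration; real wafer economics involve yield, packaging cost, interconnect overhead, and much more.

```python
# Hypothetical cost sketch: every figure here is invented for illustration.
# The idea from the article: fab only the shader dies on the expensive
# leading-edge node and keep the front-end logic on a cheaper, mature node.
COST_CENTS_PER_MM2 = {"leading_edge": 30, "mature": 10}  # invented prices

def gpu_cost_cents(shader_mm2, frontend_mm2, split):
    """Compare a monolithic die against a two-node chiplet split."""
    if split:
        # Chiplet approach: only the shader dies use the expensive node.
        return (shader_mm2 * COST_CENTS_PER_MM2["leading_edge"]
                + frontend_mm2 * COST_CENTS_PER_MM2["mature"])
    # Monolithic: the whole die sits on the leading-edge node.
    return (shader_mm2 + frontend_mm2) * COST_CENTS_PER_MM2["leading_edge"]

print(gpu_cost_cents(300, 100, split=False))  # 12000
print(gpu_cost_cents(300, 100, split=True))   # 10000
```

Under these made-up prices, moving a 100 mm² front end to the mature node saves the difference between the two node costs on that area, and the saving scales with however much of the design can be moved off the leading edge.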

Of course, AMD has already adopted a rudimentary version of this chiplet philosophy in its current RDNA 3 GPUs. However, those are relatively simple designs using two types of chiplets: a primary GPU die and memory cache dies. Rumors indicate that Nvidia is also working on chiplets for the compute GPUs in its upcoming GeForce RTX 5000 series.

However, the patent looks to take things further by enabling various chiplet combinations and configurations. It's unclear exactly when or where we might see this patent turn into reality; not all patents see the light of day. Still, it aligns with recent industry trends that point to a transition to disaggregated chip designs. Red Team has also experimented with other unique chiplet designs: a newer patent filed in August 2023 describes a multi-die GPU design with no central processor directing the various chiplets.

Speaking of patents, when will they include the neural network part in FSR, as the 2019 patent showed? That's what FSR lacks to achieve the final quality it could deliver. With RDNA 3 it can be done; of course, it would be a specific version of FSR, not functional on previous GPUs.
 
Unlike 3dfx, AMD is still in business

3dfx today is known as Nvidia (that's who owns them). The Radeon division may be more like 3dfx than we think if sales continue the way they have. Back then, 3dfx had the features and advancements gamers were looking for, and even some demand (to an extent), but they lacked the funding and the manufacturing ability to realize their vision. AMD has raw (raster) power but lacks some of the modern features that sell Nvidia cards, and judging by the sales numbers, sell them pretty well at that.
 
Okay, now do the drivers.
Glad to see this old can is still being kicked down the road.

I’ve not run into problems with their Radeon drivers in the two years I’ve had my 6950XT. Games seem to perform ok (Nvidia optimisation notwithstanding) and nothing is crashing, so they’re a lot better than they were 8-10 years ago.
 
July 2030: AMD blames inefficiencies between the front-end scheduler and the shader engines located on the n+(x) dies of the multi-chiplet packages used in the upper-mid to high-end tiers of its RDNA7 generation for disappointing performance results in recent games, as brought to light last month in highly critical videos from the popular YouTube tech channels Gamers Nexus and Hardware Unboxed.

In a five page slide deck issued as part of its response to public concern, AMD highlights developer outreach as key to overcoming the challenges that some have discovered when attempting to extract optimal performance from AMD's latest GPU architectures. Multi-chiplet technology, common practice now for some years and often touted as a great solution for meeting customer expectation of ever increasing performance in graphics, computational, and AI tasks (among others), has garnered something of a reputation for overpromising and underdelivering when compared to competing solutions.

However, AMD CEO Lisa Su continues to express confidence that the company's efforts are on the right track, pointing to the benefits of lower production costs as well as expected improvements in future GPU generations, such as its upcoming line of refreshes provisionally known as RDNA7+. And despite recently falling to a new record low of 4% market share in Jon Peddie's desktop GPU sales data, Su played up a lower-than-predicted decrease in shipments in its most recent quarter as a 'highly encouraging development'.
 
Glad to see this old can is still being kicked down the road.

I’ve not run into problems with their Radeon drivers in the two years I’ve had my 6950XT. Games seem to perform ok (Nvidia optimisation notwithstanding) and nothing is crashing, so they’re a lot better than they were 8-10 years ago.

Yeah... I haven't had a single driver problem in recent years. Given the diversity of software/game combinations and hardware, it's clear that we'll never be completely free of driver bugs, but these days AMD has been much more solid than half a decade ago.
 
The chiplet concept is fine and all, but it's often held back by the interconnects between the chiplets. When and if AMD or any other company can ever get the same bandwidth as a monolithic design, then it'll be worth the effort; until then I'll give it a pass.
 
The chiplet concept is fine and all, but it's often held back by the interconnects between the chiplets. When and if AMD or any other company can ever get the same bandwidth as a monolithic design, then it'll be worth the effort; until then I'll give it a pass.
I honestly don't think AMD's Infinity Fabric interconnect between chips is too far off being quick enough.

Their fastest consumer inter-package Infinity Fabric speed is slow, at around 9.2 Gb/s on RDNA 3 graphics cards. In their datacentre cards it's a staggering 400 GB/s each way, going all the way up to 896 GB/s each way on their highest-end Instinct MI card (which costs about £30,000).
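Some back-of-the-envelope Python puts those numbers in perspective. The link speeds are taken from the comment above at face value (Gb = gigabits, GB = gigabytes), and the 4K frame-buffer payload is just an illustrative choice of workload:

```python
# Rough transfer-time math using the link speeds quoted above, taken at
# face value; the 4K RGBA frame buffer is an arbitrary example payload.
consumer_GBps = 9.2 / 8   # 9.2 Gb/s is roughly 1.15 GB/s
instinct_GBps = 896.0     # GB/s each way on the top Instinct part

frame_GB = 3840 * 2160 * 4 / 1e9  # one 4K RGBA frame, ~33.2 MB

print(f"{frame_GB / consumer_GBps * 1e3:.1f} ms")  # ~28.9 ms per frame
print(f"{frame_GB / instinct_GBps * 1e6:.1f} us")  # ~37.0 us per frame
```

At the consumer figure, shipping a single frame across the link would eat several frames' worth of time at 60 fps, while the datacentre link does it in tens of microseconds, which is roughly the gap the commenter is pointing at.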

Long way to go before us mere consumers could afford such things, but it might trickle down in part some years from now?
 