AMD patents a chiplet GPU design quite unlike Nvidia's and Intel's

mongeese

Something to look forward to: AMD has published its first patent on chiplet GPU designs. In typical AMD fashion, it's trying not to rock the boat. Chiplet GPUs are just beginning to emerge. Intel has been forthright about its development process and confirmed the use of chiplets in its first-generation discrete GPUs. Nvidia, while coy about specifics, has published numerous research papers on the topic. AMD was the last holdout – which only adds to the intrigue.

Chiplets, as the name suggests, are smaller, less complex chips that are meant to work together to form more powerful processors. They're arguably the inevitable future for all high-performance components and, in some cases, the successful present; AMD's use of chiplet CPU designs has been brilliant.

In the new patent, dated December 31, AMD outlines a chiplet design fashioned to mimic a monolithic design as closely as possible. Its hypothetical model uses two chiplets connected by a high-speed passive interposer called a crosslink.

The crosslink connection sits between the L2 and L3 caches in the memory hierarchy. Everything beneath it, such as the cores and the L1 and L2 caches, is aware of its separation from the other chiplet. Everything above it, including the L3 cache and GDDR memory, is shared between the chiplets.
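
As a rough illustration (a sketch of our own, not from the patent), the split amounts to tagging each level of the hierarchy as chiplet-private or GPU-shared:

```python
# Minimal sketch of the split the crosslink creates in the hierarchy.
# Levels below the crosslink are private to each chiplet; levels above
# it are shared and coherent across the whole GPU.

CHIPLET_PRIVATE = {"WGP cores", "L1 cache", "L2 cache"}  # aware of the split
GPU_SHARED = {"L3 cache", "GDDR memory"}                 # shared via crosslink

def is_shared(level: str) -> bool:
    """Return True if a memory level sits above the crosslink."""
    return level in GPU_SHARED

assert not is_shared("L2 cache")  # L2 stays local to its chiplet
assert is_shared("L3 cache")      # L3 is visible to every chiplet
```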

This design is beneficial because it is conventional. AMD claims that compute units can access low-level cache on other chiplets almost as fast as they can access local low-level cache. Should that prove true, software won't need updating.

The same cannot be said of Intel's and Nvidia's designs. Intel intends to use two new technologies, EMIB (embedded multi-die interconnect bridge) and Foveros. The latter is an active interposer that uses through-silicon vias, something AMD explicitly states it will not use. Intel's design lets the GPU house a system-accessible cache that powers a new memory fabric.

Nvidia has not disclosed everything, but it has indicated a few directions it might pursue. A research paper from 2017 describes a four-chiplet design with a NUMA-aware (non-uniform memory access) and locality-aware architecture. It also experiments with a new L1.5 cache, which exclusively holds remote data accesses and is bypassed during local memory accesses.
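
As a hedged sketch of that idea (the class and names here are our invention, not Nvidia's), the L1.5 behaves like a cache that only ever fills on remote traffic:

```python
# Hypothetical model of the 2017 paper's "L1.5" cache: it holds only
# data fetched from remote chiplets and is bypassed by local accesses.

class L15Cache:
    def __init__(self):
        self.lines = {}  # address -> cached remote data

    def read(self, addr, is_local, fetch):
        if is_local:
            return fetch(addr)  # local accesses skip the L1.5 entirely
        if addr not in self.lines:
            self.lines[addr] = fetch(addr)  # cache the slow remote fill
        return self.lines[addr]
```

Keeping local traffic out of the cache reserves its capacity for the accesses that actually pay a cross-chiplet latency penalty.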

AMD's approach might sound the least imaginative, but it also sounds practical. And if history has proven anything, it's that developer-friendliness is a huge advantage.

Below are additional diagrams from the patent.

Figure 2 is a cross-sectional view that descends from two chiplets to the circuit board. The two chiplets (106-1 and 106-2) are stacked vertically on the passive crosslink (118) and use dedicated conductor structures to access the crosslink's traces (206) and subsequently communicate with each other. Conductor structures not attached to the crosslink (204) connect to the circuit board for power and other signaling.

Figure 3 depicts the cache hierarchy. WGPs (work group processors) (302), which are collections of shader cores, and GFXs (fixed-function units) (304), which are dedicated processors for singular purposes, connect directly to a channel's L1 cache (306). Each chiplet contains multiple L2 cache banks (308) that are individually addressable and coherent within a single chiplet. Each chiplet also contains multiple L3 cache banks (310) that are coherent across the whole GPU.

The GDF (graphics data fabric) (314) connects the L1 cache banks to the L2 cache banks. The SDF (scalable data fabric) (316) combines the L2 cache banks and connects them to the crosslink (118). The crosslink connects to the SDFs on all the chiplets, as well as the L3 cache banks on all the chiplets. The GDDR memory lanes (written as Memory PHY) (312) connect to L3 cache banks.

As an example, if a WGP on one chiplet required data from a GDDR bank on another chiplet, that data would be sent to an L3 cache bank, then over the crosslink to an SDF, then to an L2 bank, and finally, through a GDF to an L1 bank.
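
Laid out step by step (the hop descriptions are ours; the reference numerals are the patent's), that round trip reads:

```python
# Hypothetical trace of the hop sequence described above: a WGP on one
# chiplet reading data that resides in a GDDR bank on the other chiplet.

REMOTE_READ_PATH = [
    "GDDR bank via Memory PHY (312) on the remote chiplet",
    "L3 cache bank (310), coherent across the GPU",
    "crosslink (118)",
    "SDF (316) on the requesting chiplet",
    "L2 cache bank (308)",
    "GDF (314)",
    "L1 cache (306), feeding the requesting WGP (302)",
]

for hop in REMOTE_READ_PATH:
    print("->", hop)
```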

Figure 4 is a bird's-eye view of one chiplet, showing the potential locations and scales of the various components more accurately. The HBX controller (404) manages the crosslink, which the chiplet connects to through HBX PHY (406) conductors. The small square in the bottom-left corner (408) is a potential additional crosslink connection for linking more chiplets.


 
There will probably be latency/speed vs. cost trade-offs in the first one or two iterations.

So I suspect that, on one hand, performance will be lower vs. single-die solutions, but the advantages will be better yields and a lot lower cost - you just need one die vs. the multiple ones that are necessary now.

Low-end GPU = one chiplet, mid-range two, ...

Bonus points if GPU dies can be combined with accelerator dies.
 
This would mean cheaper graphics cards and better margins for AMD as a whole. They need to fast-track this if they want to gain any ground on Nvidia.

Agreed, although they have just gained tremendous ground on Nvidia, at least in rasterization performance. When was the last time they competed on the high end?
 
There will probably be latency/speed vs. cost trade-offs in the first one or two iterations.

So I suspect that, on one hand, performance will be lower vs. single-die solutions, but the advantages will be better yields and a lot lower cost - you just need one die vs. the multiple ones that are necessary now.

Low-end GPU = one chiplet, mid-range two, ...

Bonus points if GPU dies can be combined with accelerator dies.

They can have more stream processors to give more potential, since they are chiplets and don't have to be monolithic. So that might help performance as well.
 
Good article!

It is more or less obvious that chiplet designs are not here to stay for a prolonged period, meaning ten years at maximum for mobile and home computer chips. They are an innovation for the current day, when single dies are hard to improve, but the cost will be more actual silicon used for relatively less computational power due to the fabrics/links. Potentially some chips could get massive depending on how much enthusiastic customers are willing to pay; since many low-powered chips in one chiplet design is a natural result of this innovation, you just need to make the caches bigger.

I'm not extremely enthusiastic about chiplet designs; rather, I view them as a satisfying solution for now. It is hard to say whose solution is the best, so I'm not willing to place any bets, but it seems Intel's approach is more ambitious, which could somehow benefit the whole system, if I understood correctly. They might want to make communication between the CPU and GPU faster.

Interesting to see how the actual products will turn out, but I already assume latency-dependent tasks are going to suffer to some degree, while other tasks will benefit greatly.
 
Intel started "chiplet" R&D, and they alone deserve recognition for their engineering development.
 
Lol, she didn't invent anything or do anything against Nvidia. People still prefer Nvidia over AMD; AMD hasn't changed anything. They may be challenging Intel, but Nvidia is a whole other ball game that they aren't even close to.
High-end RDNA2 graphics cards have only just been released.
AMD will gain market share.
People are creatures of habit. It will take some time.
 
Patent = something which is easy to think of but hard to avoid.

Their purpose is to eliminate competition not with higher-quality products but with legal procedures. The result is fewer innovations, lower quality, and more expensive products.

So patents should be illegal.
If someone can produce, for example, an exact copy of a CPU and offer it at a lower price, that should be allowed, no questions asked. There are 8 billion people on the planet, and because of patents we have to wait for a few hundred of them to design and produce electronic chips.
 
Patent = something which is easy to think of but hard to avoid.

Their purpose is to eliminate competition not with higher-quality products but with legal procedures. The result is fewer innovations, lower quality, and more expensive products.

So patents should be illegal.
If someone can produce, for example, an exact copy of a CPU and offer it at a lower price, that should be allowed, no questions asked. There are 8 billion people on the planet, and because of patents we have to wait for a few hundred of them to design and produce electronic chips.
What you're proposing is definitely the end of innovation. If everything you come up with is free for anyone else with bigger resources to steal, no individual will want to innovate.
Patents are necessary; abusing them is an unavoidable side effect, just like with any other regulation or law. We do have courts to decide whether someone is abusing patents that are only designed to prevent competition.
 
If everything you come up with is free for anyone else with bigger resources to steal, no individual will want to innovate.
That's not true; patents stifle innovation. Patents limit the ability of those who don't hold them to innovate. It is one of the reasons why monopolies are illegal.
 
What you're proposing is definitely the end of innovation.
Back in the 486 era, when AMD and Intel competed freely, it was that pressure which gave us the Pentium design.
But because Intel had a patent on that design, AMD became a zombie (until the recent resurrection, decades later), and we still have the same Pentium design today, with just a few more instructions and higher clock frequencies.

If Intel had patented the 486, we would never have had the Pentium; we would still have 2-core, 32-bit processors at higher clock frequencies, and AMD wouldn't exist. The same applies to other sectors like drugs, software, etc. A person has 8 hours a day of productive time; if you limit the people via patents, you limit the total sum of time of thought...
 
Back in the 486 era, when AMD and Intel competed freely, it was that pressure which gave us the Pentium design.
But because Intel had a patent on that design, AMD became a zombie (until the recent resurrection, decades later), and we still have the same Pentium design today, with just a few more instructions and higher clock frequencies.

If Intel had patented the 486, we would never have had the Pentium; we would still have 2-core, 32-bit processors at higher clock frequencies, and AMD wouldn't exist. The same applies to other sectors like drugs, software, etc. A person has 8 hours a day of productive time; if you limit the people via patents, you limit the total sum of time of thought...
That's not exactly true; Intel was never the only game in town for compute patents. The need for more powerful PCs is always growing, so eventually someone, like ARM, would step up and meet those needs. And this isn't just a "let the economy sort things out" argument - this is something that has predominantly been driven by military needs. Intel does not own the patents to all processing; people would have developed architectures outside of x86/x64, in the same way that ARM and PowerPC (remember that one?) have. Yes, things would need to be recompiled to run on other hardware, but it's not like it's impossible. A Raspberry Pi is a lot faster than a Core 2 Duo at this point, after all.

 
Agreed, although they have just gained tremendous ground on Nvidia, at least in rasterization performance. When was the last time they competed on the high end?

Why? I don't see that at all. The way I see it, AMD sacrificed ray tracing performance, focused on rasterization, and still only managed to match Nvidia. Shouldn't you be trouncing Nvidia when you don't waste die area on ray tracing and AI-related silicon? Nvidia is competent in all three areas, so how exactly is AMD competing with Nvidia on the high end?

And where I am from, the 6000 series cards are both more expensive and their stock nonexistent. The only 6800 (not XT) I can find costs 1,100 EUR...
 
Why? I don't see that at all. The way I see it, AMD sacrificed ray tracing performance, focused on rasterization, and still only managed to match Nvidia. Shouldn't you be trouncing Nvidia when you don't waste die area on ray tracing and AI-related silicon? Nvidia is competent in all three areas, so how exactly is AMD competing with Nvidia on the high end?

This whole "AMD is bad on ray tracing" thing comes from few Nvidia optimized titles. So far there is no real evidence AMD's ray tracing is weaker than Nvidia's.

Also keep in mind that Nvidia uses GDDR6X, which has pretty serious availability problems (only one company makes it), while AMD uses GDDR6, which is widely available. AMD's chip is also smaller than Nvidia's, despite the huge number of transistors that went into Infinity Cache, and AMD's cards consume less power than Nvidia's, partly because of that cache.
 
It seems the patent doesn't discuss where the gigathread engine (in Nvidia speak) or graphics command processor (in AMD speak) will go, and how it will work. Having these parts distributed over the dies will only further increase overheads. This patent may not be intended for gaming GPUs.
 