Intel patent hints at future chiplet-based GPU designs

zohaibahd

Editor's take: Intel's patent filing provides a glimmer of hope that we may one day see disaggregated GPUs become a reality, but it's still early days. The company's upcoming Arc Battlemage architecture, slated for early 2025, will likely feature a monolithic design once again. However, the exploration of chiplet approaches could lay the groundwork for future GPU generations that take advantage of this cutting-edge tech.

According to a recently discovered patent filing, Intel is exploring disaggregated architectures, which divide a GPU design into smaller, specialized "chiplets."

For those not familiar with the term, disaggregated, or chiplet-based, architectures represent a departure from traditional monolithic GPU designs. Instead of housing everything on a single, massive chip, the various components are split across multiple smaller chiplets, each tailored for specific tasks like computing, graphics rendering, or AI workloads. These chiplets are then interconnected using cutting-edge packaging technologies.
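To make the contrast concrete, here is a toy Python sketch, not taken from the patent and with every chiplet name and die area invented, that models a monolithic die versus the same functions split across specialized chiplets on a shared package:

```python
from dataclasses import dataclass, field

@dataclass
class Chiplet:
    """One specialized die in a disaggregated package (illustrative only)."""
    name: str
    role: str          # e.g. "compute", "render", "ai", "memory-io"
    area_mm2: float

@dataclass
class GpuPackage:
    """A GPU assembled from chiplets linked by a package-level interconnect."""
    chiplets: list[Chiplet] = field(default_factory=list)
    interconnect: str = "advanced-packaging-bridge"  # placeholder name

    def total_area(self) -> float:
        return sum(c.area_mm2 for c in self.chiplets)

# A monolithic design is effectively one big die that does everything;
# a disaggregated design splits the same functions across smaller dies.
monolithic = GpuPackage([Chiplet("big-die", "everything", 600.0)])
disaggregated = GpuPackage([
    Chiplet("compute-0", "compute", 150.0),
    Chiplet("compute-1", "compute", 150.0),
    Chiplet("render", "render", 120.0),
    Chiplet("ai", "ai", 80.0),
    Chiplet("memory-io", "memory-io", 100.0),
])

print(monolithic.total_area(), disaggregated.total_area())  # same silicon, different packaging
```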

There are several potential advantages to this modular approach. For starters, it could lead to significantly improved power efficiency, which is a big win in an era where energy costs and environmental impact are major concerns. Since individual chiplets can be powered down when not in use, thanks to power-gating capabilities, overall energy consumption can be reduced.
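As a rough, hypothetical illustration of why power-gating matters, the toy model below uses invented wattage figures (nothing here comes from Intel) to show how package power drops when only the chiplet a workload actually needs stays lit:

```python
# Toy model: package power = sum of active chiplet power, plus a small
# residual leakage for each power-gated chiplet. All figures are made up.
CHIPLET_ACTIVE_W = {"compute-0": 75, "compute-1": 75, "render": 60, "ai": 40}
GATED_LEAKAGE_W = 2  # assumed residual leakage per power-gated chiplet

def package_power(active: set[str]) -> int:
    return sum(
        w if name in active else GATED_LEAKAGE_W
        for name, w in CHIPLET_ACTIVE_W.items()
    )

# Light desktop use might only need the render chiplet:
print(package_power({"render"}))             # 66 W in this toy model
print(package_power(set(CHIPLET_ACTIVE_W)))  # 250 W with everything lit up
```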

But that's not all – disaggregated GPU designs also offer increased flexibility, customization, and modularity. Manufacturers could mix and match different chiplets to create specialized GPU configurations optimized for various workloads or market segments.
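A hedged sketch of that mix-and-match idea: the chiplet catalog, names, and areas below are invented, but they show how different SKUs could in principle be assembled from one shared set of dies rather than taping out a new monolithic chip for each market segment:

```python
# Hypothetical chiplet catalog shared across products (names and areas invented).
CATALOG = {"compute": 150.0, "render": 120.0, "ai": 80.0, "memory-io": 100.0}

def build_sku(counts: dict[str, int]) -> float:
    """Return total die area for a SKU built from catalog chiplets."""
    return sum(CATALOG[kind] * n for kind, n in counts.items())

gaming_sku = {"compute": 1, "render": 2, "memory-io": 1}      # render-heavy
workstation_ai_sku = {"compute": 2, "ai": 2, "memory-io": 1}  # AI-heavy

print(build_sku(gaming_sku))          # 490.0 mm^2 in this toy model
print(build_sku(workstation_ai_sku))  # 560.0 mm^2
```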

Of course, Intel isn't the only one eyeing this GPU design strategy. Primary rival AMD has also filed patents related to chiplet-based GPU architectures in the past, and last year even Nvidia was rumored to be adopting a multi-chiplet design for its Blackwell GPUs, though more recent reports debunked that.

Either way, it is increasingly clear that all three companies are eyeing this kind of architecture. AMD's latest Instinct MI300 series AI GPUs already use a multi-chiplet design, as do several of Nvidia's server offerings. However, we have yet to see a true chiplet-based GPU hit the consumer market.

Making the leap from monolithic to disaggregated GPU designs is no small feat, though. There are significant manufacturing and engineering challenges to overcome, not to mention the need for advanced interconnect technologies to seamlessly link all those tiny chiplets.

Does anyone edit these articles? Last time I checked, Navi 31 was chiplet-based, and it has been out for quite a while. I guess it only counts if Intel does it. What's next, "Intel finally does chiplets right with Arrow Lake" for an in-depth article? For that matter, the MI300 can be had with 6 GPU chiplets, or 1 CPU and 5 GPUs, all sharing the HBM memory like a monolithic chip. Word has it that Navi 41 would have been further down the chiplet path if it had not been scrapped.

The headline should read "Intel patent shows Intel is looking at chiplets for GPUs" rather than showing the rest of us the future. LOL
 
Arc Battlemage is expected to be monolithic, but future generations may embrace disaggregated tech
That's bold of you to assume that there is a future beyond Battlemage.

I initially had low hopes of Intel sticking with it, but to their credit they did a pretty good job of keeping at it with the drivers. So I got my hopes up somewhat.
Now, with their recent financial struggles, I wouldn't be surprised if they pull the plug should Battlemage do poorly.
 