Are 'chiplets' the answer to extending Moore's Law?

Cal Jeffrey

Why it matters: Moore's Law originally stated that the number of transistors that could be squeezed onto a chip would double every year. That was later revised to every two years, but around 2016 manufacturers hit a wall. Moore's Law has stalled, or at least slowed, and new innovations are needed to reignite it. Chiplets might be the answer.

Moore’s Law has been flagging for several years now. Transistors cannot get much smaller, the end of the paradigm is in sight, and that is bad news for chip manufacturers: steady shrinking is what has driven chip sales for decades.

“We're seeing Moore's law slowing,” AMD’s CTO Mark Papermaster recently told Wired. “You’re still getting more density, but it costs more and takes longer. It’s a fundamental change.” So foundries are scrambling for new ways to extend the cycle and continue bringing more powerful processors to the market. After all, what is the point of getting the latest and greatest processor or PC if it doesn’t offer any more power than the one already sitting on your desk?

One of the newer ideas is something being referred to as “chiplets.” Chiplets are modular pieces of silicon that can be put together somewhat like Lego blocks. Instead of printing a whole circuit on a single chip, a number of chiplets can be put together in various configurations allowing multi-die processors tailored for specific tasks such as machine learning or cloud computing.

Both AMD and Intel believe that the industry is moving in this direction because it allows them to quickly ship more powerful processors.

“[It’s] an evolution of Moore’s law,” said Ramune Nagisetty, a senior principal engineer at Intel.

The current processes of manufacturing the smallest transistors and chips are complicated and expensive. Chiplets could provide a way to continue building powerful processors with lower overhead and fewer flaws.

“The latest, greatest, and smallest transistors are also the trickiest and most expensive to design and manufacture with. In processors made up of chiplets, that cutting-edge technology can be reserved for the pieces of a design where the investment will most pay off. Other chiplets can be made using more reliable, established, and cheaper techniques. Smaller pieces of silicon are also inherently less prone to manufacturing defects.”

Chiplets are not just the future of processor fabrication; they are already here. Last year AMD tested this approach with a server processor called Epyc. Epyc was built from four chiplets, and AMD engineers estimate that making it as a single large chip would have at least doubled the manufacturing cost.
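The cost argument can be sketched with a toy yield model. The numbers below (defect density, die areas) are illustrative assumptions, not AMD's actual figures: under a simple Poisson defect model, the silicon cost per good part grows quickly with die area, which is why several small chiplets can beat one large die.

```python
import math

# Illustrative assumptions only -- not AMD's real process data.
D0 = 0.2            # assumed defect density, defects per cm^2
mono_area = 8.0     # assumed area of one monolithic die, cm^2
n_chiplets = 4      # Epyc-style: four chiplets per package
chiplet_area = mono_area / n_chiplets

def zero_defect_yield(area, d0=D0):
    """Simple Poisson model: probability a die of this area has no defects."""
    return math.exp(-d0 * area)

# Silicon cost per *good* part scales with area / yield:
# a failed die wastes its entire area, so large dies waste far more.
mono_cost = mono_area / zero_defect_yield(mono_area)
chiplet_cost = n_chiplets * chiplet_area / zero_defect_yield(chiplet_area)

print(f"cost ratio (monolithic / chiplets): {mono_cost / chiplet_cost:.1f}x")
# -> cost ratio (monolithic / chiplets): 3.3x
```

With these assumed numbers the monolithic die costs over three times as much per good part, in line with the direction of AMD's "at least doubled" estimate; the exact ratio depends entirely on the assumed defect density and areas.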

“Intel has a very deep roadmap for chiplets. This is the future.”

Epyc's success is apparent: earlier this week AMD announced a second-generation Epyc server processor made from eight chiplets (64 cores), effectively doubling the core count.

Intel has also been working with the modular design idea. It is designing processors for laptops that combine an Intel CPU with a “custom-designed graphics module from AMD.” It marks the first time Intel has used a core from another manufacturer in its own main-line processors.

“Combining the components chiplet-style allows them to work together more closely than if the graphics processor were a separate component,” said Nagisetty.

The chip is already being used in notebooks from HP and Dell, and future generations are already on the drawing board.

The Pentagon is also interested in this new process and has committed $1.5 billion to further research in the area through DARPA’s Electronic Resurgence Initiative. Universities, defense contractors, and chip foundries will receive grants through the program to advance chiplet technology. DARPA is also looking to come up with a standard that will allow modules from different manufacturers to work together. Intel has already agreed to produce a “royalty-free interconnect technology” for its chiplets.

As this modular approach to processor manufacturing continues, it will be interesting to see how the rest of the industry adapts to the new architectures, and whether it can extend Moore's Law in the years to come.


 
Chiplets are the future. They allow designs that were not previously possible, and AMD has not even shown its full hand yet. A study from the University of Toronto showed that chiplets not only enable the extremely high-core-count CPUs of the future, they can also produce faster chips. The researchers studied the effect of an active interposer acting as a routing mesh between the cores, and certain topologies achieved better core-to-core performance than a monolithic design. Done well, active interposers on top of chiplets will let companies not only greatly expand core counts (among the other benefits of modular CPUs) but also get better IPC than any monolithic chip.

I suspect that after 7nm Zen arrives in 2019, AMD's next big architecture change will have to be an active interposer.
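Why the interconnect topology matters can be sketched with a hypothetical hop-count comparison (this is not the Toronto study's actual model; the 64-core count and the two topologies are assumptions for illustration). Average shortest-path hops between cores differ sharply between a simple ring and a 2D mesh of the kind a routing interposer could provide:

```python
# Hypothetical sketch: average core-to-core hop counts for 64 cores
# under two interconnect topologies (ring vs. 8x8 2D mesh).
from itertools import product

N_CORES = 64

def ring_avg_hops(n=N_CORES):
    """Average shortest-path hops between distinct cores on a ring."""
    total = sum(min(d, n - d)
                for a in range(n)
                for d in ((b - a) % n for b in range(n))
                if d != 0)
    return total / (n * (n - 1))

def mesh_avg_hops(side=8):
    """Average Manhattan-distance hops between distinct cores on a mesh."""
    cores = list(product(range(side), repeat=2))
    total = sum(abs(ax - bx) + abs(ay - by)
                for (ax, ay) in cores for (bx, by) in cores)
    return total / (len(cores) * (len(cores) - 1))

print(f"ring: {ring_avg_hops():.2f} hops on average")
print(f"mesh: {mesh_avg_hops():.2f} hops on average")
```

Under these assumptions the mesh averages about 5.3 hops versus about 16.3 for the ring, which is the general sense in which a better interposer topology can improve core-to-core latency.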
 
I imagine chiplets will be far cheaper to manufacture. The question then becomes: will they pass those savings on to customers, or just reap the money as a way to inflate stock prices with massively increased profit margins...
 

I am never opposed to more cores per die, but what will happen on the software side? GPU-wise I really can't see it changing too much, since rendering is already parallel. On the CPU side of things, though, we already have to pull teeth to get native 64-bit processes and to keep 8c/16t populated with work; most software is still dependent on a few fast cores.
 

Software has been tied to low core counts because that's the way Intel wanted it. They sold 4-core processors for a long time, so of course software followed suit. If AMD sells a lot of higher-core-count processors, software will be written to take advantage of them. The GPU side can push CPU core counts up as well: for example, BFV recommends at least a 6-core/12-thread processor with ray tracing enabled, versus the standard 4-core/8-thread without it.
 
The entire "Moore's Law" conjecture is absolutely ridiculous to begin with. People phrase it like a law of physics when it's the exact opposite: it was never a law, and it has been broken in every iteration since it was first declared by media hype with nothing real to write about - just like this article.

Laws of physics and nature aren't broken. They are the rules evident to us. Any appearance that they are broken is due to a fundamental lack of understanding of the phenomena, and thus it was never a law to begin with.
 
Round and round we go, where we stop nobody knows. Anyone else remember the transputer?
 
Meh. I've got a QX6850 sitting right in front of me that was quad-core / dual-die ten years ago. There is nothing new under the sun!
 
What about cooling the chiplets? How much heat will they produce, and how do we deal with it?

Everyone talks about ways to increase CPU power, but no one mentions the cooling side of it... Recently everyone has been discussing quantum computing, but no one mentions how to cool down the tremendous heat it generates!
 
Honestly, do you remember the train wreck that was Itanium? Both Intel and AMD have this problem: developers aren't using modern instruction sets properly, or exploiting parallel workloads effectively. Intel went as far as to throw millions in with a university to come up with a solution; the conclusion they reached was mounting something like an ARM processor on the die that takes single-core code, spreads it across all cores using branch prediction, and pieces the proper results together at the end... It massively reduces the latency of software-based parallel solutions.

It's not really a hardware problem; it's a software issue. The problem mostly comes from the fact that people cut and paste what they learn in software development classes rather than putting in the effort to cut out the fat and take chances on new ways of doing things; these people joined the industry as a means of just getting a job. Before, people who worked in the computer fields mostly were, or started as, garage, basement, and armchair enthusiasts; they were the pioneers, and in a lot of cases they taught the new generations of programmers. It's hilarious, but about 4 months ago there was an article discussing this: programmers have been cutting and pasting the same routines for 30 years, and the issue is that those routines are not optimized or correct for modern times, yet the same sloppy shortcuts from 40 years ago are still prevalent.
 