IBM's new 2nm design can pack over 50 billion transistors onto a chip the size of a fingernail

Shawn Knight

The big picture: IBM has unveiled what it is calling the world's first 2 nanometer (nm) chip technology. According to the tech giant, the new 2nm chip design is projected to achieve 45 percent higher performance than today's most advanced 7nm node chips at the same power level. Alternatively, the chip can be configured to match the performance of today's 7nm chips while consuming 75 percent less energy.
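
Taken together, those two claims describe points on a single power-performance trade-off. Here's a quick back-of-the-envelope sketch of the quoted percentages; the normalization to a 1.0 baseline is ours for illustration, not a figure from IBM:

```python
# Back-of-the-envelope check of IBM's quoted figures, normalized to a
# hypothetical 7nm baseline of 1.0 performance at 1.0 power. Only the
# 45% and 75% numbers come from the announcement; the rest is
# illustrative arithmetic, not measured data.
baseline_perf = 1.0
baseline_power = 1.0

# Option A: same power, 45 percent higher performance.
perf_iso_power = baseline_perf * 1.45

# Option B: same performance, 75 percent less energy.
power_iso_perf = baseline_power * (1 - 0.75)

print(f"Iso-power performance: {perf_iso_power:.2f}x the 7nm baseline")
print(f"Iso-performance power: {power_iso_perf:.2f}x the 7nm baseline")

# Performance per watt under option B: 1.0 / 0.25 = 4x, which is
# consistent with the "up to four times more battery life" claim below.
print(f"Perf per watt (option B): {baseline_perf / power_iso_perf:.1f}x")
```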

IBM predicts that smartphones based on the new chip technology could have up to four times more battery life than 7nm-based handsets. Laptops with 2nm chips could handle tasks like language translation more quickly, while self-driving vehicles could benefit from faster object detection and better reaction times, making them safer to use.

As AnandTech’s Dr. Ian Cutress reminds us, however, nothing about the nomenclature really resembles what you’d expect 2nm to actually be. As he explains:

In the past, the dimension used to be an equivalent metric for 2D feature size on the chip, such as 90nm, 65nm, and 40nm. However with the advent of 3D transistor design with FinFETs and others, the process node name is now an interpretation of an ‘equivalent 2D transistor’ design.

Some of the features on this chip are likely to be low single digits in actual nanometers, such as transistor fin leakage protection layers, but it’s important to note the disconnect in how process nodes are currently named.

The real talking point, then, comes down to transistor density. IBM anticipates that with the new tech, it’ll be able to stuff as many as 50 billion transistors on a chip the size of a fingernail.
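
For a rough sense of what that implies per unit of area, a small sanity check is below. Note that the ~150 mm² die area is our assumed stand-in for "the size of a fingernail"; only the 50-billion transistor count comes from IBM's announcement:

```python
# Rough density implied by the headline figure. The 150 mm^2 die area
# is an assumption standing in for "fingernail-sized", not an IBM number.
transistors = 50e9
die_area_mm2 = 150.0  # assumption

density = transistors / die_area_mm2
print(f"~{density / 1e6:.0f} million transistors per mm^2")  # ~333 million/mm^2
```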

IBM, as you may know, doesn't actually manufacture chips. As Cutress highlights, IBM is a research outfit: its goal with 2nm and other technologies is to develop them, patent them, and license them out to partners like Intel and Samsung, and it is very good at that. Earlier this year, IBM topped the US patent list for the 28th year in a row with more than 9,100 patents granted.

A chief engineer with IBM told Cutress that products based on the new tech could hit the market in late 2024 and in higher quantities in 2025.


 
Only so that programming languages can become even slower and programmers even sloppier, resulting in equally fast (or slower) apps despite switching to a 10x faster CPU. I think they should start making slower CPUs with less dense chips and less memory, so that we can easily separate real programmers from amateurs.
 
2nm is already a thing? Sweet! Can somebody say GeForce RTX 2060 graphics performance on a future AMD Zen 4/5 APU? Hell yay-uh! :)
 
I don't know what you are referring to, but programming languages are in an interesting phase of getting a lot of performance and efficiency boosts, especially from large-scale open source contributions.
 
Those "boosts" are only making them faster because they are now horribly slow. And with all that boost they are still a lot slower per core than programming languages 30 years ago.

On top of that, their "user interface", aka syntax, is mostly horrible. Their solutions are dreadful. They don't learn from each other. They don't steal from each other. Everything they are now reinventing was invented a long time ago, yet they still do it wrong. I never suspected that programming languages would devolve this badly.

I also never believed that the web would get this crappy. It's a miracle it still works, mostly because TCP/IP was properly designed. Everything on top of it is total garbage, and aside from WASM, it's not really getting better.

I sent lots of emails to web consortium people 10 years ago. In the end they implemented some of it, like WASM, but a lot is still waiting.

And they still implemented it wrongly. Why would they use a stack machine? WTF? Is there any popular CPU built on a stack architecture? No, and I'm not talking about the FPU, but the CPU. And yet they've chosen the slower solution. Fantastic.

Then in 10 years they'll make a proper solution and say, "Look, we made it three times faster." Sure you did, because you screwed up the first time.
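
For readers following the stack-machine complaint above: stack code keeps operands on an implicit stack, while mainstream CPUs (x86, Arm, RISC-V) are register machines that name their operands explicitly. Below is a minimal sketch of the difference using a toy bytecode invented for this illustration; these are not real WASM or CPU instructions:

```python
# Toy illustration of stack-machine vs register-machine code for
# (a + b) * (c + d). The opcodes here are invented for the sketch.

def run_stack(program, env):
    """Stack machine: operands live on an implicit stack."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(env[args[0]])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def run_regs(program, env):
    """Register machine: every instruction names its operands."""
    regs = dict(env)
    dst = None
    for op, dst, a, b in program:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
    return regs[dst]  # result of the last instruction

env = {"a": 1, "b": 2, "c": 3, "d": 4}

# Stack form: seven instructions, none naming their operands.
stack_prog = [("push", "a"), ("push", "b"), ("add",),
              ("push", "c"), ("push", "d"), ("add",), ("mul",)]

# Register form: three instructions, each naming its operands.
reg_prog = [("add", "t0", "a", "b"),
            ("add", "t1", "c", "d"),
            ("mul", "t2", "t0", "t1")]

print(run_stack(stack_prog, env))  # 21
print(run_regs(reg_prog, env))     # 21
```

Worth noting: WASM's designers picked a stack encoding largely because it is compact and easy to validate, and engines compile it down to register-based machine code anyway, so the cost is mostly paid once at compile time rather than on every execution.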
 
Those "boosts" are only making them faster because they are now horribly slow. And with all that boost they are still a lot slower per core than programming languages 30 years ago.

On top of that, their "user interface" aka synax is mostly horrible. Their solutions are dreadful. They don't learn from each other. They don't steal from each other. Everything that they are now reinvening was invented long time ago, but they still do it wrong. I never suspected that programming languages will devolve this badly.

I also never believed that web will be this crappy. It's a miracle it still works. Mostly because TCP/IP was properly designed. Everything on top of it - total garbage. And aside from WASM, it's not really getting better.

I sent lots of emails to web consortium people 10 years ago, in the end they implemented some of it, like WASM, but a lot is still waiting.

And they still implemented it wronly. Why would they use stack-machine? WTF? Is there any popular CPU based on stack? No, I'm not talking about FPU, but CPU. And yet, they've chosen a slower solution. Fantastic.

Then in 10 years they'll make a proper solution and say: "Look, we made it 3 times faster." Sure you did. Because you screwed up the first time.
I'm a desktop programmer (formerly from a C/C++ and embedded background at the tertiary level), not a web programmer. I agree the web is a steaming pile of poop.
 