Intel's Itanium is finally dead

Molematt

Looking back: After 20 years of failing to make a mark on the wider computing world, Intel finally stopped shipping its Itanium processors this past Thursday. While the company shifted its focus back to the more familiar x86 instruction set architecture (ISA) in 2004, it kept Itanium going for another decade and a half, until the line was put on the chopping block in 2019.

Itanium was the offspring of a 1990s partnership between HP and Intel, back when the range of ISAs in use was far more diverse than today's x86 and Arm duopoly. The IA-64 architecture was designed to push into the realms of then-exotic 64-bit computing, as well as to replace the proprietary solutions in use by many individual companies.

However, the project was quickly dubbed "Itanic" for the amount of cash being spent on it, its ambition, and its eventual results. Itanium's promise ended up sunk by a lack of legacy 32-bit support and the difficulty of writing and maintaining software for the architecture.

The dream of a single dominant ISA wouldn't come about until a few years later, but it would come about thanks to the AMD64 extension to the incumbent x86 instruction set. Then-senior VP (and now-CEO) Pat Gelsinger was steering Intel's Digital Enterprise Group at the time, and when 64-bit capability and multi-core computing came to x86, the company's Xeons proved much better suited to the market's demands.

The rest is history -- ol' reliable x86-64 remains the ISA of choice to this day, challenged only by Arm, and it ended up outpacing its Itanium cousin handily in both core counts and clock speeds. Even so, Intel continued to work on Itanium over the years, until the last generation was announced in 2017.

That finally came to a conclusion this week when the last Itanium silicon shipped. But if you're an extremely brave enterprise user with a very specific platform from two decades ago, The Register seems to have spotted a whole load of Itanium parts on the second-hand market. Go wild.


 
Amen!

A little longer and it could have outlived Intel. But let's see what that 12th gen brings, just in case. Maybe Intel can live (or die) another day ;)
 
Which is ironically part of the reason why those CPUs aren't good at certain workloads. x86 at its core is a pretty horrid CPU architecture; it needed to be replaced, not extended.
Damn right! Most of it is a huge set of parasitic legacy extensions that nobody needs anymore, but they are still there for compatibility reasons. Old crap like MMX, 3DNow!, SSE, SSE2, SSE3, SSSE3, SSE4, AVX, AVX2, AVX-512, etc. So it is correct to think of x86 today as CPU bloatware.

It can only die with x86 itself, which I think has already outlived its usefulness. We need something better than CPUs that linger on compatibility with some 30+ year old technology, because it contradicts the generally exponential pace of modern technology development, but I guess Intel and AMD didn't get the memo.

That's why Apple M1 chips show such good performance, despite being incredibly weak - they simply dumped all the legacy crap, in favour of optimum performance. Not that I favour those, just saying - this is the way to do things moving forward. It's like in physics, Newton's Third Law of Motion - the only way to move forward is to leave the crap behind.
 
Glad it suffered for so long; sadly, it didn't take Intel with it.

Thanks to Intel's broken promises, some really good CPUs were killed off, like the amazing DEC Alpha.

To those speaking negatively about supporting legacy software: I understand that because of that support, x86 CPUs are "bloated", but the market has spoken; legacy support is important.

That doesn't mean it needs to be done the way it's done today; perhaps some type of software emulation, as currently done by the Apple M1.
 
Good comments. I would love to see us evolve towards a new 64-bit architecture for the PC; the process should have started a decade ago in parallel, so by now we would be well on the way to phasing out x86 and its mess of extensions. Imagine Apple Mx series CPUs in another 2-3 years.

I think AMD and Intel are working on this but will milk the current architecture for a very long time. We know both have stated ~75% IPC uplifts over today's chips by 2025 alone. I'd like to see Qualcomm come out with something competitive with the M1X and M2 to create more incentive in the PC space if Intel and AMD aren't getting the message loud enough.
 
Good comments. I would love to see us evolve towards a new 64-bit architecture for the PC; the process should have started a decade ago in parallel, so by now we would be well on the way to phasing out x86 and its mess of extensions. Imagine Apple Mx series CPUs in another 2-3 years.

I think AMD and Intel are working on this but will milk the current architecture for a very long time. We know both have stated ~75% IPC uplifts over today's chips by 2025 alone. I'd like to see Qualcomm come out with something competitive with the M1X and M2 to create more incentive in the PC space if Intel and AMD aren't getting the message loud enough.
Couple of items:

Qualcomm has proven to be as bad as Intel towards the industry, so no thanks.
As good as the M1 is, Apple needs to show something that competes with Threadripper and Epyc and that also allows a system with memory and GPU slots.
Personally, I want ARM to be independent and to see Samsung and the other licensees move up the performance bar.
 
Couple of items:

Qualcomm has proven to be as bad as Intel towards the industry, so no thanks.
As good as the M1 is, Apple needs to show something that competes with Threadripper and Epyc and that also allows a system with memory and GPU slots.

A couple of points:

- no one is talking about RISC-V. Why?

- Apple's architecture is engineered for optimization, not to be a watt- or space-eater. At the moment a lot of people (me included) are working faster and more smoothly with the M1 than with a 65 W CPU and a 100-150 W GPU, due to that optimization.

- Apple will get to slots and those HEDT needs, but not just now. Their SoCs would have to work as chiplets and be much bigger, which is not a present need; Apple can just focus where they earn money and leave those small markets to x86 tech.

- ARM64 chips, IF they drop the 4-6 W limitation: let them use 15-25 W and a fast 128-bit memory bus and everything will be on a high level. Even the Samsung SoCs with AMD graphics: if one version is made to use up to 15 W and very fast memory (LPDDR5?), I assure you that most 12th gen i7s will be overtaken. My S20 Ultra and most iPhone 12s handle multimedia tasks and web browsing much faster and more smoothly than most x86 chips out there.
 
- no one is talking about RISC-V. Why?
Simple: it's not ready yet, but hopefully soon.
Apple's architecture is engineered for optimization, not to be a watt- or space-eater. At the moment a lot of people (me included) are working faster and more smoothly with the M1 than with a 65 W CPU and a 100-150 W GPU, due to that optimization.
Not sure what that has to do with my comment.
Apple will get to slots and those HEDT needs, but not just now. Their SoCs would have to work as chiplets and be much bigger, which is not a present need; Apple can just focus where they earn money and leave those small markets to x86 tech.
Now that sounds like an excuse.


ARM64 chips, IF they drop the 4-6 W limitation: let them use 15-25 W and a fast 128-bit memory bus and everything will be on a high level. Even the Samsung SoCs with AMD graphics: if one version is made to use up to 15 W and very fast memory (LPDDR5?), I assure you that most 12th gen i7s will be overtaken. My S20 Ultra and most iPhone 12s handle multimedia tasks and web browsing much faster and more smoothly than most x86 chips out there.
In fewer words, ARM CPUs need a proper desktop-oriented version released. With that I agree, and I hope it's done soon, perhaps with someone also releasing an equivalent RISC-V CPU.
 
Gosh I'm amazed it took this long to finally kill it.

...But (fat, giant but) Itanium got one thing right. I've seen this in the comments too. The only way forward is to leave old stuff behind. People demanded compatibility. Sure, for some transition period I can understand that, but even so, after 2-3 years enough backward compatibility is enough. It's not like IT corporations care about e-waste. They inflict it daily on millions of people.

Look at current PC hardware and OSes. There is built-in compatibility stretching back 50 years. Who uses anything from the XT era? Do you still have a floppy drive on W10? Perhaps some parallel printer support on W11 too, or a ZIP/MO drive? At least in Linux you can add some obscure binaries if you really need them (assuming there are any in the first place), keeping the kernel relatively lean. But Windows is just a mess of ancient compatibility that should've been disposed of years ago.

I like what Apple has done with the M1 Macs. Leave baggage behind and start fresh. The M1, a tiny 15 W chip, outperforms a colossal i9. If you don't force software to move forward with hardware, they will copy/paste/repeat the same old for as long as they want (looking at you, M$). x86-64 deserves to die, and the sooner the better, because there is no way to expand with it.

Ask yourself one simple question. Have you ever seen Captain Picard using a PC with Windows 11 compatibility mode on the USS Enterprise?

Me neither :p
 
The comment sections here are just bizarre.

Anything Windows 11:
"I'm not getting Windows 11 because it calls for too new a CPU!"

Articles on old CPU architectures:
"Why haven't we seen ARM Desktop CPU's or VISC-V CPU's".
"They keep bloating x86 to cater for legacy software".

The lack of consistency on people's arguments on here is astounding.
 
Amen!

A little longer and it could have outlived Intel. But let's see what that 12th gen brings, just in case. Maybe Intel can live (or die) another day ;)
LOL. Intel is making a few times the money that AMD makes and Intel is going bankrupt. Yeah, right :))

I know Intel isn't the absolute performance king right now, but their product, even 11th gen, is quite decent. Nowhere near the disaster AMD was in the pre-Zen era, when their CPUs were absolute garbage.
 
The lack of consistency on people's arguments on here is astounding.
It's not really the same thing though, is it? One is hardware and the other is software. Software restrictions on hardware that would otherwise be fine are a slap in the face.
 
As an M1 user, I can tell that the wattage advantage the M1 has over other laptop/desktop CPUs comes down to the process node it is manufactured on (5 nm) and the configuration, i.e. big.LITTLE cores and OS optimization.
I know for a fact that other CISC CPUs will match those performance metrics come 2022/23.
At the end of the day it will all come down to ISA performance advantage, i.e. RISC vs CISC vs the rest.
This decade is turning out to be an interesting one. The death of the EPIC ISA (Itanium) has heralded the battle of the ISAs, the RISC vs CISC battle being drawn again: ARM vs AMD/Intel vs Loongson vs...
Long Live EPIC
 
Damn right! Most of it is a huge set of parasitic legacy extensions that nobody needs anymore, but they are still there for compatibility reasons. Old crap like MMX, 3DNow!, SSE, SSE2, SSE3, SSSE3, SSE4, AVX, AVX2, AVX-512, etc.

It can only die with x86 itself, which I think has already outlived its usefulness. We need something better than CPUs that linger on compatibility with some 30+ year old technology.

That's why Apple M1 chips show such good performance, despite being incredibly weak. They simply dumped all the legacy crap, for utmost performance. Not that I favour those, just saying - this is the way to do things moving forward. It's like in physics, Newton's Third Law of Motion - the only way to move forward is to leave the crap behind.
Apple has been able to do that because they have an ecosystem...they control the software and the hardware.
Intel, on the other hand, needs to support basically all the developers in this world, new and old, and there are a TON of applications that use old code, so rewriting everything just to remove the legacy x86 stuff from the CPUs would be an idiocy of epic proportions.

That x86 stuff everyone is complaining about takes up less and less of the die as process fabrication advances. Modern x86 CPUs are RISC CPUs inside, very similar to Arm, with only added decoding logic on top. So stop BS-ing about the x86 legacy and understand that legacy support is the REASON why x86 has stayed relevant and on top.
 
Why Intel has not gone to a 128-bit CPU is beyond my limited understanding, when everything good points to the massive benefits that would come... I mean, how massive were the changes from 8-bit to 16 to 32-bit to 64-bit?
It's a stupid thing to not already be there.
 
Why Intel has not gone to a 128-bit CPU is beyond my limited understanding, when everything good points to the massive benefits that would come... I mean, how massive were the changes from 8-bit to 16 to 32-bit to 64-bit?
It's a stupid thing to not already be there.
Because there's no point. Going to 64-bit was necessary as we were running into the 32-bit 4 GB address space limitation. 64-bit pushes the address space into the exabyte range; it's so big neither AMD nor Intel even bothers to wire up all the address lines.
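For anyone curious, the back-of-the-envelope arithmetic behind that goes roughly like this (just an illustrative, self-contained C snippet; the 48-bit figure is the commonly wired-up virtual address width, not a spec quote):

```c
#include <stdio.h>

int main(void) {
    /* 32-bit addressing: 2^32 bytes = 4 GiB -- the ceiling we actually hit */
    printf("32-bit: %llu bytes (~4 GiB)\n", 1ULL << 32);

    /* Typical x86-64 parts wire up only 48 virtual address bits,
       which already gives 256 TiB of address space */
    printf("48-bit: %llu bytes (~256 TiB)\n", 1ULL << 48);

    /* The full 64-bit space would be 2^64 bytes = 16 EiB */
    printf("64-bit: %.0f EiB\n", 0x1p64 / 0x1p60);
    return 0;
}
```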

Fact is: we're basically stuck with x86-64 for at least another two or three generations. We had a shot to replace x86, and we blew it.

And here's some food for thought: you know all those HW security issues that have been found in recent years? Anyone notice that Itanium was *never* listed as affected? There's something to be said for having a simple CPU design, rather than all those HW hacks x86/x86-64 has put in over the years to squeeze ever smaller performance gains out of a tapped architecture.
 
But (fat, giant but) Itaninum got one thing right. I've seen this in the comment too. The only way forward is to leave old stuff behind.
One of the main reasons why Intel pushed this was that they didn't want to continue sharing the x86 market with AMD and the others.

They even renamed x86-32 to IA-32, and Itanium was named IA-64, so as to pretend that they were both the same thing.

Personally, I believe a compromise can be achieved. x86-64 could strip out a lot of the hardware currently dedicated to backwards compatibility and replace it with software emulation/translation, hence reducing the insane amount of silicon currently used.

Granted, I am not an expert on that, so my "suggestion" could be absolutely wrong and the only way forward is for x86 to die.
 
The comment sections here are just bizarre.

Anything Windows 11:
"I'm not getting Windows 11 because it calls for too new a CPU!"

Articles on old CPU architectures:
"Why haven't we seen ARM Desktop CPU's or VISC-V CPU's".
"They keep bloating x86 to cater for legacy software".

The lack of consistency on people's arguments on here is astounding.
They are different people posting, according to the size of their wallet. I am for removing support for legacy software because I have a modern CPU.
 
As I remember it, DEC (I worked for them at the time; great place to work) sort of started the move to 64-bit. The DEC Alpha, Tru64 UNIX, and eventually VMS with 64-bit hardware and OS pushed HP to go 64-bit with Itanium. Intel used some of the same design concepts as Alpha. There were actually 64-bit Microsoft NT Alpha machines. Eventually the PC space went 64-bit.
FYI: There was a company that had a 128-bit machine for technical computing. I can't remember the name.
 
"After 20 years of failing to make a mark on the wider computing world, Intel finally stopped shipping its Itanium processors this past Thursday."

Wait a minute... This was still a thing?! I thought that AMD killed the Itanium over a decade ago! It is difficult for me to imagine why Intel kept that dead-end alive this long. The Itanium was an extremely potent processor but, as usual, Intel threw in a few deal-breaker caveats:

1: It was horrifically expensive ($1200-$4200USD[2001] if you bought 1000).
2: It had no x86 compatibility so anyone who chose it needed all new software.
3: Intel was the sole manufacturer of all parts so all upgrades cost a pantload.

AMD saw what a failure IA-64 was destined to be and so created AMD64 with the first Athlon 64, a 64-bit architecture that remained compatible with i386 and x86. As a result of Intel's abject failure with IA-64, Intel was forced to licence the AMD64 architecture, which is about the time that Intel's illegal shenanigans began. It's probably because this was the first time that AMD ever did something without Intel doing it first and just copying them. Intel realised that AMD was an actual threat in the market and so decided to "bump them off" like any good criminal organization would have done in their place.

Itanium was just another thing in an amazingly long list that makes the existence of Intel fanboys that much more mind-boggling.
 
And here's some food for thought: you know all those HW security issues that have been found in recent years? Anyone notice that Itanium was *never* listed as affected? There's something to be said for having a simple CPU design, rather than all those HW hacks x86/x86-64 has put in over the years to squeeze ever smaller performance gains out of a tapped architecture.
Itanium has no speculative execution, no SMT, no out-of-order execution. Those are the reasons it's immune to Spectre and Meltdown. The main reason Itanium failed is that unless the compiler produces "perfect" code, it's quite slow. Something the x86-64 CPUs that "squeeze ever smaller performance gains" don't really need...
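For reference, the well-known Spectre v1 (bounds-check bypass) pattern looks roughly like the sketch below -- the array names are made up and it's purely illustrative, but it shows why this class of attack hinges on hardware that speculates past a branch, which an in-order design without speculative execution simply doesn't do:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative Spectre v1 gadget (hypothetical names, sketch only) */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];   /* probe array the attacker later times */

void victim(size_t x) {
    if (x < array1_size) {                 /* bounds check */
        /* An out-of-order, speculating CPU may execute these loads before
           the branch above resolves, even when x is out of bounds... */
        uint8_t secret = array1[x];
        /* ...and this secret-dependent access leaves a cache footprint
           that survives the mis-speculation being rolled back. */
        volatile uint8_t tmp = array2[secret * 512];
        (void)tmp;
    }
}
```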
AMD saw what a failure IA-64 was destined to be and so created AMD64 with the first Athlon 64, a 64-bit architecture that remained compatible with i386 and x86. As a result of Intel's abject failure with IA-64, Intel was forced to licence the AMD64 architecture, which is about the time that Intel's illegal shenanigans began.
I'd like to have more information about "Intel's x86-64 license". Intel reverse-engineered x86-64 without a license, and now there is a cross-license agreement between AMD and Intel.
 