The Last Time Intel Tried to Make a Graphics Card

"When Intel claimed Larrabee was faster than existing GPUs, it was taken as a given, considering their talent pool and resource budget."
Anyone who knew x86 architecture versus NVIDIA and AMD knew that was a complete load of horse ****, sorry. I majored in CPU architecture and knew the claim was rubbish - there's absolutely no way in hell a CISC architecture (remember, x86 is a mess of an architecture to begin with) could possibly compete with masses of graphics-focused, specialised vector units. Sure, they won't be as flexible, but for throughput Intel was NEVER going to be remotely close to the performance. It's frankly hilarious they tried something so stupid.

This topic is dear to my heart because Project Offset was destroyed by these clowns.
 
sorry. I majored in CPU architecture and knew the claim was rubbish - there's absolutely no way in hell a CISC architecture...could possibly compete with masses of graphics-focused, specialised vector units.
Larrabee had both specialized vector and texture sampling units. And given that modern GPUs are becoming increasingly CISC-like, with many of the capabilities that Larrabee introduced, I would chalk up its failure to its overall MIMD- and cache-coherency approach, rather than CISC.
 
Larrabee had both specialized vector and texture sampling units. And given that modern GPUs are becoming increasingly CISC-like, with many of the capabilities that Larrabee introduced, I would chalk up its failure to its overall MIMD- and cache-coherency approach, rather than CISC.
They are way too general compared to the current graphics ecosystem - that's the point. And they are weighed down by legacy crap architecture decisions: this isn't just CISC, it's x86 CISC. There was NEVER any way they could deliver more performance than existing graphics cards. The throughput given up for that extra functionality was huge.

Larrabee's win was the fact that it was more general - you had to do something novel with it that the existing ecosystem could not do efficiently. They couldn't pull that off.

To say it could outperform existing graphics architectures was obviously just crap. The proof is in the pudding - they didn't get remotely close.

To me it's pretty much back-of-the-envelope math to see whether it's remotely feasible: transistors per core including cache (how do you consolidate something like an L3, etc.), the number of cores you can fit on an equivalent wafer, clock speed (quite critical), and effective IPC (remember, on the original Pentium architecture this was quite poor). On each of those factors it just didn't make sense to me that this could do better than NVIDIA's GPU architecture. Where are you blowing NVIDIA out of the water there? There's not a single win there apart from flexibility.
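A minimal sketch of that kind of estimate, assuming ballpark public figures (Larrabee's paper spec of up to 32 cores with 16-wide FMA vector units at a notional 1.5 GHz, versus a GTX 280-class GPU with 240 ALUs at roughly 1.3 GHz); the numbers are illustrative, not exact specs:

```python
# Rough, illustrative back-of-the-envelope peak-throughput estimate.
# Figures are ballpark public numbers, not exact specs, and peak GFLOPS
# ignores the real question: how much of that peak a software rasteriser
# can sustain versus dedicated fixed-function hardware.

def peak_gflops(cores, simd_lanes, flops_per_lane_per_cycle, clock_ghz):
    """Peak single-precision GFLOPS = cores x lanes x flops/lane/cycle x clock (GHz)."""
    return cores * simd_lanes * flops_per_lane_per_cycle * clock_ghz

# Larrabee as publicly described: up to 32 in-order x86 cores,
# 16-wide (512-bit) SP vector unit with fused multiply-add, ~1-2 GHz target.
larrabee = peak_gflops(cores=32, simd_lanes=16, flops_per_lane_per_cycle=2, clock_ghz=1.5)

# Contemporary GTX 280-class GPU: 240 scalar ALUs, MAD = 2 flops/cycle, ~1.3 GHz shader clock.
gtx280 = peak_gflops(cores=240, simd_lanes=1, flops_per_lane_per_cycle=2, clock_ghz=1.3)

print(f"Larrabee (paper spec): ~{larrabee:.0f} GFLOPS peak")
print(f"GTX 280-class GPU:     ~{gtx280:.0f} GFLOPS peak")

# Peak numbers alone can even favour Larrabee; the argument above is about the
# other columns of the estimate: die area spent on x86 decode and coherent
# caches instead of ALUs, and the effective IPC left once rasterisation,
# texture filtering and scheduling are all done in software.
```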
 