Intel CEO argues inference is better for the industry, Nvidia CUDA days are numbered

emorphy

In context: Intel CEO Pat Gelsinger has made the bold claim that the industry is better off focusing on inference rather than Nvidia's CUDA, because inference is resource-efficient, adapts to changing data without retraining a model, and because Nvidia's moat is "shallow." But is he right? CUDA is currently the industry standard and shows little sign of being dislodged from its perch.

Intel rolled out a portfolio of AI products aimed at the data center, cloud, network, edge and PC at its AI Everywhere event in New York City last week. "Intel is on a mission to bring AI everywhere through exceptionally engineered platforms, secure solutions and support for open ecosystems," CEO Pat Gelsinger said, pointing to the day's launch of Intel Core Ultra mobile chips and 5th-gen Xeon CPUs for the enterprise.

The products were duly noted by press, investors and customers, but what also caught their attention were Gelsinger's comments about Nvidia's CUDA technology and his expectation that it would eventually fade into obscurity.

"You know, the entire industry is motivated to eliminate the CUDA market," Gelsinger said, citing MLIR, Google, and OpenAI as moving to a "Pythonic programming layer" to make AI training more open.

Ultimately, Gelsinger said, inference will matter more than training for AI because the CUDA moat is "shallow and small." The industry wants a broader set of technologies for training, innovation and data science, he continued. One benefit, in his telling: once a model has been trained, inference carries no CUDA dependency, and the question becomes simply whether a company can run that model well.
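Gelsinger's claim that a trained model carries no CUDA dependency at inference time can be sketched in plain Python. The weights and layer shapes below are hypothetical stand-ins, not any real exported model: the point is only that once training is done, the forward pass is ordinary matrix arithmetic that any backend (CPU, GPU, NPU) can execute.

```python
import numpy as np

# Hypothetical weights "exported" from a model trained elsewhere
# (possibly on CUDA GPUs). At inference time they are just arrays;
# no training framework or vendor runtime is required to use them.
weights = {
    "W1": np.array([[0.5, -0.2], [0.1, 0.4]]),
    "b1": np.array([0.0, 0.1]),
    "W2": np.array([[1.0], [-1.0]]),
    "b2": np.array([0.05]),
}

def relu(x):
    # Elementwise max(x, 0) activation
    return np.maximum(x, 0.0)

def predict(x, w):
    """Forward pass only: plain matrix math, portable to any backend."""
    h = relu(x @ w["W1"] + w["b1"])   # hidden layer
    return h @ w["W2"] + w["b2"]      # output layer

x = np.array([[1.0, 2.0]])
print(predict(x, weights))  # → [[0.05]]
```

In practice this portability is what formats like ONNX and device-agnostic runtimes aim for; the sketch above just strips the idea to its arithmetic core.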

Also read: The AI chip market landscape – Choose your battles carefully

An uncharitable reading of Gelsinger's comments might be that he disparaged AI training because that is where Intel lags. The message was that inference, compared with model training, is far more resource-efficient and can adapt to rapidly changing data without retraining the model.

However, his remarks also make clear that Nvidia has made tremendous progress in the AI market and has become the player to beat. Last month the company reported third-quarter revenue of $18.12 billion, up 206% from a year ago and 34% from the previous quarter, attributing the increase to a broad industry transition from general-purpose to accelerated computing and generative AI. Nvidia GPUs, CPUs, networking, AI software and services are all in "full throttle," CEO Jensen Huang said.

Whether Gelsinger's predictions about CUDA come true remains to be seen, but right now the technology is arguably the market standard.

In the meantime, Intel is trotting out examples of customers using inference to solve their computing problems. One is Mor Miller, VP of Development at Bufferzone (video below), who explains that latency, privacy and cost are some of the challenges the company has run into when running AI services in the cloud. He says Bufferzone has been working with Intel to develop a new AI inference solution that addresses these concerns.


 
Man alive, this Intel uber-nerd sure makes a lot of noise, but has he actually delivered anything? Every week it's some new nonsense he's spouting about everything that Intel does being the right thing and the competition being wrong.

He can huff and puff all he wants, but Intel has a lot of catching up to do and it will take a long time to regain their leadership position. Pretending you're the leader doesn't make you the leader.

Maybe he should save his chest beating until Intel actually delivers a solid, energy efficient processor instead of just polishing and re-packaging the same turd over and over every six months.

[edit] Actually, scratch that. He shouldn't beat his chest at all, it might cave in.
 
Well, if Nvidia is a trillion-dollar company, it should be responding to customers' needs - it would be extremely stupid if they don't have at least a few plans sketched out.

AMD is giving more working memory in its latest cards - so let's see Nvidia's next iteration.

Android chips don't have to be as good as Apple's - for 96% of customers there's no difference.

But here, where time, energy, investment and ability are paramount, a big company like MS or Google can try a range of strategies.

Just like a company trying a Ford or GM fleet program in different regions.
 
Intel out here acting like OpenCL doesn't exist. Intel even has their own flavor of OpenCL.
Well, anything that Intel didn't invent or patent doesn't exist to them or is irrelevant. Reminds me of Sony's glory days, when they tried to tell customers what they need instead of listening to them.
 
This is one of the few things I've heard from Gelsinger that isn't hyperbole and BS. As far as AI is concerned, CUDA's days certainly are numbered. Companies want to move away from proprietary software and run software that is platform agnostic. AMD is making huge strides with ROCm and its performance is good. Along with OpenAI and PyTorch etc., people no longer need to be locked into Huang's egregiously overpriced ecosystem.
 
Context is important. He's talking about AI specifically, and he's actually talking sense. CUDA will live on for a long time in other applications, but in the AI world things are changing rapidly.
While this is true, this IS Pat we are talking about. The Peter Molyneux of chips.
Pat was CEO during the NetBurst era. That explains a lot.
 
It's more like Intel's days that are numbered.
Intel's "fake" 4 can't compete against TSMC N5/N6; even the GPU, SoC and I/O chiplets are manufactured by TSMC.
 
Maybe what Pat said is true, but many companies are already heavily invested in Nvidia hardware and will find it difficult to reduce their usage in the short to mid term. In any case, the AI "hot air balloon" will burst one day. Coupled with the fact that the US has started sanctioning Nvidia from selling to multiple rich nations, I expect demand to dive as fast as the initial craze rose.
 
Pat cannot say that CUDA is great. To stand a commercial chance, he must create doubt among customers so that they diversify.
 
CUDA has existed for how long? Very long. Other companies can definitely make something better, especially having learned from CUDA and how it works.
But it will definitely take time.
 