Nvidia is reportedly investing $30 billion in new custom chip unit

Daniel Sims

In context: The booming AI sector has propelled Nvidia's financials to new heights, as its AI acceleration GPUs have become highly sought after. However, these general-purpose components are facing increasing competition, prompting the company to develop more efficient chips tailored for specific clients and tasks.

Confidential sources have told Reuters that Nvidia is embarking on a $30 billion venture to form a new unit for custom chip solutions. The products would focus on multiple sectors, with AI as the chief concern. The company has reportedly begun talks with Microsoft, Meta, Google, and OpenAI.

Aside from AI, the new unit might also design custom telecom, cloud, automotive, and gaming hardware. For example, much of rival AMD's business comes from designing custom processors for Xbox and PlayStation consoles. Nvidia's expensive push into custom hardware is likely a response to intensifying competition in the generative AI arena, which has ballooned the company's profits.

Since the beginning of the generative AI craze, Nvidia has dramatically boosted its stock price and market cap by selling H100 data center GPUs at margins reportedly approaching 1,000 percent. While the company doesn't reveal its expenses, estimates indicate that manufacturing an H100 costs a few thousand dollars, while the cards sell for between $25,000 and $60,000.

Despite their astronomical price tags, clients can't get enough of them. Meta plans to buy hundreds of thousands of H100s before the end of the year. The demand has pushed Nvidia's shares above $720 this week, up 50 percent since the beginning of the year and more than 200 percent compared to 2022 – the company's $1.77 trillion market cap now trails only Amazon and Alphabet.

Nvidia's H100, A100, and new H200 AI chips currently dominate the markets for general-purpose AI and high-performance computing workloads. However, competition is heating up from companies like AMD, which claims its MI300X outperforms the H100 at a fraction of the cost.

Furthermore, other companies are designing proprietary silicon, which affords greater control over costs and energy consumption. The energy footprint of generative AI is approaching that of small countries, so improving efficiency has become a primary area of research in the sector, where bespoke hardware might become essential. Late last year, Amazon revealed the Trainium2 and the Arm-based Graviton4, which promise significant efficiency gains.


 
All that money to be locked into proprietary software that can't be run on other hardware. This is why there are so many competitors popping up, and Nvidia will never get that. CUDA's days in AI are rapidly coming to an end.
 
This is why Intel should be throwing the kitchen sink at developing their GPUs, not the slightly half-arsed approach they have taken so far. I can't understand why they haven't been more determined with this, given how integral it is to what they do.
 
If there is competition for Nvidia, then that's a good thing, at least from the buyer's perspective. The more the merrier, right?
 
Well, in theory it should drive more competition, lower prices, and more innovation. Then again, this is enterprise-level stuff, and companies in this space love to gouge businesses, which by and large will happily hand over the money to get what they need. AI is very much the current trend, but the problem for Nvidia is if that bubble bursts and demand for their super high-end hardware dries up, or the way we work with AI shifts away from massive systems crunching models. So we'll see.
 