As long as AMD can offer better GPUs than Intel, and better CPUs than Nvidia, they can...

Jay Goldberg


AMD held an analyst event last week, their second of the year. During their June event, they unveiled the impressive Instinct MI300, a GPU specifically designed for AI. The event featured numerous high-profile partners on stage and positive commentary on various initiatives.

And, of course, their stock fell the next day. Apparently investors were disappointed that AMD did not announce any paying customers for the product, nor did they release many performance metrics for the MI300. However, at last week's event, they revealed that the MI300 matches Nvidia's H100 in performance. Additionally, they announced two major paying customers: Meta and Microsoft.

Surprisingly, the stock declined following the news (it's since bounced back handsomely and then some).

Editor's Note:
Guest author Jonathan Goldberg is the founder of D2D Advisory, a multi-functional consulting firm. Jonathan has developed growth strategies and alliances for companies in the mobile, networking, gaming, and software industries.

AMD has had a great run over the past several years. They have capitalized on Intel's stumbles and picked up a healthy amount of share in the profitable data center market. Moreover, CEO Lisa Su does not get enough credit for the turnaround she put in place, taking the company from a bumbler at execution, always missing deadlines, to a solid operations machine delivering compelling products at a regular clip.

Seen in that light, this latest news continues the trend. AMD announces a product, gets it into customers' hands, and is now poised to generate real revenue from it. Textbook execution. The MI300 demonstrates that they have real capabilities in the AI market, and not just in the form of a product, but an entire ecosystem of hardware partners like Dell and Lenovo, and a steadily growing roster of software partners.

But, there are still plenty of obstacles ahead. Performance is one thing, actual customer usage is another. For companies buying data center silicon, especially the hyperscalers, many factors beyond raw performance matter. Moreover, AMD's software still has a long way to go.

The dominant force in AI systems today is Nvidia's CUDA software, which has become the de facto platform for most AI applications. In response, AMD developed ROCm, now a complete product, but it hasn't yet emerged as a true contender to CUDA's dominance. ROCm's support is limited to a select few of AMD's products, and more crucially, it remains largely unfamiliar to the majority of AI developers. The issue isn't inherently with ROCm; rather, it's that CUDA has over a decade's head start, and Nvidia is not standing still.

Which brings us to the real challenge for AMD. Despite their consistent efforts over the years, the market has moved past them. Nvidia has overtaken everyone to become the leader in data center processors. In the new data center, AMD is in second place in CPUs and second place in GPUs. This has to be frustrating for a company that has worked so hard.

However, there's still potential for AMD to succeed. Even if Nvidia manages to dominate the data center market like Intel did for a decade with a 90% share (which seems unlikely), AMD represents an alternative GPU source that every customer says they want as they write ever bigger checks to Nvidia.

And if Nvidia's share eventually levels off somewhere below 70%, then that means a three-way contest for everything. In the past, hyperscalers saw the advantage in standardizing purchases with a few vendors. Now, in the era of heterogeneous computing, multiple vendors are necessary, and AMD should maintain its relevance.

As much as AMD seems destined to remain the perpetual second source in the data center, that may not be such a bad place to be. They don't need to have better CPUs than Intel or better GPUs than Nvidia. So long as their GPUs are better than Intel's and their CPUs are better than Nvidia's, AMD will have a healthy roster of paying customers. They do not need to outrun all the bears; they just have to outrun the other racers.


 
AMD's first success came as a second source for Intel CPUs and, as the article correctly highlights, that very much remains part of its DNA to this day. As long as there are more impressive companies around whose innovations it can attempt to duplicate, AMD should be able to survive.
 
The pure hatred for Intel and Nvidia on this site annoys me. I have top-of-the-range Intel and Nvidia kit and love it. I also have a pure AMD setup for when family friends pop round and want to game with me; again, I love it. I have a preference for neither, as they both get the job done and play the same games. So why is EVERY article on this site anti-Intel and anti-Nvidia? A sly dig here, another pop there, and test rig results are always way off from what I get. It's like Android vs. iOS but everlasting, and I'm way too old for this type of JOURNALISTIC tomfoolery. Either get professional or leave the job; it isn't a hard choice for a writer unless there's £££ involved, in which case it's your duty to tell us about it as JOURNALISTS.
Both Nvidia and Intel are guilty of monopoly practices, so it is not about this site or that site. Remember the backlash when AMD tried to revoke its promise to support AM4, or the mandatory FSR thing. It's all about bad practice, and both Intel and Nvidia excel in this domain.
 
All AMD really has to manage is to consistently put out good products. The professional world doesn't switch brands on a dime. They've managed that with EPYC, and it shows in their market share climbing a few percentage points per year consistently. The longer Intel underperforms and the longer AMD delivers, the larger their share will get.

Their GPU division hasn't managed this, well, ever. FirePro has been a joke for a long time. Their new Instinct cards are impressive; now we wait to see if they can maintain this or if they will drop the ball again.
 
Facts? All you said was that it was the worst article ever because... you don't agree with it.
No, I pointed out, based on facts, why it was a lie. From the title to the end, the article is a biased narrative that ignores reality and narrows its view to points that strengthen its thesis. In the week since, AMD stock has been up more than 15% on the subsequent MI300 news.

Disagreement is for subjective or uncertain matters, not for facts; there's no room for it there. If you want the truth, do your own research.
 
leewheeler said:
The pure hatred for Intel and Nvidia on this site annoys me. [...] So why is EVERY article on this site anti Intel and Nvidia?
Ironically, this article is not pro-AMD at all.
 
I think AMD should also focus a bit more on their recycling, Green Sources of renewable and recyclable resources, and show a map/outline of how their products can be a part of a 'Green Initiative' towards a sustainable future. Nvidia doesn't do this, nor Intel. If they want a leg up, this could be one means of doing so.

For example, Newegg offers a Trade-in Incentive for GPUs. Granted, they only accept the prior 1-2 recent models in recent years, but it is a good start to help gamers, data centers, and AI enthusiasts get the newest tech, while also recycling older tech and repurposing them.

The reason I suggest this is because AMD's top reason for consumer purchases (and sometimes business purchases) has been budget. If you want the best of the best, no matter the cost, then you go Nvidia or Intel, but if you want a dang good build on a budget, then you go AMD.

So, if AMD offered a program to trade in older model GPUs and CPUs, similar to how AT&T and Verizon do so with phones, then they could potentially increase their income, their reputation, their public relations, and their relations with their partners (what business wouldn't want to pay $5 million USD with a $1 million USD trade-in of their older model GPUs?).

I realize the logistics of all of this would be a problem that I don't have the solution to, but the idea is solid, in my opinion.
 
Article seems a bit biased... AMD has been dominating Intel on data center CPUs for a little while now, eating at Intel's market share bit by bit.

Also, their stock isn't down; it's +12% (5d) for now.

Dominating in performance, yes. Not dominating in total AMD hardware actively deployed in data centers.

I mean, it will take some years to catch up; you just don't replace your whole Intel stack of servers with AMD. It needs to be tested and converted, which is time-consuming.

But yeah, since Ryzen, AMD has made a huge stride forward, including their now-RDNA graphics. They will be around in the future, and likely even more important than most would think.
 
I think AMD should also focus a bit more on their recycling, Green Sources of renewable and recyclable resources, and show a map/outline of how their products can be a part of a 'Green Initiative' towards a sustainable future. [...] For example, Newegg offers a Trade-in Incentive for GPUs. [...]

Sounds great, doesn't it?
What renewable resources should AMD use? What recyclable resources?

No company buys back its old chips to recycle them. The companies that purchased the hardware can have their old inventory processed for "recyclables". (There are already companies that do this.)

So why are you saying AMD needs to buy back its stuff just to recycle it?

Secondly, Newegg is Chinese-owned now, and it recycles older tech and repurposes it for its own country, not because it's "Green".
 
Having used ROCm... it's no bueno.

CUDA? You have a nice layering: the CUDA tools are not card-specific, they generate a type of bytecode (PTX), and the Nvidia driver accepts the bytecode and converts it to whatever instruction set your Nvidia card actually uses. So TensorFlow can just support CUDA 11.1 or 12.0 or whatever, and it'll support older and newer cards, including being able to take an older TensorFlow version and run it on cards that didn't even exist when that version shipped.

ROCm? Maybe this has changed (I used it a couple of years ago), but back then, GPU instructions for the SPECIFIC models it supported were built in throughout the stack. The ROCm stack, from the base code all the way up to the ROCm build of TensorFlow, ends up riddled with card-specific executable code. This ROCm supports these 4 cards? Those 4 cards are all the TensorFlow build will support. If you want to support some other model, or newer cards, or older cards, the ENTIRE software stack must be rebuilt from the base all the way up through TensorFlow itself. Frankly, it was a tad gross compared to CUDA's nice separation.

I found it VERY odd: ROCm IS using LLVM, so I have no idea why they placed all this pre-built GPU-specific executable code throughout rather than shipping LLVM bitcode and compiling for the GPU on the fly. But (at least 2 or 3 years ago), they didn't!
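The contrast described above shows up in the compiler invocations themselves. A minimal sketch, assuming the respective toolchains are installed; the file names and the gfx architecture list are placeholders, not anything from the comment:

```shell
# CUDA: compile once to PTX, a virtual ISA. The Nvidia driver
# JIT-compiles the PTX for whatever GPU is present at runtime,
# so one artifact can run on cards released later.
nvcc -ptx kernel.cu -o kernel.ptx

# ROCm/HIP: machine code is generated ahead of time, once per
# listed GPU architecture. A card absent from this list is
# simply not supported by the resulting binary.
hipcc --offload-arch=gfx906 --offload-arch=gfx90a kernel.hip -o kernel
```

`nvcc -ptx` and `hipcc --offload-arch` are real options, but this reflects the commenter's snapshot from a few years ago; check current ROCm documentation before relying on this picture.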
 
In my opinion, the font size on the "opinion" label on this article should be larger and more apparent. Some of the commenters are taking this as an indictment of Techspot as a whole.
 
This wrongly assumes that NVIDIA and Intel are the only competitors of AMD in the server space. That is only somewhat true in the consumer space, but in the data centre, ARM CPUs are competition, and a lot of companies are working on AI accelerators.

Just look at where AMD was before Zen. It was hardly anywhere. Being "better than NVIDIA" in the CPU space didn't give AMD any leverage, because buying something bad just because it's an alternative isn't something that companies tend to do.

AMD doesn't have to pass Intel or NVIDIA on every single measure to win data centre market share, but it has to be competitive in things that matter: performance/space (how many servers for a certain performance), performance/power and performance/price.

I don't think that this opinion is based on much. AMD does have good hardware these days, which is why it's winning market share. What matters less, contrary to consumer space, is the software ecosystem.
 
While it is true that Intel reigns supreme in the CPU space, and Nvidia in the GPU space, that does not mean that AMD is failing. Being the market leader only means that they were successful at some point in time and now need to fend off competition to maintain their market share. Intel, for example, still dominates when it comes to CPU market share, but has been losing share to AMD.

Where AMD has not made inroads is its GPU department, which I guess may likely be by choice. Why struggle so hard uphill when they can focus their resources on CPUs, which are likely cheaper to make and hence more profitable? GPUs are important, but secondary to AMD. They need them because this allows them to sell custom SoCs to maintain a steady source of income, and you can tell that they have been dominating in the console space. So they just need their GPUs to be "good enough".
 