The big picture: At this year's Dell Technologies World event, the company announced new products that include AI accelerator chips from AMD, Intel, Nvidia, and Qualcomm across its server and PC lines. Given that AI accelerators now represent one of the broadest sets of choices in the semiconductor market, the move makes sense. Still, it's an impressive range of offerings, and it highlights how rapidly the AI hardware ecosystem has evolved in recent years.

Choice is a beautiful thing. That's especially true for companies building products to meet the diverse needs of a wide range of customers. So, it's no surprise to see Dell Technologies embrace this mindset in its latest hardware offerings, unveiled at the Dell Technologies World event.

What's also notable about Dell's approach is that it reflects the growing momentum and increasing sophistication of products entering the server and PC markets.

After years of stagnation, enterprise servers are enjoying a resurgence in interest. Companies are recognizing the value of running their own AI workloads and building AI-capable data centers. As a result, traditional server vendors like Dell – and its competitors – are seeing renewed demand.

Nvidia's push for enterprise AI factories has also played a role. To Dell's credit, it was actually the first to introduce the concept through its Project Helix collaboration with Nvidia two years ago. Nvidia has since leaned into this trend, developing both hardware and software stacks optimized for enterprise AI workloads.

The reasons behind this are fairly straightforward. According to multiple sources, most companies still house the majority of their data behind corporate firewalls. More importantly, the data that hasn't been moved to the cloud is often the most sensitive and valuable – exactly the kind that's most effective for training and fine-tuning AI models. That makes it logical to process AI workloads locally. It's a classic case of data gravity: companies want to run workloads where the data resides.

That's not to say enterprises are pulling back from the cloud. Instead, there's growing recognition that cloud and on-premises computing can coexist. In fact, thanks to emerging standards like the Model Context Protocol (MCP), distributed hybrid AI applications that leverage both public and private clouds will likely go mainstream quickly.

With that context in mind, it's no surprise that Dell is expanding its joint AI Factory offerings with Nvidia. The company is introducing new configurations of its PowerEdge XE9780 and XE9785 servers featuring Nvidia's Blackwell Ultra chips – available in both air-cooled and liquid-cooled designs. Dell is also among the first to support Nvidia's new RTX Pro architecture, introduced at Computex in Taiwan.

The new Dell PowerEdge XE7745 server combines traditional x86 CPUs with Nvidia's RTX Pro 6000 Blackwell server GPUs in an air-cooled design, making it significantly easier for many enterprises to upgrade their existing data centers. The idea is that these new servers can run traditional server workloads while also opening up the option of running certain AI workloads. These systems don't have the high-end processing power of the most advanced Blackwell systems designed for cloud-based environments, but they have more than enough to handle many of the AI workloads that businesses will want to run within their own environments.

Beyond Nvidia-based options, Dell also introduced a range of PowerEdge XE9785 servers using AMD's Instinct MI350 GPUs. Thanks to an upgraded ROCm software stack, these systems are considered a viable – and in some cases, more power-efficient – alternative to Nvidia-based configurations. More importantly, they give enterprises greater flexibility in vendor selection.

Similarly, Dell announced one of the first mainstream deployments of Intel's Gaudi 3 AI accelerators, using PowerEdge XE9680 servers configured with eight Gaudi 3 chips. These solutions offer a more cost-effective alternative and are particularly well-suited for organizations leveraging Intel's AI software stack and optimized models from platforms like Hugging Face.

One of the most intriguing announcements came from Dell's PC division: the launch of the Dell Pro Max Plus portable workstation. This marks the first use of a discrete NPU in a mobile PC – specifically, the Qualcomm AI 100.

By leveraging the interface typically used for discrete GPUs, Dell was able to bring this new accelerator into an existing design. The AI 100 PC Inference Card features two discrete chips with a total of 32 AI acceleration cores and 64 GB of dedicated memory. The company is targeting the device at organizations that want to run customized inferencing applications at the edge, as well as at AI model developers who want to leverage the Qualcomm NPU design (though it's important to note that it's a different NPU architecture than the one found on the Snapdragon X series of Arm-based SoCs).

Thanks to its large onboard memory, the AI 100 allows for the use of models with over 100 billion parameters – far exceeding what's possible on even the most advanced Copilot+ PCs today.

In addition to hardware, Dell announced several new software capabilities for its AI Factory server platforms under the umbrella of the Dell AI Data Platform. One of the biggest challenges with large AI models is fast data access and memory loading. Dell's new Project Lightning addresses this with a parallel file system the company claims offers twice the performance of any comparable solution. Dell also enhanced its Data Lakehouse, a structure used by many AI applications to access and manage large datasets more efficiently.

All told, Dell put together what looks to be a solid set of new AI-focused offerings that give enterprises a broad range of alternatives from which to choose. Given the rapid rise of AI applications highlighted during the event's opening keynote, the combination of options Dell is bringing to market should allow even the most specific demands of a given enterprise to be met in a highly targeted manner.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on X.