In context: With the surge of news and excitement surrounding AI, it appears that virtually every major tech company has either released or is on the verge of announcing their efforts in this burgeoning new area. As these debuts start to roll out, it's interesting to note how each company is taking a unique angle related to their organization's product history and their overall view of and approach to the tech world. Of course, this is not terribly surprising in one way, but it also seems to reflect the fact that many of these generative AI efforts represent substantive changes emanating from the core of each company's ethos and overall philosophy.

Such is the case with IBM, the most recent entrant to the generative AI product and service world. At their annual Think conference in Orlando, they unveiled a range of new AI offerings that have a clear sense of "IBM-ness" to them under the new moniker watsonx. They reflect the tool-focused, open technical heritage of the company as well as a great deal of work being done by IBM Research.

IBM certainly isn't new to the world of AI. The original Watson system was arguably much of the world's introduction to artificial intelligence. With watsonx, IBM is inserting a critical new layer of capabilities into the heart of their software stack and hybrid cloud platform. In the process, they're also hoping to reinvigorate the entire IBM software story.

The initial iteration of watsonx consists of three key components – watsonx.ai, watsonx.data and watsonx.governance – that are designed to work together but, in the spirit of IBM's openness and interoperability, can also work with similar components from other vendors. watsonx.ai is the core AI toolset through which companies can build, train, validate and deploy foundation models. Notably, companies can use it to create original models or customize existing foundation models. watsonx.data is a datastore optimized for AI workloads that's used to gather, organize, clean and feed the data sources that go into those models. Finally, watsonx.governance is a tool for tracking the process of a model's creation, providing an auditable record of all the data going into the model, how it was created and more.

Another part of IBM's announcement was the debut of several of its own foundation models that can be used with the watsonx toolset or on their own. Not unlike others, IBM is initially unveiling an LLM-based offering for text-based applications, as well as a code generating and reviewing tool. In addition, the company previewed that it intends to create additional industry- and application-specific models, including ones for geospatial, chemistry and IT operations applications, among others.

Critically, IBM said that companies can run these models in the cloud as a service, in a customer's own data center, or in a hybrid model that leverages both. This is an interesting differentiation because, at the moment, most model providers are not yet letting organizations run their models on premises.

IBM also described initially having three different model architectures for each of these applications, all with stone-based names. The idea behind the different architectures is to offer different levels of cost and performance tradeoffs. As many organizations are starting to learn, running foundation models takes enormous amounts of computing and electrical power, so companies have expressed interest in seeing different options in this area.

Slate is what IBM calls an encoder-only AI model that requires task-specific labeled data for training purposes but doesn't offer generative capabilities. Sandstone is an encoder-decoder model that offers a mixture of non-generative and generative applications and is best suited to tasks where the generated output is small. Finally, Granite is a decoder-only model that is optimized for fully generative AI applications. In theory, you can build or use any type of model with any of these architectures. So, for example, while most organizations may want a generative text-capable foundation model, they may prefer a code-focused model that intelligently reviews the code submitted to it but doesn't generate new code.
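The practical difference between these architecture families largely comes down to the attention mask inside the transformer. A minimal NumPy sketch of that distinction is below; the `attention_mask` helper is purely illustrative and not part of any IBM or watsonx API.

```python
import numpy as np

def attention_mask(seq_len: int, causal: bool) -> np.ndarray:
    """Return a 0/1 mask where entry (i, j) = 1 means token i may attend to token j."""
    mask = np.ones((seq_len, seq_len), dtype=int)
    if causal:
        # Decoder-style (e.g., fully generative models): each token sees only
        # itself and earlier tokens, which is what enables left-to-right generation.
        mask = np.tril(mask)
    return mask

# Encoder-style (bidirectional): every token attends to every other token,
# which suits classification and review tasks rather than generation.
print(attention_mask(3, causal=False))
# Decoder-style (causal): the upper triangle is masked out.
print(attention_mask(3, causal=True))
```

An encoder-decoder model like the Sandstone family combines both: a bidirectional mask over the input and a causal mask over the output it generates.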

To add yet more options – and, quite honestly, more complexity – IBM said they will also be offering different sizes of these models with different numbers of parameters (apparently, there are also different model architectures being developed by IBM Research for future release as well). While the multitude of options can get confusing, this gives companies more flexibility because each of the different models and architectures will have different computing requirements. In situations where customers may want to run their own version of these models, that could prove to be important because they may not have (or want to purchase) the number of GPU-equipped servers needed to run the larger models.
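The connection between parameter count and hardware requirements can be seen with simple back-of-the-envelope arithmetic. The sketch below uses generic example sizes, not IBM's actual model parameter counts, and only accounts for the memory needed to hold the weights themselves.

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just to store the weights (fp16 = 2 bytes/param).
    Inference overhead such as activations and KV cache comes on top of this."""
    return num_params * bytes_per_param / 1024**3

# Illustrative sizes only; larger models quickly exceed a single GPU's memory,
# which is why smaller variants matter for on-premises deployments.
for params in (3e9, 13e9, 70e9):
    print(f"{params/1e9:.0f}B params -> ~{model_memory_gb(params):.0f} GB in fp16")
```

A 70-billion-parameter model at fp16 precision needs on the order of 130 GB just for weights, while a 3-billion-parameter model fits comfortably on a single mainstream GPU, which illustrates why offering multiple sizes gives customers meaningful deployment flexibility.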

Highlighting the connection between the new tools and foundation model offerings, IBM pointed out that, acting as "customer zero," they used the watsonx tools to build these models themselves. The implication is that organizations can leverage the new AI platform tools to create a wide range of offerings.

Another important point is that while much of the focus around watsonx is towards the creation of new foundation models, it can also be used to work with and modify existing open-source models from companies like Hugging Face. This is important because not many organizations have skill sets in house yet to build their own models.

To complete the Watson-related AI announcements, IBM also said that they are using the Watson name (capital "W" and no "x") for AI tools that are focused on individuals. The company discussed the Watson Assistant, Watson Orchestrate, Watson Discovery and Watson Code Assistant names for its existing line of software applications that are designed to help users be more productive at certain tasks.

These are all part of IBM's higher-level software strategy, which includes five main categories: Digital Labor (into which these all fall), IT Automation, Security, Sustainability and Application Modernization. The idea is that users will interact with a Watson prompt, and it will provide AI-powered responses to help with things like creating chatbots, building automation scripts, writing code and more. The watsonx line, on the other hand, is a set of tools aimed at AI developers within an organization who are tasked with creating and customizing foundation models.

As with other big AI announcements of late, there is a great deal to take in and absorb in all of this IBM news and it will be interesting to see how customers (and the market) react to these offerings. What I do find fascinating, however, is that these watsonx/Watson announcements also reflect some higher-level trends that are starting to become apparent in the rapidly evolving world of generative AI.

First, it's clear that these aren't simple add-ons meant to leverage the current excitement and hype around the category. To be sure, the timing of the announcements is undoubtedly tied to that, but the changes that IBM is making are at the core of the company's approach to software. These new capabilities sit in the middle of the software stack and platform strategy that the company has been talking about for years. They represent an important inflection point that will eventually impact their entire set of software offerings. In context, these announcements also underscore how seriously and how profoundly generative AI is shaping the tech world's agenda. To put it succinctly, this is way more than a passing fad.

Second – and this is more unique to IBM – the evolution of the Watson branding provides an interesting perspective on the evolution of AI overall. After the initial splash that IBM made with the AI-powered Watson system, there were quite a few years of disappointments that one might argue aligned with what's been called an AI winter. As ChatGPT and other generative AI tools exploded onto the scene late last year, however, there's been an incredible rebirth of interest in this next-generation, transformer- and foundation model-driven version of artificial intelligence and the increased effectiveness and impact it now offers. So, it seems rather fitting to see IBM reflect that new perspective with the revised watsonx/Watson branding as part of its own generative AI tools.

There are some who may argue that another AI winter could soon be upon us. However, as these IBM announcements illustrate, there's a much wider range of AI offerings now coming to market than we've ever seen before. Serious obstacles and concerns remain, but for organizations that are curious about the possibilities that generative AI can enable, there's never been a better time to start the exploration process.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter.