Digital assistants drive new meta-platform battle

Bob O'Donnell

In case you hadn’t noticed, the OS platform battle is over. Oh, and nobody really won, because basically, all the big players did, depending on your perspective. Google has the largest number of people using Android, Apple generates the most income via iOS, and Windows still commands the workplace for Microsoft.

But the stakes are getting much higher for the next looming battle in the tech world. This one will be based around digital assistants, such as Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana and Google’s Assistant, among others.

While much of the initial focus is, rightfully, around the voice-based computing capabilities of these assistants, I believe we’re going to see these assistants expand into text-driven chatbots, AI-driven autonomous software helpers and, most importantly, de facto digital gateways that end up tying together a wide range of smart and connected devices.

From smart homes to smart cars, as well as smartphones, PCs and wearables that span both our personal and professional lives, these digital assistants will (ideally) provide the consistent glue that brings together computing, services and much more across many disparate OS platforms. In short, they should be able to make our lives better organized, and our devices and services much easier to use. That’s why these assistants are so strategically important, and why so many other companies—from Facebook to Samsung—are working on their own variations.

Another fascinating aspect of these digital assistants is that they have the potential to completely devalue the underlying platforms on which they run. To put it succinctly, if I can use, say, Alexa across an iPhone, a Windows PC, my smart home components and a future connected car, where does the unique value of iOS or Windows 10 go? Out the door….

This overarching importance and distance from individual platforms is why I refer to these assistants as the pre-eminent example of a “meta-platform”: something that provides both the potential for expansion via APIs for new software development and the connectivity of a regular platform, but at a layer “above” a traditional OS.

With that thought in mind, it’s interesting to look at recent data TECHnalysis Research collected as part of a nearly 1,000-person survey of US consumers on usage of digital assistants on smartphones, PCs and, the hottest new entrant, smart speakers such as the Amazon Echo and Google Home.

As mentioned earlier, in their present incarnations, these digital assistants are primarily focused on voice-based computing and the kinds of applications that are best-suited for simple voice-driven queries. So, to get a better sense of how these assistants are used, respondents were asked in separate questions how often (or even if) they used digital assistants on smart speakers (such as Amazon Echo), smartphones and PCs. The results were combined into the chart below.

What’s fascinating is that, even though the smart speaker category is relatively new (the Echo is less than two years old) and Siri, the first smartphone-based digital assistant, arrived in 2011, it’s clear that people with access to a smart speaker like the Echo (around 14% of US households, according to the survey results) are using digital assistants significantly more than those who use them on smartphones.

While it’s tempting to suggest that this may be due to the perceived accuracy of the different assistants, in a separate question about accuracy the rankings for Alexa, Siri and Google’s Assistant were nearly identical, meaning there was no one clear favorite. Instead, these results suggest that a dedicated-function device placed in a central location within a home simply invites more usage. Translation: if you want to be relevant in these early stages of the digital assistant battle, you need a dedicated smart speaker offering.

Of course, the other challenge is that most people are now increasingly exposed to and use multiple digital assistants from multiple players. In fact, 56% of the respondents acknowledged that they at least occasionally (and some frequently) used multiple assistants, with differing degrees of comfort in making the switch between them. The largest single group, 26%, said they were loyal to and consistently used one assistant and ignored the others, but as competition in this area heats up, those loyalties are likely to be tested.

Digital assistants have a long way to go, and their current usage patterns provide only limited insight into what their long-term capabilities will be. Nevertheless, it’s clear that the meta-platform battle for digital assistants is going to have a significantly broader and longer-lasting impact than the OS platform battles of yore. That, by itself, will make them essential to watch and understand.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions. Image credit: CNN Money.

 
What’s fascinating is that, even though the smart speaker category is relatively new (the Echo is less than two years old) and Siri, the first smartphone-based digital assistant, arrived in 2011, it’s clear that people with access to a smart speaker like the Echo (around 14% of US households, according to the survey results) are using digital assistants significantly more than those who use them on smartphones.

Probably because the primary (if not sole) purpose of the "smart speaker" is to give you access to the digital assistant? And it's not like it's really "innovative" technology; maybe an innovative application of a technology, but they've been working on speech recognition for decades, & we've had speech recognition software for nearly as long (Dragon Systems' old DragonDictate for DOS, for example, let alone the Windows & Mac versions released in 1997).
 
The AI assistants won't take off until they can understand the accents and dialects of people around the world. People still wonder why I'm screaming my head off when driving; what they don't realise is that the dumb assistant is very good at understanding things I never said.
 
Agreed... plus just understanding normal spoken English. I thought using Cortana on my PC & phone would be fun and useful. After it turned out to be a PITA just to get her to understand basic commands or voice-to-text, I don't even bother now. Faster, easier, and more accurate to just type.
 