Opinion: Siri Shortcuts highlights the evolution of voice-based interfaces

Bob O'Donnell

Recap: To my mind, the most intriguing announcement from this year’s Apple Worldwide Developers Conference (WWDC) was the introduction of Siri Shortcuts. Available on iOS devices running iOS 12 and Apple Watches running watchOS 5, Siri Shortcuts essentially adds a new type of voice-based user interface to Apple devices.

It works by letting you build macro-like shortcuts for common functions across a wide variety of applications and then execute them by simply saying your custom-labelled phrase to Siri. Critically, these shortcuts work not just with Apple’s own apps and iPhone or iPad settings, but with applications from other vendors as well.
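For developers, the hook into this system is the shortcut “donation” API Apple introduced in iOS 12. As a rough sketch of how that works, an app might mark an NSUserActivity as eligible for prediction and hand it to the system; the class name, activity type, and invocation phrase below are hypothetical placeholders for illustration, not anything from Apple’s demo.

import UIKit
import Intents

class OrderViewController: UIViewController {

    // Donate a "reorder my usual" action so the system can surface it
    // in Siri Shortcuts and let the user bind it to a spoken phrase.
    // The activity type must also be listed under NSUserActivityTypes
    // in the app's Info.plist.
    func donateReorderShortcut() {
        let activity = NSUserActivity(activityType: "com.example.coffee.reorder")
        activity.title = "Reorder my usual coffee"
        activity.isEligibleForSearch = true
        activity.isEligibleForPrediction = true      // new in iOS 12
        activity.suggestedInvocationPhrase = "Coffee time"

        // Attaching the activity to this view controller donates it
        // to the system each time the screen is used.
        userActivity = activity
        activity.becomeCurrent()
    }
}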

Early on, most digital assistant platforms, such as Siri, Amazon’s Alexa, and the Google Assistant, focused on big-picture tasks: answering web-based queries, scheduling meetings, and delivering quick data nuggets like traffic, weather, and sports scores. Most assistant platforms, however, didn’t really make your smart devices seem “smarter” or, for that matter, make them any easier to use.

With the introduction of Samsung’s Bixby, we saw the first real effort to make a device easier to use through a voice-based interaction model. Bixby’s adoption (and impact) has been limited, but arguably that’s primarily because of the execution of the concept, not because of any fundamental flaw in the idea. In fact, the idea behind a voice-based interface is a solid one, and that’s exactly what Apple is trying to do with Siri Shortcuts.

"The idea behind a voice-based interface is a solid one, and that’s exactly what Apple is trying to do with Siri Shortcuts."

At first glance, it may seem that there’s little difference between a voice-based UI and a traditional assistant, but there really is. First, at a conceptual level, voice-based interfaces are more basic than an assistant. While an assistant needs to do much of the work on its own, a voice-based UI simply acts as a trigger to start actions or to make it easier to discover and use features that often get buried under the increasing complexity of today’s software platforms and applications. It’s often claimed that most people use less than 10% of the capabilities of their tech products, largely because they don’t know where to find certain features or how to use them. Voice-based interfaces can solve that problem by allowing people to simply say what they want the device to do and have it respond appropriately.

Given the challenges that many people have had with the accuracy of Siri’s recognition, this simpler approach is actually a good fit for Apple. Essentially, you’ll be able to do a lot of cool “smart” things with a much smaller vocabulary, which improves the likelihood of positive outcomes.

Another potentially interesting development is the possibility of using multiple digital assistants for different purposes. While I highly doubt that Apple will walk away from the ongoing digital assistant battle, it might recognize that there could be a time and a place for, say, using Cortana to organize work-related activities, Google Assistant for general data queries, and Siri for a variety of phone-specific functions, at least in the near term. Of course, a lot of questions would need to be answered and APIs opened up before that could occur, but it’s certainly an intriguing possibility. Don’t forget, as well, that Apple has already created a connection between IBM’s Watson voice assistant and iOS, so the idea isn’t as crazy as it may first sound.

Even within the realm of a voice UI, it makes sense to add some AI-type functions. In fact, Apple’s approach of doing machine learning on-device to help maintain data privacy makes perfect sense: Shortcuts can draw on the specific apps installed on your device and offer suggestions based on contacts and other personalized data stored on your phone. This is admittedly where the line between assistant and voice UI starts to blur, but Apple’s offering still makes for a more straightforward interaction model that its millions of users will likely find very useful.

As interesting as the IFTTT (If This Then That)-like macro workflows of Siri Shortcuts will be to more advanced users, however, I am a bit concerned that mainstream users could be confused and overwhelmed by the capabilities that Shortcuts offers. Yes, you can achieve a lot, but even from the brief demo onstage, it’s clear that you also have to do a lot to make it work well. By the time it’s officially released as part of iOS 12 this fall (as a free upgrade, by the way), I’m hoping Apple will create a whole series of predefined Siri Shortcuts that regular users can quickly access or easily customize.
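One way developers can lower that bar is by shipping ready-made shortcuts and presenting Apple’s built-in “Add to Siri” screen, which handles recording the user’s phrase for them. Here is a minimal sketch, again built around the hypothetical coffee-reorder activity from the earlier example:

import UIKit
import Intents
import IntentsUI

class AddToSiriViewController: UIViewController, INUIAddVoiceShortcutViewControllerDelegate {

    // Wrap a predefined activity in an INShortcut and hand it to the
    // system-provided "Add to Siri" view controller, where the user
    // records whatever phrase they want to trigger it.
    func presentAddToSiri() {
        let activity = NSUserActivity(activityType: "com.example.coffee.reorder")
        activity.title = "Reorder my usual coffee"
        let shortcut = INShortcut(userActivity: activity)
        let addVC = INUIAddVoiceShortcutViewController(shortcut: shortcut)
        addVC.delegate = self
        present(addVC, animated: true)
    }

    // Delegate callbacks simply dismiss the sheet when the user
    // finishes recording a phrase or cancels.
    func addVoiceShortcutViewController(_ controller: INUIAddVoiceShortcutViewController,
                                        didFinishWith voiceShortcut: INVoiceShortcut?,
                                        error: Error?) {
        controller.dismiss(animated: true)
    }

    func addVoiceShortcutViewControllerDidCancel(_ controller: INUIAddVoiceShortcutViewController) {
        controller.dismiss(animated: true)
    }
}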

The world of voice-based interactions continues to evolve, and I expect to see a number of advancements in full-fledged assistant models, voice-based UIs, and combinations of the two. Long-term, I believe Siri Shortcuts has the opportunity to make the biggest impact of anything announced with iOS 12 on how iOS users interact with and leverage their devices, and I’m really looking forward to seeing how it evolves.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions.
