Sometimes it’s the things hidden in plain sight that have the biggest impact in the long run.
And so it may be with voice-controlled devices in our homes, such as Amazon’s Echo. After an awkward start, the Echo has morphed into the new tech darling, receiving heaps of praise from both the tech industry and normal consumers for its ability to easily and intuitively bring control to our digital devices and services. The success of the product has even inspired new members of the Echo family, the Echo Dot (designed to be used with existing speakers) and the Amazon Tap, a smaller, portable version of the Echo, both of which are just being released.
At first glance, the Echo’s capabilities seem only modestly interesting. But when you start to dig into what the device actually is and how it works, you begin to understand not only its brilliance, but its potential to completely disrupt traditional mobile platforms and app models.
Technologists will tell you that the Echo is a front-end for Amazon’s Alexa, a cloud-based, voice-driven service that runs on Amazon’s own AWS (Amazon Web Services) infrastructure. Alexa takes the spoken commands the device captures, sends the audio to the cloud, converts it there into digital commands, and sends the results back to the device. The device then turns those commands into discrete actions, such as playing a song from a particular web service, turning on a light in your living room, or answering a query. Fair enough. But what really matters is that beneath the façade of this slick new home automation device lies a remarkably clever new type of platform: an invisible one.
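That round trip can be sketched in a few lines. This is a purely illustrative model of the device-to-cloud flow described above, not Amazon's actual API; all function names and the canned "recognition" results are hypothetical stand-ins.

```python
# Illustrative sketch of the Echo/Alexa round trip (hypothetical names,
# not Amazon's actual interfaces). The device does almost nothing itself:
# it ships audio up and executes whatever directive comes back.

def cloud_transcribe(audio: bytes) -> str:
    """Stand-in for cloud speech recognition: audio in, text out."""
    canned = {b"\x01": "play some jazz", b"\x02": "turn on the living room light"}
    return canned.get(audio, "unknown")

def resolve_directive(utterance: str) -> dict:
    """Stand-in for Alexa's intent resolution: text -> device directive."""
    if utterance == "play some jazz":
        return {"action": "play", "target": "music_service", "query": "jazz"}
    if utterance == "turn on the living room light":
        return {"action": "switch_on", "target": "living_room_light"}
    return {"action": "speak", "text": "Sorry, I didn't catch that."}

def handle_request(audio: bytes) -> dict:
    # Device -> cloud -> device: all the heavy lifting happens server-side.
    return resolve_directive(cloud_transcribe(audio))
```

The point of the sketch is the division of labor: the device itself is a thin microphone-and-speaker client, while recognition and decision-making live in the cloud.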
The Echo and its siblings have absolutely no screens—the primary means by which you interact with the device is your voice. (Yes, it has a remote, but it’s designed as a secondary input device.) That means there is no reliance on any kind of visual cues. Everything happens through your voice, making the technology essentially imperceptible. In fact, it’s arguably one of the best examples of invisible technology to date.
In addition, there’s no traditional operating system running on Echo. Instead there seems to be some type of basic RTOS (real-time operating system) that merely serves as an on-ramp to the cloud-based Alexa voice services, which are the real engine for the Echo and its siblings.
This lack of a traditional OS does not mean that the device is limited, however. You can add capabilities through what Amazon calls “skills”—essentially a new type of application that “runs” on Alexa. Skills aren’t big, fancy, function-filled screens of software; they’re simple directives to do certain specific things or retrieve certain bits of information when you request them.
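A minimal sketch makes the contrast with a traditional app concrete. Again, these names (`weather_skill`, `dispatch`, the intent strings) are illustrative assumptions, not the actual Alexa Skills Kit; the sketch only shows the shape of the model: a skill is a small handler that takes a parsed request and returns an answer, with no UI at all.

```python
# Hypothetical sketch of a "skill": not an app with screens, just a
# function that answers one kind of request. Names are illustrative.

def weather_skill(intent: str, slots: dict) -> str:
    """A skill is a simple directive: one parsed request in, one answer out."""
    if intent == "GetForecast":
        city = slots.get("city", "your area")
        return f"Here is the forecast for {city}."
    return "This skill can't help with that."

# The platform keeps a registry of skills and routes requests to them.
SKILLS = {"weather": weather_skill}

def dispatch(skill_name: str, intent: str, slots: dict) -> str:
    # Alexa-style routing: look up the skill, hand it the parsed intent.
    handler = SKILLS.get(skill_name)
    return handler(intent, slots) if handler else "Skill not found."
```

Compared with a mobile app, there is nothing to launch and nothing to look at: the entire "application" is the routing table plus a handful of handlers.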
From a usage perspective, if you compare the voice-driven model of something like Echo to today’s smartphones and other mobile devices (like smart watches), the differences become glaring. Instead of simply speaking a command, you need to find the right application on your smartphone’s display, launch it, and select a command inside the app, all while focusing exclusively on the screen and physically touching the device. Yes, most mobile OSes have started to add voice-based assistant features that are arguably similar to Alexa and its fast, hands-free means of interaction. However, they are still clearly secondary input methods on today’s mobile devices, not primary ones, and that distinction is extremely important. Given the inherently visual nature of human beings, it’s hard to imagine voice ever becoming a primary means of interaction on any device with a screen—we can’t help but want to look.
It’s also important to point out that you can’t do everything on an Echo that you can do on a smartphone. Nevertheless, you can do a lot of the most common and most important things. As a result, it’s not difficult to imagine a time in the not-too-distant future when smartphones get relegated to being more specialized devices for specific tasks, in the same way that PCs fell into that more specialized role with the growing importance of smartphones. Just as PCs have not gone away, neither will smartphones, of course, but they will likely slip a tier or two in the pantheon of our digital device universe.
The implications for the world of third-party apps are equally profound. A voice-based system likely needs no more than a few hundred, or perhaps a few thousand, “skills,” and there are few if any ways to monetize them. Skills are likely to be created as enablers for other connected devices or services, rather than as ends unto themselves, as most mobile apps currently are. Perhaps this is just as well, because the current mobile app store ecosystem is clearly faltering under its own size and weight and seems ready to implode at any moment.
To Amazon’s credit, the company recognizes the platform potential of Alexa and is actively encouraging other device makers to use Alexa in their own hardware designs and to create add-on skills. This could allow Amazon to do an end-run around the existing ecosystem players, such as Apple, Google, and Microsoft, and place it at the forefront of voice-based computing.
Most visions of the future imagine technology that seamlessly blends into our lives, gives us immediate access to all the world’s information, and helps make our lives easier. While the Amazon Echo family of devices may not completely fulfill all these requirements, it’s one of the clearest indicators of where personal technology is headed in the years to come.
Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter @bobodtech. This article was originally published on Tech.pinions.