Ambient computing gets a push forward with Amazon's Alexa enhancements

Ivan Franco

Editor's take: The idea of having computing intelligence invisibly available all around us has been a part of science fiction for decades now. It’s also something that some people thought we could instantly bring to real life when the first smart speakers -- notably Amazon’s Alexa-equipped Echo device--first debuted nearly seven years ago. But it turns out it’s hard to enable ambient computing capabilities that live up to some of those futuristic visions, really hard.

Efforts to expand beyond the simple tasks of asking for music to play, setting timers, or getting factual answers to random questions have continued apace. During this year's Alexa Live event -- Amazon's developer-focused ambient computing conference -- the company debuted a large range of new capabilities and highlighted how impressively far this burgeoning category has progressed.

Amazon allows developers to extend the capabilities of its devices and the Alexa digital assistant through what the company calls Skills, which are essentially audio applets that can be triggered by calling out certain keywords.
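For context on how these work under the hood, a skill is typically implemented as a small cloud function that receives JSON requests from Alexa and returns speech. Below is a minimal, hypothetical sketch using the Alexa Skills Kit SDK for Python (ask-sdk-core); the "daily tip" skill concept and its GetTipIntent name are invented purely for illustration.

```python
# Minimal custom-skill sketch using the Alexa Skills Kit SDK for Python (ask-sdk-core).
# The skill concept and intent name below are hypothetical examples.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name


class LaunchRequestHandler(AbstractRequestHandler):
    """Runs when the user opens the skill, e.g. 'Alexa, open daily tip.'"""

    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        return (handler_input.response_builder
                .speak("Welcome. Ask me for today's tip.")
                .set_should_end_session(False)
                .response)


class GetTipIntentHandler(AbstractRequestHandler):
    """Runs when Alexa matches the user's utterance to the GetTipIntent."""

    def can_handle(self, handler_input):
        return is_intent_name("GetTipIntent")(handler_input)

    def handle(self, handler_input):
        return (handler_input.response_builder
                .speak("Here is today's tip: you can ask me to set multiple timers at once.")
                .response)


# Wire the handlers into a skill and expose it as an AWS Lambda entry point.
sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(GetTipIntentHandler())
lambda_handler = sb.lambda_handler()
```

The mapping from spoken phrases to intents lives in the skill's interaction model, which is configured separately in the Alexa developer console.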

The concept has taken off: the company cites more than 130,000 skills offered by over 900,000 registered developers, some of whom are also involved with the hundreds of non-Amazon-branded devices with Alexa built in.

At this year's event, the range of new capabilities that Amazon is bringing to its conversational AI platform highlights how much ambient computing has evolved. Since the debut of the Echo Show, a number of Alexa devices have included displays, allowing information to be presented visually as well as audibly. The introduction of APL (Alexa Presentation Language) widgets lets developers create content services that display on these screens. In addition, Featured Skill cards offer a visual way for people to discover skills -- a sort of app store for skills -- which developers who apply to participate can use to promote their work.
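To make that a bit more concrete, the sketch below shows how a skill's response can carry a visual layer for display-equipped devices alongside the spoken one, by attaching an APL RenderDocument directive. This is a rough, hypothetical illustration: the document is a bare-bones Text layout rather than one of the new widget templates (widgets have their own packaging and home-screen flow), the NewsBriefIntent name is invented, and the exact directive classes should be verified against the current ask-sdk packages.

```python
# Hypothetical sketch: attach a simple APL visual to a spoken response,
# but only when the requesting device actually supports APL displays.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name, get_supported_interfaces
from ask_sdk_model.interfaces.alexa.presentation.apl import RenderDocumentDirective

# A bare-bones APL document: one Text component bound to a datasource value.
APL_DOCUMENT = {
    "type": "APL",
    "version": "1.8",
    "mainTemplate": {
        "parameters": ["payload"],
        "items": [{"type": "Text", "text": "${payload.headline}", "textAlign": "center"}],
    },
}


class NewsBriefIntentHandler(AbstractRequestHandler):
    """Speaks a headline and, on screened devices, also renders it as APL."""

    def can_handle(self, handler_input):
        return is_intent_name("NewsBriefIntent")(handler_input)

    def handle(self, handler_input):
        headline = "Alexa Live brings a wave of new developer features."
        builder = handler_input.response_builder.speak(headline)

        # Skip the visual directive on headless devices such as the basic Echo.
        if get_supported_interfaces(handler_input).alexa_presentation_apl is not None:
            builder.add_directive(RenderDocumentDirective(
                token="newsBriefToken",
                document=APL_DOCUMENT,
                datasources={"payload": {"headline": headline}},
            ))
        return builder.response
```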

One of the early (and ongoing) challenges of working with smart speakers and other ambient computing devices is remembering how to trigger the skills you want to use. While some are relatively straightforward, it is also easy to forget or accidentally use the wrong trigger words. In the early days of smart speakers, this could be particularly troublesome. Amazon started to resolve this problem with what it calls Name Free Interactions (NFI), which allow commonly used words to be recognized as triggers for various skills, essentially adding a degree of intelligence and flexibility to their usage. In other words, NFI made Alexa smart enough to understand what you meant to say instead of only precisely what you said.
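On the developer side, name-free routing is tied to Alexa's CanFulfillIntentRequest interface: before handing an utterance to a skill that was not explicitly named, Alexa can probe the skill to ask whether it could handle the request. The rough sketch below illustrates that probe in the Python SDK; the canfulfill class names and the set_can_fulfill_intent builder method are stated from memory and should be checked against the current SDK, and the intent names are the same hypothetical ones used earlier.

```python
# Hypothetical sketch: answer Alexa's "could this skill handle it?" probe,
# which underpins name-free (skill-name-less) routing of utterances.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type
from ask_sdk_model.canfulfill import CanFulfillIntent, CanFulfillIntentValues


class CanFulfillIntentRequestHandler(AbstractRequestHandler):
    """Tells Alexa whether this skill can plausibly service the probed intent."""

    def can_handle(self, handler_input):
        return is_request_type("CanFulfillIntentRequest")(handler_input)

    def handle(self, handler_input):
        probed_intent = handler_input.request_envelope.request.intent.name
        # Only claim intents this skill genuinely supports; Alexa combines this
        # answer with its own ranking to decide where to route the utterance.
        supported = probed_intent in ("GetTipIntent", "NewsBriefIntent")
        verdict = CanFulfillIntentValues.YES if supported else CanFulfillIntentValues.NO
        return (handler_input.response_builder
                .set_can_fulfill_intent(CanFulfillIntent(can_fulfill=verdict))
                .response)
```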

Amazon has announced that it is extending the capabilities of NFIs in three different ways. The first is Featured Skills, which can link common utterances like "Alexa, tell me the news" or "Alexa, let's play a game" to specific skills from various developers. In addition, personalized skill suggestions can connect individuals who commonly use particular phrases or requests with other relevant skills that offer similar capabilities.

In essence, this functions like a recommendation engine, because it will direct people to skills they don't currently use or have installed. Finally, Amazon has extended support for NFI to multi-skill experiences, where multiple experiences can be linked together and triggered by a single keyword or phrase.

What’s interesting about all of these capabilities is that they appear to be subtle tweaks to the original model of “launching” skills through specific keywords. However, they actually reflect a more profound understanding of the way people think and talk, which is critically important in enabling a more seamless, more intelligent experience.

In a related way, the new event-based triggers and proactive suggestions take the concept of ambient computing even further—though they also carry with them potential privacy concerns. Both of these capabilities leverage data like your physical location, time of day, whether or not you’re in a vehicle, and history of interactions to make suggestions about potential information (via automatically triggered skills) that can be provided.

Fundamentally, this takes the notion of intelligence to a new level, because it reflects a greater awareness of your activities, habits, and surroundings and makes AI-powered recommendations based on all that data. At the same time, it raises fundamental questions about privacy and trust, because it requires Alexa to know a great deal about your comings and goings to make reasonable suggestions. Without that data, it could be spouting out suggestions in the dark, likely leading to extreme frustration with the product. In addition, it raises fundamental trust issues between customers and Amazon, as some people may be uncomfortable with Amazon having access to all the data necessary to make these suggestions in the first place.

Of course, these privacy and trust concerns hit at the very heart of any type of ambient computing model, all of which require some amount of personal data in order to make any experience compelling instead of frustrating. There is no easy answer here, and Amazon has been working hard to improve its trustworthiness with certain parts of the market. However, there are some consumers who are going to have a hard time ceding their trust to Amazon.

In terms of interoperability, Amazon also introduced a number of important new platform capabilities that make it easier to integrate the Alexa experience across a wide range of devices. Send to Phone, for example, will—as its name suggests—let you do things like send your requested results to an Alexa app-equipped mobile device. You can then continue working with this information or content on the mobile device or some other larger-screened device.

At an even higher level, Amazon used the event to announce that all of its Echo devices would get an update that adds support for the new Matter smart home interoperability protocol. Matter is endorsed by a wide variety of big-name smart home device makers and tech companies (including Apple and Google) and is intended to serve as a means to make the process of discovering, connecting to, and controlling multiple smart home devices much easier.

Amazon also announced further enhancements to the intriguing Voice Interoperability Initiative (VII) that the company first debuted last year. Essentially a mechanism for integrating multiple voice assistants into a single solution, VII promises independence from a single voice provider while offering the potential to combine the best of different voice assistants into a single experience. Initially, this far-reaching concept will be productized via a new version of the Samsung Family Hub Refrigerator, which will integrate support for both Samsung's Bixby and Alexa and will be able to switch between them on a dynamic basis.

The bigger story is that through several years of what could be considered fairly modest updates to the Alexa platform, Amazon is finally starting to deliver experiences that much more closely match what many initially hoped for with the original Echo device.

It seems clear Amazon is serious about building intelligent everywhere computing initiatives that, hopefully, will make our working and personal lives significantly easier to navigate—and more rewarding as well. In the meantime, it will be interesting to see how a conglomeration of 50+ new features can make Alexa-powered devices more capable, more interesting, and easier to use.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter.

Sounds over-reaching. In the meantime, I have the latest Alexa at home, and it's way too dumb to resemble AI. It can only understand the most simplistic preprogrammed sentences, and it cannot even combine those. I found it not much use beyond asking it to switch lights on/off or asking about the weather. For the rest, a regular Bluetooth speaker would be sufficient, or just use the phone.
 
I'm just getting tired of saying "Porch Off" and getting a recipe for pork chops. Seems like the little things should be taken care of before anything like this.
 
That's pretty much what I use it for too. Also timers and alarms, and as an ad-hoc speaker for Spotify. The several times I've tried it for anything else, I was left feeling underwhelmed and wanting; like you said, it cannot combine even two related commands. "Play [playlist] on Spotify and set a sleep timer for 45 minutes," as an example. No bueno.
 