Through the looking glass: Voice recognition technology is already present in smartphones, household appliances and other gadgets, and it's primed to play an even bigger role in the future. For people who don't communicate using speech, however, it's not all that useful... but it could be.

Software developer Abhishek Singh has created an Amazon Alexa mod capable of interpreting basic sign language gestures. Using a laptop with a webcam, some deep learning techniques and an Echo, the setup can decode Singh's gestures and answer queries with both speech and on-screen text.
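
The article doesn't describe how the decoded gestures actually reach the Echo, but one plausible bridge from a browser app is the Web Speech API: synthesize the reconstructed query aloud so the Echo's microphone hears it, then transcribe the spoken reply for on-screen captions. The sketch below assumes that approach (and a hypothetical `caption` element); it is not Singh's actual code.

```typescript
// Hedged sketch: relay a decoded gesture query to a nearby Echo via
// speech synthesis, then caption the Echo's spoken reply via speech
// recognition. Uses only the standard Web Speech API (Chrome).
function speakQuery(query: string): void {
  // Speak the reconstructed query aloud for the Echo's microphone.
  const utterance = new SpeechSynthesisUtterance(`Alexa, ${query}`);
  window.speechSynthesis.speak(utterance);
}

function captionReply(onText: (text: string) => void): void {
  // Transcribe the Echo's spoken response and hand back the text.
  const SpeechRecognitionCtor =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
  const recognition = new SpeechRecognitionCtor();
  recognition.onresult = (event: any) => {
    onText(event.results[0][0].transcript);
  };
  recognition.start();
}

// Usage: relay "what's the weather" and show Alexa's reply as text.
speakQuery("what's the weather");
captionReply((text) => {
  document.getElementById('caption')!.textContent = text;
});
```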

Singh tells The Verge that he used Google's TensorFlow software, specifically TensorFlow.js, to power the experience. Unable to find any sign language datasets online, Singh had to create a basic set of signs and train the software himself.
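
For illustration, here is a minimal sketch of how a small, self-collected gesture set could be trained in the browser with TensorFlow.js, pairing MobileNet embeddings with a KNN classifier, a common TensorFlow.js pattern for learning a handful of custom classes from webcam frames. The model choice, element ids and sign labels are assumptions, not details from Singh's project.

```typescript
// Hedged sketch (not Singh's code): learn a few custom signs from webcam
// frames by storing MobileNet feature embeddings in a KNN classifier.
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();

async function main() {
  const net = await mobilenet.load();
  const webcam = document.getElementById('webcam') as HTMLVideoElement;

  // "Train": capture frames of a sign and store their embeddings.
  async function addExample(label: string) {
    const frame = tf.browser.fromPixels(webcam);
    const embedding = net.infer(frame, true); // MobileNet feature vector
    classifier.addExample(embedding, label);
    frame.dispose();
  }

  // "Predict": classify the current frame against the stored examples.
  async function predictSign(): Promise<string> {
    const frame = tf.browser.fromPixels(webcam);
    const embedding = net.infer(frame, true);
    const result = await classifier.predictClass(embedding);
    embedding.dispose();
    frame.dispose();
    return result.label; // e.g. "weather", "time", "hello"
  }

  // Example: record 20 frames of a hypothetical "weather" sign on click.
  document.getElementById('record-weather')!.onclick = async () => {
    for (let i = 0; i < 20; i++) {
      await addExample('weather');
      await tf.nextFrame(); // wait for the next animation frame
    }
  };

  // Classify live twice a second once at least one sign has been recorded.
  setInterval(async () => {
    if (classifier.getNumClasses() > 0) {
      console.log('Detected sign:', await predictSign());
    }
  }, 500);
}

main();
```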

Just the other day, Amazon pushed out an update for its Echo Show that makes the device more accessible to users who can't easily communicate by voice, letting them enter commands through the device's integrated touchscreen. It builds on an existing feature called Alexa Captioning, which displays responses on compatible devices that have a screen.

Singh's demo is just a proof-of-concept at this point, but he plans to open-source the code and publish a blog post outlining his work. He told The Verge that there's no reason the Echo Show, or any other camera- and screen-based voice assistant, couldn't build in this functionality right now.