mbrowne5061
Posts: 2,157 +1,362
For many years I've wondered if the folks holding keyboard patents had some sort of blackmail over the folks with speech recognition patents. If only 10% of the development costs of all these different keyboards had been spent on discrete speech recognition (DSR), we'd all be using it today.
DSR (not the current spoken word > server translation > back-to-handheld-device text box method) runs entirely onboard a mid-powered or higher PC, is intuitive (remember Scotty talking into a mouse about the whales in "Star Trek IV: The Voyage Home"?), and is much less subject to surveillance and/or data mining than something running through AI in a cloud. Long-term use of a microphone also doesn't cause wrist or hand problems.
Nuance Dragon (formerly Dragon NaturallySpeaking) is the only reliable DSR I'm aware of.
I do NOT work for them or sell their product; I'm just a long-term and enthusiastic power user.
Imagine an office full of people who can only interact with their computers by speaking to them.
Imagine doing any CAD work through a speech-only interface.
Imagine having to say "Load web browser. Go to Google. Search [topic]." instead of two clicks and a few keystrokes.
Speech recognition is fine and dandy when a keyboard isn't an option, but something tells me tactile interfaces are going to be around for a very long time.