Opinion: Hardware-Based AI

By Julio Franco
Jan 24, 2018
  1. As we come to grips with how rapidly, and how profoundly, the Artificial Intelligence (AI)-driven revolution has begun to impact the tech and consumer electronics industry, it’s worth thinking about the near-term developments that will start making a difference this year.

    At the recent CES show, AI was all around—from autonomous cars to voice-controlled, well, everything—and everybody wanted to get their product or service as closely associated with the technology as they could. In almost every case, the “AI” portion of the product was being powered and enabled by some type of internet connection to a large, cloud-based datacenter.

    Whether specifically highlighted or silently presumed, the idea was, and still is, that the “hard work” of AI has to happen in the cloud. But, does it?

    What we’re starting to see (or at least hear about) are products that can do at least some of the work of what’s called AI inferencing within the device itself, without an internet connection. With inferencing, the device has to be able to react to a stimulus of some sort—a spoken word or phrase, an image, a sensor reading—and provide an appropriate response.

    The “intelligence” part of the equation comes from being able to recognize that stimulus as part of a pattern—something the system has “learned”. Typically, that learning, or training, process is still done in large datacenters, but what’s enabling the rapid growth of AI is the ability to reduce these “learnings” into manageable chunks of software that can run independently on disconnected hardware. Essentially, it’s running the inferencing portion of AI on “edge” devices.
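As a loose illustration of what “reducing learnings into manageable chunks” can mean in practice, here is a minimal sketch of 8-bit weight quantization—one common way trained models are shrunk to run on edge devices. The weights and the quantization scheme below are invented for illustration, not drawn from any particular product’s pipeline.

```python
# Minimal sketch: shrinking "learnings" (trained weights) so inferencing
# can run on a disconnected edge device. Weights and scheme are
# illustrative assumptions, not a real product's export format.

def quantize(weights):
    """Map float weights to 8-bit integers plus one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Reconstruct approximate float weights on the device."""
    return [v * scale for v in q]

# "Trained" weights arrive from the datacenter as floats...
weights = [0.92, -1.27, 0.03, 0.445]
q, scale = quantize(weights)

# ...but the edge device stores one byte per weight instead of four,
# at the cost of a small rounding error bounded by the scale factor.
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The trade-off mirrors the article’s point: the heavy training stays in the datacenter, while the compact artifact it produces is small enough to run locally.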

    Simultaneously, we’ve seen both the development of new, and adaptation of existing, semiconductor chips that are highly optimized for running the pattern-matching neural networks behind modern AI. From DSP (digital signal processing) components inside larger SOCs, to dedicated FPGAs (field programmable gate arrays), to repurposed GPUs and lower-power microcontrollers, there’s a huge range of new choices for enabling AI inferencing, even on very low-power devices.

    Part of the reason for the variety is that there is an enormous range of potential AI applications. While many have focused on the most advanced applications, the truth is that there are many more simple applications that can run across a broad range of devices. For example, if all I need for a particular application is a smart light switch that’s only ever going to respond to a very limited number of verbal commands, doesn’t it make more sense to embed that capability into the device and not depend on an internet connection? Multiply that example by thousands or even millions of other straightforward implementations of very simple AI, and it’s easy to see why so many people are getting excited about hardware-based AI.
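The light-switch example above can be sketched in a few lines: once a phrase has been transcribed on-device, matching it against a tiny, fixed command table needs no cloud round-trip at all. The command set and action names here are hypothetical, chosen only to make the point concrete.

```python
# Hedged sketch of the "smart light switch" example: a device that only
# ever recognizes a handful of commands can match an (already
# transcribed) phrase against a small local table -- no cloud needed.
# The vocabulary and action names below are invented for illustration.
COMMANDS = {
    "lights on": "ON",
    "lights off": "OFF",
    "dim the lights": "DIM",
}

def handle(phrase):
    """Return the device action for a recognized phrase, else ignore it."""
    return COMMANDS.get(phrase.strip().lower(), "IGNORED")

print(handle("Lights ON"))        # a recognized command
print(handle("play some jazz"))   # outside the device's tiny vocabulary
```

Anything outside the tiny vocabulary is simply ignored, which is exactly why such a narrow task doesn’t justify a datacenter connection.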

    Even for more advanced applications, like today’s personal-assistant-driven smart speakers, product evolution is moving toward doing more work locally on the device without an internet connection. Doing so enables faster response times and more customization, reduces network traffic, and, if implemented intelligently, even enhances privacy by keeping some aspects of personalization and customization on the local device rather than sharing them with the cloud.

    Moving forward, the tough part is going to be determining how recognition tasks get split, with some handled locally and some in the cloud. Those are exactly the kinds of AI-based evolutions we should expect this year and over the next several years. It’s going to take lots of clever new hardware and software to enable these scenarios, but companies that succeed should be able to provide more “intelligent” and more capable solutions that consumers (and businesses) are bound to adopt in much larger numbers.

    To be clear, many implementations of AI are going to be dependent on network connections and large cloud-based datacenters, even for inferencing. But the opportunities for leveraging AI are so vast, and the range of applications (and computing requirements) so large, that there’s plenty of opportunity for many different levels of AI on many different types of hardware, powered by many different types of components. Translated into business potential, that also means there are strong opportunities for many different companies in the semiconductor, end product, software, services, and cloud-driven datacenter industries.

    Figuring out how we evolve from a purely cloud-based AI model to one that increasingly relies on hardware is going to be an interesting path that’s likely to develop many different sub-routes along the way. In the end, however, these developments highlight how cloud-based computing continues to evolve, how devices on the “edge” are becoming increasingly important, and how hardware continues to play a critical role for the future of the tech industry.

    Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions.

  2. ET3D

    Huh? Phone processors have moved to include an AI component. Why start with an assumption that's not true?
     
  3. jobeard

    There's a foundational framework for AI using the Prolog language: the Warren Abstract Machine (WAM) [see the wiki].
    It assumes a large number of registers, huge memory, and special operations. If the 'abstract' machine were cast into some physical implementation, it would look much like a desktop graphics card with an onboard GPU & fans.

    The so-called onboard inferencing comes from the coded model running on the WAM - - there's a good example in the wiki.

    Learning per se is adding new rules on-the-fly to create a body of facts that can then be used in the inferencing. Clearly, learning this way is limited by the memory available to hold all the rules, which is why AI is restricted to specific tasks.
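A toy illustration of the idea in this comment—adding rules on the fly and then forward-chaining over them, with a memory cap on the fact store—might look like the following. This is not the WAM or real Prolog, just a sketch with invented fact and rule names.

```python
# Toy forward-chaining inferencer: "learning" adds rules at runtime,
# and the fact store is bounded by available memory. Not the WAM or
# real Prolog -- just a sketch of the comment's point.
MAX_FACTS = 100  # stand-in for a device's memory limit

facts = {"switch_flipped"}
rules = [("switch_flipped", "light_on")]  # (premise, conclusion) pairs

def learn(premise, conclusion):
    """Add a new rule on the fly, growing the body of derivable facts."""
    rules.append((premise, conclusion))

def infer():
    """Apply rules until no new facts appear (or memory runs out)."""
    changed = True
    while changed and len(facts) < MAX_FACTS:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

learn("light_on", "room_lit")  # a rule "learned" after deployment
derived = infer()
```

The `MAX_FACTS` cap plays the role of the memory limit the comment describes: once the rule/fact store is full, the system can no longer "learn", which is one way to see why such systems stay restricted to specific tasks.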