Opinion: The contradictory state of AI

Bob O'Donnell

Through the looking glass: For most major tech advancements, the more mature and better developed a technology gets, the easier it is to understand. Unfortunately, it seems the exact opposite is happening in the world of artificial intelligence, or AI. As machine learning, neural networks, hardware advancements, and software developments meant to drive AI forward all continue to evolve, the picture they’re painting is getting even more confusing.

At a basic level, it’s now much less clear what AI realistically can and cannot do, especially at the present moment. Yes, there’s a lot of great speculation about what AI-driven technologies will eventually be able to do, but several things we were led to believe they could do now turn out to be a lot less “magical” than they first appear.

In the case of speech-based digital assistants, for example, there have been numerous stories written recently about how the perceived intelligence of personal assistants like Alexa and Google Assistant is really based more on things like prediction branches that were human-built after listening to thousands of hours of people’s personal recordings.


In other words, people analyzed typical conversations based on those recordings, determined the likely steps in the dialogue, and then built sophisticated logic branches based on that analysis. While I can certainly appreciate that this represents some pretty respectable analysis and the type of percentage-based prediction that early iterations of machine learning are known for, it’s a long way from any type of “intelligence” that actually understands what’s being said and responds appropriately. Plus, it clearly raises some serious questions about privacy that I believe have started to negatively impact the usage rates of some of these devices.

On top of that, recent research by IDC on real-world business applications of AI showed failure rates of up to 50% in some of the companies that have already deployed AI in their enterprises. While there are clearly a number of factors potentially at play, it’s not hard to see that some of the original promise of AI isn’t exactly living up to expectations. Of course, a lot of this is due to the unmet expectations that are almost inevitably part of a technology that’s been hyped up to such an enormous degree.

Early discussions around what AI could do implied a degree of sophistication and capability that was clearly beyond what was realistically possible at the time. However, there have been some very impressive implementations of AI that do seem to suggest a more general-purpose intelligence at work. The well-documented examples of systems like AlphaGo, which could beat even the best players in the world at the very sophisticated, multi-layered strategy necessary to win at the ancient Asian game of Go, for example, gave many the impression that AI advances had arrived in a legitimate way.

In addition, just this week, Microsoft pledged $1 billion to a startup called OpenAI LP in an effort to work on creating better artificial general intelligence systems. That’s a strong statement about the perceived pace of advancements in these more general-purpose AI applications and not something that a company like Microsoft is going to take lightly.

The problem is, these seemingly contradictory forces, both against and for the more “magical” type of advances in artificial intelligence, leave many people—myself included—unclear as to what the current state of AI really is. Admittedly, I’m oversimplifying to a degree. There is an enormous range of AI-focused efforts and a huge number of variables that go into those efforts, so it’s not realistic to expect, much less find, a simple set of reasons for why some AI applications seem so successful and why others are so much less so (or, at the very least, a lot less “advanced” than they first appear). Still, it’s not easy to tell how successful many of the early AI efforts have been, nor how much skepticism we should apply to the promises being made.

Interestingly, the problem extends into the early hardware implementations of AI capabilities, and the features they enable, as well. For example, virtually all premium smartphones released over the last year or two have some level of dedicated AI silicon built into them for accelerating features like on-device face recognition or other computational photography features that basically help make your pictures look better (such as adding bokeh effects from a single camera lens).

The confusing part here is that the availability of these features is generally not dependent on whether your phone includes, for example, a Qualcomm Snapdragon 835 or later processor or an Apple A11 or later series chip, but rather on what version of Android or iOS you’re running. Phones that don’t have dedicated AI accelerators still offer the same functions (in the vast majority of cases) if they’re running newer versions of Android or iOS, but the tasks are handled by the CPU, GPU, or other components inside the phone’s SoC (system on a chip). In theory, the tasks are handled slightly faster, slightly more power-efficiently, or, in the case of images, with slightly better quality if you have dedicated AI acceleration hardware, but the differences are currently very small and, more importantly, subject to a great deal of variation based on interactions between the software layers. In other words, even phones without dedicated AI acceleration at the silicon level are still able to take advantage of these features.
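
To make that concrete, here is a rough sketch (in Python, using TensorFlow Lite’s public API) of the kind of graceful fallback described above: try to hand a model to a dedicated accelerator through a delegate, and quietly run it on the CPU if none is available. The delegate library name and model file below are placeholders for illustration, not real files shipped with any particular phone or SDK.

```python
# Sketch of accelerator-or-CPU fallback for an on-device model.
# The delegate library and model path are hypothetical placeholders.
import tensorflow as tf

def make_interpreter(model_path: str) -> tf.lite.Interpreter:
    try:
        # Hypothetical vendor delegate exposing the phone's AI accelerator.
        delegate = tf.lite.experimental.load_delegate("libvendor_npu_delegate.so")
        return tf.lite.Interpreter(model_path=model_path,
                                   experimental_delegates=[delegate])
    except (ValueError, OSError):
        # No dedicated AI silicon (or no driver for it): the same model still
        # runs, just on the CPU -- which is exactly the point made above.
        return tf.lite.Interpreter(model_path=model_path)

interpreter = make_interpreter("face_detect.tflite")
interpreter.allocate_tensors()
```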

This is due, primarily, to the extremely complicated layers of software necessary to write AI applications (or features). Not surprisingly, writing code for AI is very challenging for most people to do, so companies have developed several different types of software that abstract away from the hardware (that is, put more distance between the code that’s being written and the specific instructions executed by the silicon inside of devices). The most common layer for AI programmers to write in is what are called frameworks (e.g., TensorFlow, Caffe, Torch, Theano, etc.). Each of these frameworks provides different structures and sets of commands or functions to let you write the software you want to write. Frameworks, in turn, talk to operating systems and translate their commands for whatever hardware happens to be on the device.
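
As a rough illustration of what “writing within a framework” looks like, here is a minimal TensorFlow/Keras sketch (TensorFlow being one of the frameworks named above). Nothing in this code mentions the underlying silicon; the framework decides how to map the math onto whatever CPU, GPU, or accelerator the device exposes. The tiny model and made-up data are purely illustrative.

```python
# Minimal framework-level AI code: describe the model and the training call,
# and let the framework handle the hardware underneath.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# The same call works whether tf.config.list_physical_devices("GPU") is empty
# or not -- none of the hardware details leak into this code.
x = tf.random.normal((256, 16))
y = tf.cast(tf.reduce_sum(x, axis=1) > 0, tf.float32)
model.fit(x, y, epochs=3, verbose=0)
```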

In theory, writing straight to the silicon (often called “the metal”) would be more efficient and wouldn’t lose any performance in the various layers of translation that currently have to occur. However, very few people have the skills to write AI code straight to the metal. As a result, we currently have a complex development environment for AI applications, which makes it even harder to understand how advanced these applications really are.

Ultimately, there’s little doubt that AI is going to have an extremely profound influence on the way that we use virtually all of our current computing devices, as well as the even larger range of intelligent devices, from cars to home appliances and beyond, that are still to come. In the short term, however, it certainly seems that the advances we may have been expecting to appear soon still have a way to go.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions.


 
Around 1986 I worked for a company which provided Prolog (a symbolic processing language used for AI) -- it even became a subcontractor to IBM. The promises for the future were rampant then, just as they are now. Those promises have been shown to be far more theoretical than actual.

Yes, AI has the structure for machine learning. However, the finite limitations of our systems quickly lower the limits of what actually can be delivered. For example, our version of Prolog could solve some problems that the native IBM product could not, due to the manner in which the facts were coded consuming too much memory; e.g., one application took 15 MB of application memory, which was enormous then.

While that work was in the late '80s, the implementation at that time was incapable of using threaded processes -- my guess is the Warren Abstract Machine is where that occurs.
 
While I found the article genuine and informative, and find the author the same, it ends up just being yet another apology - since AI itself still does not exist. Machine learning isn't AI. Nothing labeled AI in any field is actually an AI. The diction of this article (and most on the topic) is simply not accurate. It's effectively salesmanship for something not yet real.

Artificial Intelligence does not exist yet.
 
Oddly enough, in the early 1980s I *had* a company that wrote Prolog compilers, and so I had a front seat for the 1980s AI hype (and therefore the succeeding AI winter). Back then it was Expert Systems (and Lisp and Prolog -- C++ won, by the way); now it is neural networks (and TensorFlow and ... name your poison).

By examining the meta-level I think there are lessons to be learned.

The first example is, oddly enough, examples. When the echo chambers resonate with a few common examples, one must dig deeper and be suspicious. Back then the poster children were XCON, Mycin, GE locomotive failure diagnosis, the Wine Advisor (!), AMEX credit scoring and so forth. In each case the resulting system was found to be, even *if* effective, fragile -- move outside its closely prescribed constraints and it would rapidly collapse. For example (paraphrasing journalism of the time): 'This system which can diagnose blood toxicology can (therefore) easily be expanded soon to general medical inferencing.' ... Run away, now.

But, sadly, it seems these same sins are being repeated. How do we know it is AI? Well, it beats us (world experts) at a game! Deep Blue (chess, 1990s). AlphaGo (Go, 2015). In both cases the results are due to hardware improvements -- Deep Blue had chips specifically designed for positional chess analysis, whereas AlphaGo was relying on chip development designed to drastically improve (short) floating-point computation speed (from GPU evolution -- see below).

Indeed, a tightly restricted (e.g., rules-of-the-game) successful challenge is *precisely* what one would expect generations of development of computational speed to achieve. This is AI by Moore's law. Image recognition? A game (or, if you like, a very large lookup table computed by back-propagation). Language translation? Close but no cigar -- see, for example, Douglas Hofstadter's perceptive article: https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/

The moment one looks at 'AI' outside of constrained game rules, things get more interesting. Self-driving cars? 90% a game and hence compute-able. Traffic signals, road markings, four-way stops? All have rules and we are good to go. Bad weather? Some guy in the next lane who just does not 'seem right'? A parent with an upset child on the side of the road who might just ..... not in the book. That other 10% will kill lots of people.

Viewed from another perspective, these waves of AI hype are predictable. In the '70s it was simply the appearance of time sharing and the notion that you might have enough computational power at your disposal to start applying brute force to certain problems which, if you could appear to solve them, would be AI. In the '80s the development of custom hardware (for LISP) and efficient compilers (e.g., Horn clauses and the Warren Abstract Machine) for logic programming led to the Japanese 5th generation program and the aforementioned 1980s AI hype (which, in part, was led by the U.S.'s Ed Feigenbaum and friends recognizing a good source of funding -- 'We can't let the Japanese beat us' -- when they saw one). Yes, we now had symbolic programming languages, which was wonderful (the best language for writing a Prolog compiler was, you guessed it, Prolog). But waving a shiny tool and doing something truly groundbreaking with it are two different things. Sadly, the 1980s media failed to differentiate between the two.

The current 'Machine Learning' wave? First of all, call it what it is -- neural networks, an idea going back to at least the 1950s. Were there some important algorithmic advances? Absolutely. Were they the key factor in the currently claimed successes? Arguable. Neural networks (as currently understood) rely on two pillars: the ability to compute vast amounts of multiplications of real numbers (they have a decimal point in them), driven basically by back-propagation -- which came from GPU cards, where doing the same kinds of calculations was (fortuitously) required to put a video gun in someone's video hands -- and millions and millions of examples of the kinds of things you are trying to do. Want to train a neural network to recognize cats? This is a picture of a cat -- this isn't. Millions upon millions of times. So of course some of the missing sauce is the Internet, which has given us such 'tagged' data. Or, if you are in China (or London), the CCTVs providing the same information (scarily of people, not cats).
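
To see those two pillars in miniature, here is a toy sketch: a two-layer network trained by hand-written back-propagation on synthetic stand-in data (no actual cat photos involved). It is nothing more than floating-point multiplications plus labeled examples.

```python
# Toy back-propagation: floating-point multiplications over labeled examples.
# The "cat" data and labels below are random stand-ins, not real images.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                 # 1000 fake "images", 64 features each
y = (X[:, :32].sum(axis=1) > 0).astype(float)   # fake "cat / not cat" labels

W1 = rng.normal(scale=0.1, size=(64, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
lr = 0.1

for _ in range(500):
    # Forward pass: nothing but matrix multiplications and a squashing function.
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))         # predicted probability of "cat"

    # Backward pass (back-propagation): push the error back through the layers.
    err = (p.ravel() - y) / len(y)
    grad_W2 = h.T @ err[:, None]
    grad_h = err[:, None] @ W2.T * (1 - h**2)
    grad_W1 = X.T @ grad_h

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("training accuracy:", ((p.ravel() > 0.5) == y).mean())
```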

Now take any two year old child. This extensive process is replaced by: This is a cat..... Any questions ?

Wait up. How about AlphaZero? It learned chess and Go by playing itself -- without training against millions of other players. Surely this is intelligent? Well, no. Programs were solving chess endgames in the 1970s without recourse to millions of games against other players. Again, this is simply a combination of computational power (exponentially greater than in the 1970s) and, importantly, a totally closed, predictable game world in which it could happily play itself, having been told the immutable rules. For days, and millions (if not billions) of games. Move along, nothing to see here.

I won't even get into the 'cat dream as seen by Van Gogh in a starry night moment as AI'. Please.

Perhaps a relabeling approach would be better. Ditch AI and machine learning. Call it what the current technology actually is: pattern recognition. This is, I think, an accurate summary of the current state of the art. It neither over-promises nor under-delivers. It also realistically points out that things really have not qualitatively changed that much since Rosenblatt's Perceptron of the late 1950s.
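
And to show just how little the core idea has changed, here is a minimal sketch of a Rosenblatt-style perceptron (the late-1950s learning rule mentioned above), with synthetic data purely for illustration.

```python
# A minimal Rosenblatt-style perceptron: nudge the decision boundary whenever
# an example is misclassified. The data is made up for the sake of the sketch.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # a linearly separable pattern

w = np.zeros(2)
b = 0.0
for _ in range(20):                          # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:           # misclassified: update the weights
            w += yi * xi
            b += yi

pred = np.where(X @ w + b > 0, 1, -1)
print("perceptron accuracy:", (pred == y).mean())
```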

So, what would impress me? Well, here's a start (I was going to mention J.S. Bach, but that's for another screed...).

"Two people die and their souls are on a cloud drifting up to heaven. Their cloud passes a cloud with two Angels on it ... 'Ah, Angels' the souls cry. But the Angels were too polite to reply".

Go on, AI, laugh.
 
Drofelttil... You’re mostly correct, but you end up misrepresenting the matter. Yes, they’re just neural networks, but so are you. Even though the theory was around decades back, the hardware improvements have been a game changer. You are making the same error as someone looking at a slice of human brain under a microscope and imagining it clear that consciousness cannot arise from these cells. And yes, it’s ‘pattern recognition’, but again, that’s what your brain does. The detailed architecture of neural nets is a highly evolving field currently, and the field remains hardware limited (even Google’s research-based AI is still, by some measures, a couple of orders of magnitude below our brain’s processing ability), but it is obvious that in essence this is how our brain works, give or take clever propagations, state retentions, etc.
 
Sc000,

I'm not arguing (so much) that intelligence and pattern recognition are not related somehow (though some, like Roger Penrose, would argue there has to be more to it than a Turing machine can provide). My concern is that we are in another hype cycle, as we were in the '80s, driven by the *name* Artificial Intelligence, with the same ludicrous exaggerations (by purveyors and pundits alike). Back then Expert Systems were fragile; now neural networks are fragile (as the famous 'change one pixel and it is no longer a cat' experiments showed). Back then Expert Systems were going to replace experts; now neural networks are going to replace ... everyone?
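
(For the curious: the fragility can be demonstrated in a few lines. The sketch below uses the fast gradient sign method rather than the one-pixel attack itself, and assumes a trained Keras classifier `model`, an input `image` scaled to [0, 1], and its integer class `label` -- all placeholders, not defined here.)

```python
# Hedged sketch of adversarial fragility via the fast gradient sign method.
# `model`, `image`, and `label` are assumed placeholders, not defined here.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def adversarial_example(model, image, label, eps=0.01):
    image = tf.convert_to_tensor(image[None, ...])   # add a batch dimension
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_fn([label], model(image))
    grad = tape.gradient(loss, image)
    # A perturbation far too small for a human to notice can flip the label.
    return tf.clip_by_value(image + eps * tf.sign(grad), 0.0, 1.0)
```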

Expert systems were examples of what clever programmers could do with workstations. Neural networks are examples of what clever programmers can do with BIG processors and BIG data. Both deliver interesting technologies which just get incorporated naturally into what comes next (and eventually stop being 'special'). Both are overhyped to distraction mainly, I would argue, because of the name. Finally, I would argue that both actually *are* examples of very intelligent systems in action -- the engineers who created them. What the engineers have created -- less so.
 
The less AI advances, the better for us. Not because they will evolve into Terminators (they will, but not so soon), but because the more AI understands and analyzes, the more power we're giving to the rich bastards who finance and use that technology.

This is very similar to biology. We thought that biology would save us, provide better medical treatments, increase the quality of our lives, etc. But what is really happening is the opposite. The more the scientists understand the bio-chemical processes in our cells, the better health hazards they can produce, and hide them in ordinary products we consume. Especially when the hazard is produced by two independent components interacting (kinda like a binary poison).

Nowadays around 1 in 6 couples is infertile, and cancer has reached epidemic proportions. Almost all the new additives the food industry is putting into food cause infertility or cancer (or both). Knowledge is not used for our benefit.

Same goes for cellphones. Progress wasn't based on what people want, but on what Google, the NSA and similar organizations want. Never what the consumer wants.
 

I bet you're fun at parties...
 