Google's AI made its own AI, and it's better than anything ever created by humans

William Gayde


The Google Brain team of researchers has been hard at work studying artificial intelligence systems. Back in May they developed AutoML, an AI system that can in turn generate its own subsequent AIs. Their next big task was to benchmark these automatically generated AIs against more traditional human-made ones.

AutoML uses a technique called reinforcement learning, together with neural networks, to develop the daughter AIs. One such child AI, NASNet, was developed and trained to recognize objects in real-time video streams. When benchmarked against industry-standard validation sets, it was found to be 82.7% accurate at recognizing known objects. That is 1.2% higher than anything seen before, and the system is also 4% more efficient than the previous best algorithms.
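The search loop behind this kind of system can be pictured roughly as follows. This is a toy sketch, not Google's implementation: plain random search stands in for the reinforcement-learning controller, the search space is hypothetical, and `evaluate` is a made-up stand-in for training a child network and measuring its validation accuracy.

```python
import random

# Toy sketch of automated architecture search in the spirit of AutoML.
# Hypothetical search space: depth and filter count of a child network.
SEARCH_SPACE = {"layers": [2, 4, 8], "filters": [16, 32, 64]}

def evaluate(arch):
    # Stand-in for training the child network and measuring validation
    # accuracy; a made-up score that rewards deeper, wider networks.
    return 0.5 + 0.02 * arch["layers"] + 0.002 * arch["filters"]

def search(steps=50, seed=0):
    rng = random.Random(seed)
    best_arch, best_score = None, -1.0
    for _ in range(steps):
        # Sample a candidate child architecture (the real system uses an
        # RNN controller updated by reinforcement learning instead).
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(arch)  # reward signal for the controller
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best_arch, best_score = search()
```

Each loop iteration corresponds to proposing and scoring one child network; in the real system every evaluation is a full, expensive training run, which is why the approach needs so much compute.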

These efficient and accurate computer vision algorithms are becoming more valuable as the technology advances. They could allow self-driving cars to better avoid obstacles or help visually impaired people regain some sight.

Creating machine learning and artificial intelligence systems requires massive datasets and powerful GPU arrays to train the networks. By automating their creation, AutoML can help bring ML and AI to a wider audience instead of just computer scientists.

There are still privacy concerns, as well as worries that inherent biases in the system will be passed down into new generations of AIs. This is especially important for applications like facial recognition and security systems. That being said, this is still great news and a big step forward for the AI community.


 
There have been rumblings about the advancement of AI, but this is one of the milestones where things actually start to get worrying. Human-built AIs are one thing (and currently nothing like the AIs from sci-fi, yet), but artificially created AIs, once given the ability to iterate as well, are a game changer. I get that currently these are quite distinct systems, as in, the "AI-creating AI" created a system for visual recognition, and not one that can itself create further AIs, but how long before these types of systems integrate with one another and progress starts to accelerate at a rate humans can't keep up with?

Scary times!
 
While an advance, let's put this into context. This 'computer' was made with the specific aim of being developed and trained to recognize objects in real-time video streams. Tested only against an industry standard validation set, the best it could do with known objects was 82.7%. This is a long way from sentience, and/or cyborgs with it. If tested against unknown objects, how would it fare? For example, say it can recognize a motorcycle. How about one with a sidecar, a trike, a Harley vs. a sportsbike, a drag bike, a bicycle? I suspect an awful lot lower than 83%. Skynet seems an awfully long way away. We do know that this can change quickly, or, like some fields, turn out to be intractable.
What's worrying from my perspective is the concentration on human and facial recognition. We know the USA and Canada already have these facial databases, and there is a lot of potential for abuse.
 
If you're going to get the gun recognition software working on your ED209 unit you have to start somewhere.....

You have 20 seconds to comply.....
 
While an advance, let's put this into context. This 'computer' was made with the specific aim of being developed and trained to recognize objects in real-time video streams. Tested only against an industry standard validation set, the best it could do with known objects was 82.7%. This is a long way from sentience, and/or cyborgs with it. If tested against unknown objects, how would it fare? For example, say it can recognize a motorcycle. How about one with a sidecar, a trike, a Harley vs. a sportsbike, a drag bike, a bicycle? I suspect an awful lot lower than 83%. Skynet seems an awfully long way away. We do know that this can change quickly, or, like some fields, turn out to be intractable.
What's worrying from my perspective is the concentration on human and facial recognition. We know the USA and Canada already have these facial databases, and there is a lot of potential for abuse.

To add to this:
Calling this an "AI" is extremely generous. At best, it is a narrow intelligence (good at one thing only). More likely, it is just a computer vision algorithm... but hey, "AI" gets more clicks than "computer vision".
 
I've read that meaningful vision for computers is extremely difficult. Humans can, in an instant, take in a scene and parse out what is important, what is not, and what is a threat or a treat, whilst for computers this is just starting out, as with the article's example.
 
While an advance, let's put this into context. This 'computer' was made with the specific aim of being developed and trained to recognize objects in real-time video streams. Tested only against an industry standard validation set, the best it could do with known objects was 82.7%. This is a long way from sentience, and/or cyborgs with it. If tested against unknown objects, how would it fare? For example, say it can recognize a motorcycle. How about one with a sidecar, a trike, a Harley vs. a sportsbike, a drag bike, a bicycle? I suspect an awful lot lower than 83%. Skynet seems an awfully long way away. We do know that this can change quickly, or, like some fields, turn out to be intractable.
What's worrying from my perspective is the concentration on human and facial recognition. We know the USA and Canada already have these facial databases, and there is a lot of potential for abuse.

To add to this:
Calling this an "AI" is extremely generous. At best, it is a narrow intelligence (good at one thing only). More likely, it is just a computer vision algorithm... but hey, "AI" gets more clicks than "computer vision".
And to add even more:
1.2 percent better accuracy and 4 percent better efficiency are, IMO, relatively meaningless statistics that could fall within what is known as "the margin of error," as with voter polls. While they appear to be improvements, there is no context. How many tries did it take the AI to achieve this? How many different sets of data did they use to analyze the algorithm?

Marketing loves statistics because the average person does not really know what they mean; that makes them easy to spin. To me, this seems like it has some spin in it. AI is a competitive field right now, and whatever team has the best spin is likely to appear the winner when, in reality, the achievement may not be all that meaningful.
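For what it's worth, the sampling-error part of this is checkable with a back-of-the-envelope calculation. The sketch below assumes a 50,000-image validation set (the size of ImageNet's, a common benchmark; the article doesn't say which set was used) and treats each prediction as an independent Bernoulli trial:

```python
import math

def accuracy_margin(p, n, z=1.96):
    """Half-width of a ~95% normal-approximation confidence interval
    for an accuracy p measured on n independent examples."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed: 82.7% accuracy measured on a 50,000-image validation set.
margin = accuracy_margin(0.827, 50_000)
```

Under these assumptions the margin works out to roughly ±0.33 percentage points, smaller than the reported 1.2-point gain, so sampling error alone wouldn't explain the improvement; the other questions (number of tries, choice of datasets) still apply.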
While an advance, let's put this into context. This 'computer' was made with the specific aim of being developed and trained to recognize objects in real-time video streams. Tested only against an industry standard validation set, the best it could do with known objects was 82.7%. This is a long way from sentience, and/or cyborgs with it. If tested against unknown objects, how would it fare? For example, say it can recognize a motorcycle. How about one with a sidecar, a trike, a Harley vs. a sportsbike, a drag bike, a bicycle? I suspect an awful lot lower than 83%. Skynet seems an awfully long way away. We do know that this can change quickly, or, like some fields, turn out to be intractable.
What's worrying from my perspective is the concentration on human and facial recognition. We know the USA and Canada already have these facial databases, and there is a lot of potential for abuse.
Agreed. This is a universe away from anything even remotely resembling sentience regarding both the AI that generated the algorithm and the product algorithm.
I've read that meaningful vision for computers is extremely difficult. Humans can, in an instant, take in a scene and parse out what is important, what is not, and what is a threat or a treat, whilst for computers this is just starting out, as with the article's example.
Computer vision is extremely difficult because of the amount of data that needs to be processed, and the possibility that whatever is viewed might be any of millions of different objects.

I have heard that one reason humans are so good at object recognition is that the brain tends to apply pattern recognition to objects in view and attempts to match the patterns in view to known objects. This means that the brain reduces the data that it needs to process and by doing so, becomes more efficient.
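That data-reduction idea can be illustrated with a toy example of my own (not from the thread): comparing a compact summary of an image is far cheaper than comparing every pixel, at the cost of discarding detail.

```python
# Illustrative sketch: matching a crude "signature" of an image instead
# of its raw pixels, as a loose analogy for pattern-based recognition.
def pixel_match(img_a, img_b):
    # Naive comparison: fraction of pixels that agree, touching
    # every pixel of both images.
    return sum(a == b for a, b in zip(img_a, img_b)) / len(img_a)

def signature(img, buckets=8):
    # Compact pattern summary: a coarse brightness histogram.
    hist = [0] * buckets
    for px in img:
        hist[min(px * buckets // 256, buckets - 1)] += 1
    return hist

# Two identical 10,000-"pixel" synthetic images (brightness values 0-255).
img1 = [(i * 37) % 256 for i in range(10_000)]
img2 = list(img1)

# Comparing two 8-number signatures instead of 10,000 pixels each.
same = signature(img1) == signature(img2)
```

Once signatures are precomputed, candidate matches can be screened with an 8-element comparison rather than a full pixel-by-pixel pass, which is the efficiency gain the pattern-matching analogy points at.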
 
I'll break the mold and say this is pretty cool, and I can't wait to see how much better AI in the real world will become.
 
Well, humans created the first AI in the first place. Humans are still more intelligent.

Just that computers do things faster. Much faster.
 
There have been rumblings about the advancement of AI, but this is one of the milestones where things actually start to get worrying. Human-built AIs are one thing (and currently nothing like the AIs from sci-fi, yet), but artificially created AIs, once given the ability to iterate as well, are a game changer. I get that currently these are quite distinct systems, as in, the "AI-creating AI" created a system for visual recognition, and not one that can itself create further AIs, but how long before these types of systems integrate with one another and progress starts to accelerate at a rate humans can't keep up with?

Scary times!

Don't worry, we aren't talking about Skynet here... this is an AI the same way a Micro Machine is a car. All the new AI can do is recognize patterns in a video stream, nothing else, and the "AI" that made this new AI is a piece of software designed to find optimizations in the way pattern recognition algorithms work, nothing else.
 
Don't worry, we aren't talking about Skynet here... this is an AI the same way a Micro Machine is a car. All the new AI can do is recognize patterns in a video stream, nothing else, and the "AI" that made this new AI is a piece of software designed to find optimizations in the way pattern recognition algorithms work, nothing else.

Yeah, as I said, I know we're not talking about super advanced systems that handle multiple functions, but even as a proof of concept this seems like a bit of a milestone.
 
There have been rumblings about the advancement of AI, but this is one of the milestones where things actually start to get worrying. Human-built AIs are one thing (and currently nothing like the AIs from sci-fi, yet), but artificially created AIs, once given the ability to iterate as well, are a game changer. I get that currently these are quite distinct systems, as in, the "AI-creating AI" created a system for visual recognition, and not one that can itself create further AIs, but how long before these types of systems integrate with one another and progress starts to accelerate at a rate humans can't keep up with?

Scary times!
Interesting times. ;)
 