DARPA taps Intel and Georgia Tech to pioneer a machine learning 'immune system'

Cal Jeffrey

In context: The proliferation of machine learning systems in everything from facial recognition to autonomous vehicles has come with the risk of attackers figuring out ways to deceive the algorithms. Simple techniques have already worked in test conditions, and researchers are interested in finding ways to mitigate these and other attacks.

The Defense Advanced Research Projects Agency (DARPA) has tapped Intel and Georgia Tech to head up research aimed at defending machine learning algorithms against adversarial deception attacks. Deception attacks are rare outside of laboratory testing but could cause significant problems in the wild.

For example, McAfee reported back in February that researchers tricked the Speed Assist system in a Tesla Model S into driving 50 mph over the speed limit by placing a two-inch strip of black electrical tape on a speed limit sign. There have been other instances where AI has been deceived by very crude means that almost anyone could pull off.
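The tape trick is the physical cousin of the digital adversarial example, where tiny pixel-level changes flip a model's prediction. As a rough illustration of how little perturbation it takes, here is a minimal sketch of the classic Fast Gradient Sign Method in PyTorch. The pretrained model and tensors are stand-ins for illustration; this is a generic textbook attack, not any specific system DARPA or Intel is studying.

```python
# Minimal FGSM (Fast Gradient Sign Method) sketch -- a generic
# illustration of the class of deception attack discussed above.
# Model and inputs are placeholders, not anyone's production system.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm_attack(image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.

    A small, often human-imperceptible nudge in the direction that
    most increases the loss is frequently enough to flip the
    prediction -- the digital analogue of tape on a speed-limit sign.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage (assuming `x` is a 1x3x224x224 image tensor and `y` its label):
# x_adv = fgsm_attack(x, y)
# print(model(x).argmax(1), model(x_adv).argmax(1))  # often differ
```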

DARPA recognizes that deception attacks could pose a threat to any system that uses machine learning and wants to be proactive in mitigating such attempts. So about a year ago, the agency instituted a program called GARD, short for Guaranteeing AI Robustness against Deception. Intel has agreed to be the primary contractor for the four-year GARD program in partnership with Georgia Tech.

"Intel and Georgia Tech are working together to advance the ecosystem's collective understanding of and ability to mitigate against AI and ML vulnerabilities," said Intel's Jason Martin, the principal engineer and investigator for the DARPA GARD program. "Through innovative research in coherence techniques, we are collaborating on an approach to enhance object detection and to improve the ability for AI and ML to respond to adversarial attacks."

The primary problem with current deception mitigation is that it is rule-based and static. If the rule is not broken, the deception can succeed. Since there is a nearly infinite number of ways deception can be pulled off, limited only by the attacker's imagination, a better system needs to be developed. Intel said that the initial phase of the program would focus on improving object detection by using spatial, temporal, and semantic coherence in both images and video.
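Intel has not published implementation details, but the intuition behind temporal coherence, at least, is easy to sketch: a real object should persist across consecutive video frames, while many adversarial artifacts flicker in and out. The filter below is a hypothetical consistency check written purely for illustration; the Detection type, the IoU threshold, and the matching rule are all assumptions, not the GARD design.

```python
# Hypothetical temporal-coherence filter: flag detections that appear
# or vanish between consecutive video frames, on the theory that real
# objects persist while many adversarial artifacts flicker.
# A sketch of the general idea only, not Intel's GARD implementation.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "speed_limit_35" (placeholder label scheme)
    box: tuple      # (x1, y1, x2, y2) in pixels
    score: float    # detector confidence

def iou(a, b):
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def temporally_coherent(prev_frame, curr_frame, min_iou=0.3):
    """Split current detections into those matched by label and
    overlap in the previous frame, and unmatched 'suspect' ones."""
    kept, suspect = [], []
    for det in curr_frame:
        matched = any(p.label == det.label and iou(p.box, det.box) >= min_iou
                      for p in prev_frame)
        (kept if matched else suspect).append(det)
    return kept, suspect
```

A real system would of course track objects over many frames and fuse spatial and semantic signals as well; the point here is only that cross-frame consistency gives a defender something dynamic to check, rather than a single static rule.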

Dr. Hava Siegelmann, DARPA's program manager for its Information Innovation Office, envisions a system that is not unlike the human immune system. You could call it a machine learning system within another machine learning system.

"The kind of broad scenario-based defense we're looking to generate can be seen, for example, in the immune system, which identifies attacks, wins, and remembers the attack to create a more effective response during future engagements," said Dr. Siegelmann. "We must ensure machine learning is safe and incapable of being deceived."


 
I'm reminded of the video game AI training being the equivalent of 200 years of play. Also, the Go AI:

"It started by studying a database of about 100,000 human matches, and then continued by playing against itself millions of times.

Perhaps the AI video training should continue for another 200 years or so to get rid of these human easily perceptible bugs. Equally, maybe a dev could actually learn to drive so they know where 80MPH applies on a map.

BINB
 
I don't know if you are old enough to remember WarGames, but in the movie, there was a military supercomputer called WOPR that taught itself strategies by playing games against itself -- specifically Tic-Tac-Toe and Global Thermonuclear War. At the time, I was into programming and had even done a little dabbling in AI (my final in Computer Science II in high school was a chatbot). I thought the concept of programming a computer to teach itself anything was interesting, and I started playing with the idea -- writing up some pseudocode and stuff -- but it was far beyond my skills, and I wrote it off as just sci-fi mumbo-jumbo. Now here we are, 37 years later, and it's a reality.

Now, where is my neural interface and teleportation machine?
 
Oh yes. Definitely old enough.
As for the neural interface... I read somewhere back in (I believe) the 90s that someone had successfully implanted a chip in a live cocker spaniel's brain that didn't get eaten by the body's defenses immediately. Unlike the previous external use in (danht danht daaah) the CIA's MKUltra, which apparently had control mechanism "field" applications. An article from 2008 reported:

Nicolelis also noted that the monkey was watching the video of the corresponding robot and seemed amused that the robot was mimicking its movements. "As he changed his speed or pattern, he was watching the robot change as well," he added. "He was pretty happy, yeah. Plus, he was getting fruits and Cheerios as a reward."

The Duke University professor said that it was very significant that the electrodes and chip worked so well a year after being surgically implanted in the monkey.

"There have been a lot of difficulties maintaining recordings with other technologies," said Nicolelis. "With this, we have completed a year, and that shows you can sustain viable implants without any harmful impact to the animal or the brain of the animal. That's a key issue for future patients."

Nicolelis said clinical trials on humans should begin within a few years.


Anyway...
 