Nov 2, 2017 | By Benedict

Researchers at MIT have carried out an investigation into “adversarial examples,” objects engineered to make AI vision systems see something entirely different from what is actually there. The researchers made a 3D printed turtle that fools Google’s Inception-v3 image classifier into seeing a rifle, even from multiple angles.

Take a look at the 3D printed turtle above, and you’ll be hard pressed to find anything particularly threatening about it. Perhaps the 3D printing filament used to make it was slightly toxic, but ultimately, it’s just a plastic turtle.

That’s not how Google’s Inception-v3 AI image classifier sees it though. Through the eyes of the artificial intelligence system, that innocent-looking 3D printed sea creature looks just like a rifle.

The 3D printed prop is what is known as an adversarial example—something designed to trick an artificial intelligence system into thinking it’s something else entirely. In this instance, MIT researchers engineered the plastic turtle to make Google’s AI see it as a dangerous weapon.

It’s obviously quite funny on some level: who knows how many millions of dollars are being pumped into image classification systems, yet some still think a plastic toy is a rifle. It’s the same impressive yet amusing quality that made those nightmarish Google DeepDream pictures so mesmerizing.

But adversarial objects—or adversarial images in the 2D world—are actually highly significant, and potentially very troublesome.

AI neural network systems like Google’s Inception-v3 are, of course, incredibly capable. But they rely on complex learned patterns, not common sense. And if you know exactly how a system reaches its decisions, you can potentially exploit it.

Because Google’s Inception-v3 is open source, the MIT researchers (Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok, known collectively as “labsix”) were in a perfect position to exploit it. With full access to the network, they could work out what pushes it toward a “rifle” classification and gradually bake those characteristics into something not very rifle-like at all: a turtle.
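
To get a feel for how a white-box attack like this works in general, here is a minimal, illustrative sketch in PyTorch. It assumes a recent torchvision build with the publicly available Inception v3 weights, and it simply nudges an image toward a chosen target label by following the network’s own gradients; it is not labsix’s actual code, just the basic idea of exploiting an open model.

```python
# Illustrative white-box attack sketch (not labsix's code): use the open
# Inception v3 weights and follow the network's gradients to push an image
# toward a chosen target class.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

# Wrap the classifier so it accepts images in [0, 1] and applies its own
# ImageNet normalization internally.
backbone = models.inception_v3(weights="IMAGENET1K_V1").eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
model = nn.Sequential(normalize, backbone).eval()

def nudge_toward_label(image, target_class, step=1 / 255, iterations=50):
    """image: (1, 3, 299, 299) tensor in [0, 1]; target_class: ImageNet index."""
    adv = image.clone().detach().requires_grad_(True)
    target = torch.tensor([target_class])
    for _ in range(iterations):
        loss = F.cross_entropy(model(adv), target)  # distance from the target label
        loss.backward()
        with torch.no_grad():
            adv -= step * adv.grad.sign()           # small step toward the target class
            adv.clamp_(0, 1)                        # keep it a valid image
        adv.grad = None
    return adv.detach()

# Usage sketch: pass an image tensor and the ImageNet index of the label you
# want the network to see (e.g. the index for "rifle" in your label map).
```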

The MIT researchers aren’t the first to create adversarial objects, of course. People can use certain tricks to fool facial recognition systems into misidentifying a person—something that border security services, for example, are currently trying to curtail.

For most adversarial objects or images, however, the “trick” only works from certain angles. You might fool a neural network into thinking a bag of chips is a face from a certain angle, but move it around slightly and the AI will likely correct its mistake.
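
One quick way to see that fragility, reusing the wrapped model from the sketch above, is to check the top prediction for the same image under a handful of rotations; for a typical single-view adversarial image, the fooled label tends to vanish as soon as the angle changes.

```python
# Check how stable an image's top-1 label is as the viewpoint rotates.
import torch
import torchvision.transforms.functional as TF

def top1_under_rotations(model, image, angles=(0, 10, 20, 30)):
    """image: (1, 3, 299, 299) tensor in [0, 1]; returns {angle: predicted class index}."""
    preds = {}
    with torch.no_grad():
        for angle in angles:
            rotated = TF.rotate(image, angle)       # simulate a change of viewpoint
            preds[angle] = model(rotated).argmax(dim=1).item()
    return preds
```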

But the 3D printed turtle and the MIT researchers’ other 3D examples are different. They fool the Google AI from multiple angles rather than just one, which makes them far more troublesome than your typical adversarial object.

In addition to the turtle that registers as a rifle, labsix also 3D printed a baseball that gets classified as an espresso. The team also made digital models of a barrel that gets interpreted as a guillotine, a baseball that appears as a green lizard, a dog that the AI takes for a bittern, and several other examples.

The researchers could easily produce more objects like these after developing an algorithm for “reliably producing physical 3D objects that are adversarial from every viewpoint,” which they say works with close to 100 percent reliability. They call this algorithm “Expectation Over Transformation” (EOT).
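
The core idea behind EOT is to optimize the adversarial input against the average loss over many randomly transformed views (rotation, framing, lighting) rather than against a single view, so the result stays adversarial as the viewpoint changes. The sketch below illustrates that idea in 2D; the specific transformations and parameters are illustrative assumptions, and the real pipeline also models 3D rendering, texture mapping, and printing, which a 2D sketch leaves out.

```python
# Illustrative 2D sketch of the Expectation Over Transformation idea: one update
# step that averages the loss over several randomly transformed views of the image.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

random_view = T.Compose([
    T.RandomRotation(30),                         # random viewpoint change
    T.RandomResizedCrop(299, scale=(0.8, 1.0)),   # random distance / framing
    T.ColorJitter(brightness=0.2),                # random lighting
])

def eot_step(model, adv, target_class, num_views=10, step=1 / 255):
    """One EOT update on `adv`, a (1, 3, 299, 299) image tensor in [0, 1]."""
    adv = adv.clone().detach().requires_grad_(True)
    target = torch.tensor([target_class])
    loss = 0.0
    for _ in range(num_views):
        view = random_view(adv)                   # a fresh random view each sample
        loss = loss + F.cross_entropy(model(view), target)
    (loss / num_views).backward()                 # gradient of the *average* loss
    with torch.no_grad():
        adv -= step * adv.grad.sign()             # push all views toward the target
        adv.clamp_(0, 1)
    return adv.detach()
```

In the paper itself, the same principle is applied to the texture of a 3D model, with the random views produced by a renderer under varying pose and lighting rather than by 2D image transforms.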

In a sense, the team is pleased with its achievement, but it is also worried by how easy the attack was to pull off.

“[You] shouldn’t be able to take an image, slightly tweak the pixels, and completely confuse the network,” Athalye told Quartz. “Neural networks blow all previous techniques out of the water in terms of performance, but given the existence of these adversarial examples, it shows we really don’t understand what’s going on.”

Of course, this isn’t just a bit of fun for the MIT researchers. They believe that their research proves beyond doubt that “adversarial examples are a practical concern for real-world systems.”

If, say, hackers were able to work out the inner workings of a closed AI system, such as the “eyes” of a self-driving car, they might be able to cause real damage by altering real-world objects so that the car misreads its surroundings and behaves erratically.

This might all seem like a remote possibility—after all, Google’s open Inception-v3 isn’t used for any critical applications—but the MIT research certainly makes a strong point about the fallibility of visual AI systems.

The team even plans to go further, creating adversarial objects that can fool “black box” AI systems whose inner workings are hidden from the attacker.

The MIT group’s research paper, “Synthesizing Robust Adversarial Examples,” will be presented at ICLR 2018, the sixth International Conference on Learning Representations. It can be read here.