
Hacker to Vandal: How Researchers Tricked Autonomous Vehicles

While concern about hackers manipulating and exploiting the systems of autonomous vehicles is extensive, the dawn of the age of driverless cars could in fact be stifled by one of civilization’s oldest art forms: graffiti. These vandals won’t be teenagers armed with cans of spray paint and Sharpie markers, however, but sophisticated attackers with deep knowledge of machine-learning systems, using nothing more than a household printer to create stickers to place on road signs.

A team of cybersecurity researchers from four esteemed universities, Washington and UC Berkeley among them, has shown that machine-learning vehicle visual systems can be tricked into misidentifying road signs with simple, at-home printed stickers. In their paper, “Robust Physical-World Attacks on Machine Learning Models,” the researchers present an attack algorithm that caused an autonomous vehicle’s visual system to mistake a STOP sign for a Speed Limit sign. The potential detrimental consequences of speeding up instead of coming to a stop are numerous and obvious. These attacks are also dangerous because they are specifically designed to mimic vandalism and reduce the likelihood of detection by a casual observer.

Disguised as graffiti, the camouflaged STOP sign was misclassified as a Speed Limit sign 66.67% of the time. (Photo: Robust Physical-World Attacks on Machine Learning – University of Washington)

Understanding how this becomes a problem requires knowing a little more about how these visual systems work. The system typically consists of an object detector and a classifier. The detector spots pedestrians, lights and signs, while the classifier determines what they are and what they are saying. In these particular attacks the vandal is presumed to have extensive knowledge of how the classifying system works, which is used to confuse it with malicious intent.
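Below is a minimal sketch of that two-stage pipeline, intended only as an illustration. The Detector and SignClassifier classes here are placeholder names assumed for clarity, not components of any particular vehicle’s software or of the published study.

```python
# A minimal sketch of the detect-then-classify pipeline described above.
# Detector and SignClassifier are placeholders for real perception models.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Detection:
    box: Tuple[int, int, int, int]  # (x, y, width, height) in image pixels
    crop: object                    # cropped image region handed to the classifier


class Detector:
    """Stage 1: finds candidate objects (signs, lights, pedestrians) in a frame."""
    def detect(self, frame) -> List[Detection]:
        raise NotImplementedError  # stand-in for a real detection network


class SignClassifier:
    """Stage 2: assigns a label (e.g. 'stop', 'speed_limit_45') to each crop."""
    def classify(self, crop) -> Tuple[str, float]:
        raise NotImplementedError  # stand-in for a real classification network


def perceive(frame, detector: Detector, classifier: SignClassifier):
    """Run the full pipeline on one camera frame and return labeled detections."""
    results = []
    for det in detector.detect(frame):
        label, confidence = classifier.classify(det.crop)
        results.append((det.box, label, confidence))
    return results
```

The attacks described in this article target the second stage: the sign is still detected, but the classifier is steered toward the wrong label.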

The study outlines three different kinds of attacks with varying effects on classification: subtle, camouflage graffiti, and camouflage art. These adversarial perturbations, or disturbances in the way the system processes the information, can be very dangerous. The attacks can be carried out in the following ways (a simplified sketch of how such perturbations are computed appears after the list):

  • Poster-printing attacks: the attacker prints an actual-size rendering of the targeted sign on paper and overlays it onto the existing sign.
  • Sticker attacks: the attacker prints only the perturbations and sticks them onto the sign in a specific yet inconspicuous manner, so that it is difficult for a casual observer to identify but is misidentified by the vehicle.
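
As a rough illustration of what “perturbations” means here, the sketch below uses the classic fast gradient sign method (FGSM) to nudge an image’s pixels toward a wrong classification. This is not the researchers’ own algorithm, which is designed to survive real-world printing, lighting and viewing conditions; it only shows the basic principle that a small, gradient-guided pixel change can flip a classifier’s output. The `model` argument stands in for any differentiable image classifier.

```python
# A hedged illustration of an adversarial perturbation via FGSM.
# Not the paper's attack; it only demonstrates the underlying principle.

import torch
import torch.nn.functional as F


def fgsm_perturbation(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    image:      tensor of shape (1, C, H, W), values in [0, 1]
    true_label: tensor of shape (1,) with the correct class index
    epsilon:    maximum per-pixel change (kept small so the edit stays subtle)
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()

    # Step each pixel slightly in the direction that increases the loss,
    # then clamp back into valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```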

The subtle poster-printing attacks are extremely dangerous, with a 100% misclassification rate, and are nearly undetectable to a casual observer. The problem is that they are difficult to execute perfectly under varying conditions such as lighting and weather. The two kinds of camouflage sticker attacks, graffiti and art, are also dangerous because they could be dismissed as vandalism by a human driver but are perturbed enough to cause a misclassification. The camouflage art sticker attack resulted in a 100% misclassification rate, while the camouflage graffiti attack was misclassified only 66.67% of the time.

The camouflage art sticker attack caused the STOP sign to be misclassified as a Speed Limit sign 100% of the time. (Photo: Robust Physical-World Attacks on Machine Learning – University of Washington)

Both the camouflaged art sticker attack and the subtle poster printed overlay had misclassification rates of 100%. (Photo: Robust Physical-World Attacks on Machine Learning – University of Washington)

“Attacks like this are definitely a cause for concern in the self-driving-vehicle community,” said Tarek El-Gaaly, senior research scientist at Voyage, an autonomous-vehicle startup. “Their impact on autonomous driving systems has yet to be ascertained, but over time and with advancements in technology, they could become easier to replicate and adapt for malicious use.”

The dangers of these types of attacks are very real and definitely a cause for concern. Producers of cars with plans to be fully autonomous, such as Tesla, will have to find ways to defend against them. One way, as suggested by El-Gaaly, is for the vehicle to use contextual information to foil a potential adversarial perturbation. A STOP sign in the middle of a highway, or a “70 MPH” Speed Limit sign in a densely populated urban area, should be recognized by the car as something that doesn’t make sense.
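
A minimal sketch of that kind of sanity check might look like the following; the `road_type` input is an assumption, and in a real vehicle it would come from map data or other sensors rather than the camera alone.

```python
# A minimal sketch of the contextual sanity check El-Gaaly suggests.
# `road_type` is assumed to come from map data or other sensors.

def sign_is_plausible(sign_label: str, road_type: str) -> bool:
    """Reject sign classifications that contradict the driving context."""
    if sign_label == "stop" and road_type == "highway":
        # A stop sign in the middle of a highway is almost certainly a misread.
        return False
    if sign_label == "speed_limit_70" and road_type == "urban":
        # A 70 MPH limit in a densely populated urban area doesn't make sense.
        return False
    return True


# Example: a STOP sign "detected" on a highway gets flagged instead of obeyed.
print(sign_is_plausible("stop", road_type="highway"))  # False
```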

All things considered, the threat of hackers hell-bent on causing mayhem with such attacks is real and dangerous. It’s our hope that the experts programming these vehicles figure out ways to foil their plots.

