Artificial intelligence has been woven into facial recognition, and privacy advocates have stepped up to oppose the excessive monitoring of citizens without their consent.
Researchers at Northeastern University have created an adversarial example that works even when printed onto moving fabric that you can wear. Surprisingly, the device is not the fancy gadget or piece of top-notch electrical equipment you might expect.
What if you could bypass facial recognition systems and become invisible to artificial intelligence cameras? It sounds like something that exists only in the movies – but not anymore. In William Gibson’s novel Zero History, one character wears the ugliest T-shirt ever made – a strange-looking garment that magically renders the wearer invisible to CCTV.
As countries throughout the world race to deploy artificial intelligence surveillance systems to trace, track, and monitor citizens, we might eventually be wearing ugly T-shirts of our own. Researchers from Northeastern University, MIT, and IBM have designed a top printed with a kaleidoscopic patch of colour that renders the wearer undetectable to artificial intelligence.
It is part of a growing class of “adversarial examples” – physical objects designed to counteract the creep of digital surveillance. Xue Lin, an assistant professor of electrical and computer engineering at Northeastern and co-author of a paper on the work, explained:
“The adversarial T-shirt works on the neural networks used for object detection.”
Typically, a neural network recognizes an object or person in an image, draws a virtual “bounding box” around it, and assigns it a label. By probing where a neural network draws its decision boundaries, Lin and her colleagues worked backward to build a design that confuses the network’s classification and labeling.
The team trained its attack against YOLOv2 and Faster R-CNN, two widely used object-detection neural networks, and identified the areas of the body where adding pixel noise confuses the AI – in effect making the wearer invisible.
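The mechanism above can be illustrated with a toy sketch. Real detectors such as YOLOv2 and Faster R-CNN are deep networks, but their output has the shape described: boxes, labels, and confidence scores, with low-confidence detections discarded. An adversarial pattern works by dragging the confidence for the “person” box below that threshold. Everything here (the threshold value, the mock detections) is hypothetical.

```python
# Illustrative sketch only: mock detections in the (label, confidence, box)
# format a detector emits, plus the confidence thresholding an adversarial
# pattern exploits. Not any real detector's code.

CONF_THRESHOLD = 0.5  # detections scoring below this are discarded

def filter_detections(raw_detections, threshold=CONF_THRESHOLD):
    """Keep only detections whose confidence clears the threshold.

    Each detection is (label, confidence, (x1, y1, x2, y2)).
    """
    return [d for d in raw_detections if d[1] >= threshold]

# A clean frame: the detector is confident there is a person.
clean = [("person", 0.92, (120, 40, 260, 400))]

# The same frame with an adversarial patch: the pattern pushes the
# "person" confidence under the threshold, so the wearer vanishes
# from the detector's output.
attacked = [("person", 0.31, (120, 40, 260, 400))]

print(filter_detections(clean))     # the person survives thresholding
print(filter_detections(attacked))  # empty list: the wearer is "invisible"
```

The attack never removes the person from the image, of course – it only manipulates the score the network assigns, which is why the same pixels can fool one detector and not another.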
This is not the first time objects have been designed to trick artificial intelligence. In 2016, researchers from Carnegie Mellon University and the University of North Carolina at Chapel Hill developed ingenious glasses that could dupe facial recognition technology into misclassifying the wearer.
In 2017, United States researchers deceived neural networks into reading a stop sign as a 45mph speed-limit sign simply by adding a few subtle graffiti-like marks. All of these earlier adversarial attacks, however, were designed for static objects.
Doing the same thing for video surveillance is far trickier. Battista Biggio, an assistant professor at the University of Cagliari and creator of the first adversarial example to fool spam-email detection, explained:
“For the physical attacks, the real challenge is to remain undetected during the whole video duration. When detection is running in every frame, remaining consistently undetected is much harder.”
Unlike stop signs, T-shirts wrinkle and crease as the wearer moves, and the team had to account for that. The researchers are the first to create an adversarial example designed to be printed onto moving material. They used a ‘transformer’ – a method for measuring how a T-shirt moves – and then mapped those deformations onto the design.
To do this, the researchers recorded a person walking while wearing a checkerboard pattern, then tracked the corners of each of the board’s squares to map precisely how the fabric wrinkles and moves as the person walks.
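The geometric idea behind the corner tracking can be sketched simply: once the four corners of a checkerboard square have been tracked in a frame, any point of the flat printed design can be mapped to its position on the deformed fabric by interpolating between those corners. The paper’s actual warp model is more sophisticated than this bilinear sketch, and the function and coordinates below are illustrative assumptions.

```python
# Hedged sketch: map a point (u, v) in the flat design, with coordinates
# in [0, 1] x [0, 1], into the quadrilateral formed by one checkerboard
# square's four tracked corners in a video frame. Bilinear interpolation
# stands in for the paper's more elaborate "transformer" warp.

def warp_point(u, v, corners):
    """corners = (top_left, top_right, bottom_right, bottom_left),
    each an (x, y) pixel position observed in the frame."""
    tl, tr, br, bl = corners
    # Interpolate along the top and bottom edges, then between them.
    top = (tl[0] + u * (tr[0] - tl[0]), tl[1] + u * (tr[1] - tl[1]))
    bot = (bl[0] + u * (br[0] - bl[0]), bl[1] + u * (br[1] - bl[1]))
    return (top[0] + v * (bot[0] - top[0]), top[1] + v * (bot[1] - top[1]))

# An undeformed square passes design points straight through...
flat = ((0, 0), (10, 0), (10, 10), (0, 10))
print(warp_point(0.5, 0.5, flat))  # (5.0, 5.0)

# ...while a wrinkled square drags the same design point with the fabric.
wrinkled = ((0, 0), (9, 1), (11, 12), (-1, 9))
print(warp_point(0.5, 0.5, wrinkled))  # (4.75, 5.5)
```

Tracking every square’s corners over time gives a frame-by-frame deformation field, which is what lets the adversarial pattern stay effective while the shirt creases and moves.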
That technique raised the wearer’s ability to dodge detection from 27% to 63% against YOLOv2, and from 11% to 52% against Faster R-CNN. Even so, Lin thinks it unlikely that we will see the T-shirts worn widely in the real world. She explained:
“We still have difficulties in making it work in the real world because there’s that strong assumption that we know everything about the detection algorithm. It’s not perfect, so there may be problems here or there.”
The researchers are not trying to help the general public evade artificial intelligence surveillance. Rather, their primary goal is to uncover vulnerabilities in neural networks so that the companies deploying them can fix the flaws.
“In the future, hopefully, we can fix these problems, so the deep learning systems can’t be tricked.”
How To Prevent Artificial Intelligence Surveillance From Seeing Us
AI wins one game at a time, and it can rely on a neural net to do so. An artificial neural network resembles the neurons in a brain rather than a team or a community. A neural network (NN) comprises an input layer and an output layer, with hidden layers in between whose units transform the input into the output.
These are tools for finding patterns too numerous and too complex for programmers to extract by hand, and for training machines to recognize them. Artificial intelligence, in other words, is a pattern-seeker rather than a technology scheming to enslave humanity. It learns selfishly, consuming data to win alone.
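The layer structure described above – inputs flowing through hidden units into an output – can be sketched in a few lines. The weights below are invented for illustration; a real network learns them from data rather than having them written by hand.

```python
import math

# Minimal sketch of a neural network forward pass: an input layer, one
# hidden layer of units, and a single output. Weights are made-up
# illustrative values, not learned parameters.

def sigmoid(x):
    """Squash a value into (0, 1), a common unit activation."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """Each hidden unit transforms the inputs; the output layer then
    combines the hidden activations into a single prediction."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs, two hidden units, one output in (0, 1).
hidden_w = [[0.5, -0.2], [0.8, 0.1]]
output_w = [1.0, -1.0]
print(forward([1.0, 0.0], hidden_w, output_w))
```

An object detector is the same idea scaled up enormously – millions of learned weights instead of six – which is precisely why its behaviour can be probed and exploited by adversarial patterns.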
That solitary, single-minded purpose is why it is relatively easy to confuse the AI surveillance networks monitoring government facilities, hospitals, universities, schools, sporting arenas, casinos, and entire countries such as China and the United Kingdom.
What can we do to ensure that AI does not see us?
Cover The Bridge Of Your Nose
The most advanced artificial intelligence programs use the bridge of the nose as a major facial landmark, and it has been central to facial recognition since the technology was unveiled in 1964. AI-Writer explains:
“Facial recognition software was Woody Bledsoe’s pioneering invention, developed by him, his wife, and their two children. The project was financed by opaque secret services – the original concept was reportedly developed jointly with the US National Security Agency (NSA) and the United States Department of Homeland Security (DHS) – which meant that little of the family’s work was ever published. It gained very little traction and publicity.”
The technology catalogs and recognizes human faces against thousands of available images, typically stored in a database of nearly 10 million images held by the National Security Agency (NSA). The software’s algorithm is efficient enough that a face can be compared against the stored images within 15 seconds.
“Years later, Bledsoe’s work was continued by the Stanford Research Institute, which continually questioned the technology, surpassing it almost every time. In 2006, the detection algorithm was 100 times more accurate than in 1995 and was even able to distinguish twins. The main advantage of incorporating the 3D technique is that, unlike other detection systems, changes in lighting do not affect the performance.”
When 3D technology hit the market, technicians used it to capture a person’s height and the dimensions and shape of their face and body. The technology captures the face from multiple angles, using tracking cameras and sensors to build a full profile view. But making yourself invisible to AI tends to make you conspicuous to humans.
Use CV Dazzle
CV Dazzle is an anti-facial-recognition makeup style, originally created to confuse OpenCV’s Haar cascade face-detection algorithm. The open-source project is still in development, with the aim of defeating haar, dlib, ssd, and yolo detectors as well.
CV Dazzle was developed by Adam Harvey, using accessories such as face paint, gems, and creative hairstyling to hide the area above the nose and between the eyes. CV Dazzle tries to:
- Create asymmetry – facial recognition algorithms are programmed to seek symmetry between the left and right sides of the face, so creating asymmetry reduces the chance of detection. You can cover one side with a feather or a piece of stylishly cut hair.
- Use tonal inverses – algorithms assess skin tone and texture to locate the face region, but they rely heavily on assumptions about what facial features look like, which makes the system easy to trick. Use makeup and hair colors that contrast with your skin tone, or apply makeup in unusual tones such as emerald and teal – dark colors on light skin and vice versa.
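The symmetry cue behind the first tactic above can be illustrated with a toy score: compare a tiny grayscale grid of pixel values against its left-right mirror. Covering one side of the face – with hair, a feather, or bright makeup – drives the score up, which is the kind of disruption CV Dazzle aims for. This is purely an illustrative sketch, not any detector’s actual computation.

```python
# Toy illustration of the left/right symmetry cue face detectors lean on.
# The "image" is a small grid of grayscale values (0-255); the score is
# the mean absolute difference between the image and its mirror image.

def asymmetry_score(image):
    """0.0 for a perfectly mirror-symmetric image; larger when one side
    differs from the other."""
    total, count = 0, 0
    for row in image:
        mirrored = row[::-1]  # flip the row left-to-right
        for a, b in zip(row, mirrored):
            total += abs(a - b)
            count += 1
    return total / count

symmetric_face = [[10, 50, 50, 10],
                  [20, 80, 80, 20]]
occluded_face  = [[10, 50, 50, 200],   # one side covered by bright "hair"
                  [20, 80, 80, 200]]

print(asymmetry_score(symmetric_face))  # 0.0
print(asymmetry_score(occluded_face))   # much higher: symmetry cue broken
```

A real Haar cascade looks at many local intensity patterns rather than a single global score, but the principle is the same: break the expected left/right structure and the detector’s learned patterns stop matching.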
The catch is that when you wear such strange makeup or face jewelry, people stop trusting you. You look strange and different, and you are watched more closely – you have effectively painted a target on your back.
A subtle way to confound artificial intelligence is wearing well-designed anti-surveillance clothing. For instance, Naamiko’s anti-surveillance wear has been proven to hide the wearers from facial recognition technologies.
The computer interprets the printed patterns as hundreds of faces, overloading the system and preventing it from finding the wearer’s real face. Adam Harvey, the creator of CV Dazzle, also designed Stealth Wear – its scarf has already sold out – to mask users’ faces and bodies from detection; Stealth Wear also hides the wearer’s thermal signature from overhead drones.
You can also tape a photo to the front of your shirt to trick person-recognizing AI tools into focusing on the picture instead of your face. Although facial recognition provides additional layers of security, in some cases it is seen as invading privacy, which has pushed some critics to look for ways to become ‘invisible’ to the artificial intelligence technologies involved in recognition.
Nevertheless, AI will have to win people’s trust before it is accepted by the majority and gains mainstream adoption everywhere.