Sat. Dec 21st, 2024

Artificial Intelligence Births Fake Faces That Might Dominate Digital Worlds

Advances in technology have resulted in many new products, and among them is artificial intelligence (A.I.). This technology has enabled businesses to sell fake people. On sites like Generated.Photos, users can purchase a “unique, worry-free” fake person for as little as $2.99, or buy in bulk and acquire 1,000 people for $1,000.

Anyone seeking a few fake people for video-game characters, or wishing to make a website appear more diverse, can get free photos from ThisPersonDoesNotExist.com. The photos can then be adjusted as needed: make the old appear young, or change ethnicity to target a certain demographic segment.

A company known as Rosebud.AI can animate fake people and even make them talk. Artificial intelligence has made simulated people common around the internet. In some cases they serve as masks for real individuals with something to hide: spies who wear an attractive face to probe the intelligence community, online harassers who troll their targets behind a friendly face, propagandists who hide behind a fake profile, and many others.

How This Artificial Intelligence System Works

The A.I. system treats every face as a complex mathematical object comprising values that can be shifted. Adjusting particular values, such as those that determine the size and shape of the eyes, can alter the entire image. It is possible to change the eyes, age, perspective, and mood of these fake people.

For other qualities, these systems work differently. Instead of shifting the particular values that determine certain parts of the image, the system may first generate a pair of images to establish starting and ending points for all of the values, and then create images in between.
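The “starting and ending points” approach can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not any vendor’s actual pipeline: the 512-value latent size is an assumption modeled on common GAN designs, and a real system would feed each interpolated vector to a trained generator to render the in-between faces.

```python
import random

random.seed(0)
LATENT_DIM = 512  # assumed latent size; a common choice in GAN face generators

# Two random latent vectors standing in for the "starting" and "ending" images.
z_start = [random.gauss(0, 1) for _ in range(LATENT_DIM)]
z_end = [random.gauss(0, 1) for _ in range(LATENT_DIM)]

def interpolate(z_a, z_b, steps=5):
    """Return latent vectors evenly spaced between z_a and z_b.

    Feeding each vector to a trained generator would produce the
    "images in between" that the article describes.
    """
    frames = []
    for i in range(steps):
        t = i / (steps - 1)  # 0.0 at the start image, 1.0 at the end image
        frames.append([a + t * (b - a) for a, b in zip(z_a, z_b)])
    return frames

frames = interpolate(z_start, z_end)
```

Because the interpolation is linear, the first and last frames reproduce the two endpoint vectors, and the middle frames blend every value at once rather than one feature at a time.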

Creating these fake images became possible recently with the arrival of a new type of artificial intelligence known as a generative adversarial network (GAN). In essence, you feed a computer program a large number of photos of real people. The program analyzes them and tries to generate novel photos of its own, while another part of the same system simultaneously tries to detect which of those images are fake.

That back-and-forth makes the end product increasingly indistinguishable from photos of real people. One such system is the publicly available GAN software made by the computer graphics company Nvidia.
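The back-and-forth can be made concrete with a toy sketch. The code below is not Nvidia’s software or anything close to a face generator; it is a deliberately tiny GAN in plain Python where the “images” are single numbers drawn from a normal distribution, the generator is one affine function, and the discriminator is one logistic unit, with the gradients worked out by hand. Its only purpose is to show the adversarial loop: the discriminator learns to separate real from fake, and the generator learns to fool it.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    # Clamp to avoid overflow in math.exp for extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

# "Real data": scalar samples standing in for photos of real people.
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator g(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters (starts producing samples near 0)
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for _ in range(3000):
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0, 1)
    x_fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # i.e. descend the loss -log D(real) - log(1 - D(fake)).
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w -= lr * (-(1 - s_real) * x_real + s_fake * x_fake)
    c -= lr * (-(1 - s_real) + s_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    s_fake = sigmoid(w * x_fake + c)
    grad_x = -(1 - s_fake) * w   # gradient of -log D(x_fake) w.r.t. x_fake
    a -= lr * grad_x * z
    b -= lr * grad_x

# After training, the generator's samples should have drifted toward the
# real data, even though it never saw a real sample directly.
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
```

The generator improves only through the discriminator’s feedback, which is the essence of the adversarial design: each side’s progress forces the other to get better, until the fakes are hard to tell apart from the real thing.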

Given the pace of improvement in this field, it is easy to imagine that whole collections of fake people will soon populate the internet. People will host parties with fake friends, hold fake babies, and even hang out with their fake dogs. It will become ever more challenging to determine who online is real and who is the product of a computer software’s imagination.

Camille François, a disinformation researcher who studies the manipulation of social networks, commented:

“When the tech first appeared in 2014, it was bad — it looked like the Sims. It’s a reminder of how quickly the technology can evolve. Detection will only get harder over time.”

Technological Advancements

Advances in facial fakery have become possible partly because the latest artificial intelligence networks are far better at identifying key facial features. Today, a face can unlock a smartphone, and a user can tell photo software to sort through thousands of pictures and show only those containing a child or some other distinguishing feature.

Law enforcement agencies use facial-recognition programs to identify and arrest suspects, while some activists use the same programs to reveal the identities of police officers who try to remain anonymous. Clearview AI scraped billions of public photos from the internet to develop an app that can recognize a stranger from a single photo.

These technologies can now organize and process the visual world in ways that were not previously possible.

Imperfections In The AI Systems

Nothing in this world is entirely perfect, and facial-recognition algorithms are no exception. There is an underlying bias in the data used to train them. Some are not as good as advertised, and many are notably worse at recognizing people of color.

In 2015, one such system developed by Google labeled two Black people as “gorillas.” The system likely made that mistake because it had been fed many more photos of gorillas than of dark-skinned individuals, and because the cameras used were mostly calibrated for light-skinned faces.

These mistakes can have severe consequences. In January 2020, a Black man in Detroit named Robert Williams was arrested for a crime he did not commit because of an incorrect facial-recognition match.

Although artificial intelligence can make our lives better, it is as flawed as humanity, because all of it is man-made. A.I. systems are not independent: humans determine how they are built and what data they are exposed to. People select the voices that teach virtual assistants to hear, which is why those assistants sometimes fail to understand people with accents.

People build computer programs that predict a person’s criminal behavior by feeding them a plethora of data from past rulings made by human judges, and in the process the programs pick up some of those judges’ biases. Humans also label the images that train computers to see, and the computers then learn to associate glasses with ‘nerds’.

Errors Made By The Systems

If you look closely, you can spot some of the errors, mistakes, and repeated patterns that an A.I. system produces when it constructs fake faces. Human error is common too: we tend to glaze past the flaws in these systems, trusting that the software is hyper-rational, objective, and always right.

Research reveals that in situations where computers and humans must work together to identify faces or fingerprints, people consistently made wrong identifications when the computers encouraged them to do so. In the early days of dashboard GPS systems, some drivers followed the directions of faulty devices straight into disaster.

Could it be that we place too little value on human intelligence, or do we overrate it by assuming that we are smart enough to create something smarter than ourselves?

Algorithms built by Google and Bing sort the world’s knowledge. Facebook’s news feed filters updates from our social circles and decides which are important enough to show us. The self-driving features in cars mean we are placing our safety in the hands of software.

We place a great deal of trust in artificial intelligence systems. But could they be just as fallible and imperfect as the humanity that built them?

Kevin Moore - E-Crypto News Editor

Kevin Moore is the main author and editor for E-Crypto News.