There are now businesses that sell fake people.
On the website Generated.Photos, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000.
If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com.
If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk.
These simulated people are starting to show up around the internet, used as masks by real people with nefarious intent: spies who don an attractive face in an effort to infiltrate the intelligence community; right-wing propagandists who hide behind fake profiles, photo and all; online harassers who troll their targets with a friendly visage.
The creation of these types of fake images only became possible in recent years thanks to a new type of artificial intelligence called a generative adversarial network.
In essence, you feed a computer program a bunch of photos of real people.
It studies them and tries to come up with its own photos of people, while another part of the system tries to detect which of those photos are fake.
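The adversarial setup described in the preceding paragraph can be sketched in miniature. The toy below is an illustration only, not the large face-generation networks the article describes: a one-parameter-pair "generator" learns to mimic samples from a simple target distribution, while a logistic-regression "discriminator" learns to tell real samples from generated ones, each improving against the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: points drawn from a normal distribution centered at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

# Generator: maps noise z ~ N(0, 1) to a sample via a learned affine map.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression on a single scalar input.
d_w, d_b = 0.1, 0.0

lr = 0.01
for step in range(3000):
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    real = real_batch(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    # Gradients of the binary cross-entropy loss w.r.t. d_w and d_b.
    grad_w = np.mean((p_real - 1) * real) + np.mean(p_fake * fake)
    grad_b = np.mean(p_real - 1) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    p_fake = sigmoid(d_w * fake + d_b)
    g_grad = (p_fake - 1) * d_w          # chain rule through D's logit
    g_w -= lr * np.mean(g_grad * z)
    g_b -= lr * np.mean(g_grad)

# After training, generated samples should cluster near the real mean of 4.
sample_mean = float(np.mean(g_w * rng.normal(size=5000) + g_b))
```

The tug-of-war is the whole trick: neither model is told what a "real" sample looks like directly; the generator only ever sees the discriminator's verdicts, yet its output distribution drifts toward the real one.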
Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies.
Thanks to underlying bias in the data used to train them, some of these systems are not as good, for instance, at recognizing people of color.
In 2015, an early image-detection system developed by Google labeled two Black people as “gorillas,” most likely because the system had been fed many more photos of gorillas than of people with dark skin.
Moreover, cameras — the eyes of facial-recognition systems — are not as good at capturing people with dark skin; that unfortunate standard dates to the early days of film development, when photos were calibrated to best show the faces of light-skinned people.
We choose the voices that teach virtual assistants to hear, leading these systems not to understand people with accents.
We label the images that train computers to see; they then associate glasses with “dweebs” or “nerds.”

GANs typically train on real photographs that have been centered, scaled and cropped.
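That centering-and-cropping step can be sketched as follows. This is a simplified illustration, not the actual pipeline used to build face datasets, which also align faces using detected landmarks; here the image is just an array, and the downsampling is crude nearest-neighbor striding.

```python
import numpy as np

def center_crop_and_scale(img, out_size):
    """Center-crop an H x W x C image array to a square,
    then downsample it to out_size x out_size.

    A sketch of the kind of normalization GAN training data
    goes through; real pipelines use proper face alignment
    and higher-quality resampling.
    """
    h, w = img.shape[:2]
    side = min(h, w)                      # largest centered square
    top = (h - side) // 2
    left = (w - side) // 2
    square = img[top:top + side, left:left + side]
    # Nearest-neighbor downsample: keep evenly spaced rows/columns.
    idx = np.arange(out_size) * side // out_size
    return square[idx][:, idx]

# A dummy 300x200 RGB "photo" stands in for a real image file.
photo = np.zeros((300, 200, 3), dtype=np.uint8)
crop = center_crop_and_scale(photo, 128)   # -> 128 x 128 x 3
```

Because every training photograph is forced into the same framing, the network learns faces in a fixed pose and scale, which is one reason its outputs look so uniformly portrait-like.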
In the early days of dashboard GPS systems, drivers famously followed the devices’ directions to a fault, sending cars into lakes, off cliffs and into trees.
The networks trained on the Flickr-Faces-HQ dataset, which included over 70,000 photographs of people.