Artificial portrait generation and its implications for fake news

Wednesday, May 1, 2019 - 10:45

Original article by Glenn Fleishman on FastCompany.

Welcome to the other side of the uncanny valley of profile photographs. Attention to generated imagery has so far focused on “deepfake videos,” in which a real person’s face is grafted semi-realistically (for now) onto someone else’s body. Deep-learning, AI-generated fake still-photo faces have a different impact: they look realistic but aren’t attempting to match the appearance of any actual person. And they’re so new that these images have yet to get a name—perhaps deepfaces will win out.

Deepfaces have a greater potential to add to the noise of troll farms, social-media griefers, and outright scammers and fraudsters because they look legitimate and fail reverse-image searches. As Craig Silverman, a long-time exposer of online frauds and BuzzFeed media editor, says, “I think it presents a big challenge for some of the existing approaches used by investigators, journalists, and police and others to follow a breadcrumb trail.”
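The reverse-image-search point is worth unpacking: a stolen or reused profile photo leaves a trail because copies or near-copies of it already sit in search indexes, while a freshly generated face matches nothing. Below is a minimal sketch of that kind of comparison, assuming the third-party Pillow and imagehash Python packages; the file names and threshold are placeholders, not anything from the article.

```python
# Sketch: compare two images by perceptual hash, the rough mechanism behind
# reverse-image lookup. A reused photo has near-duplicates in an index;
# a newly generated face does not, so the breadcrumb trail goes cold.
from PIL import Image
import imagehash


def near_duplicate(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Return True if the two images are perceptually similar."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # Subtracting two hashes gives the Hamming distance between them;
    # a small distance means the images are near-duplicates.
    return (hash_a - hash_b) <= threshold


# Placeholder file names for illustration only.
print(near_duplicate("suspect_profile.jpg", "indexed_photo.jpg"))
```

A distance near zero suggests the profile photo was lifted from something already indexed; a generated face typically has no close neighbor, which is exactly what frustrates the breadcrumb-trail approach Silverman describes.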

Even more disturbing? It’s possible we’ve been seeing them for years without realizing it. “It wouldn’t surprise me if this technology was already there—even maybe a year or two ago—to replicate humans and have fake images,” says Jevin West, an assistant professor at the University of Washington’s Information School, who co-teaches a course on “Calling Bullshit.” “I can’t wait for the person to find that they actually existed before the public release of this.”

Jevin West and his colleague Carl Bergstrom, a professor of theoretical and evolutionary biology who also teaches at the UW’s Information School, posted something even more unsettling: a side-by-side test of a photographically captured face and a generated one at whichfaceisreal.com. They began their co-taught course in spotting B.S. before the latest presidential election cycle, and deepfaces are another aspect of the array of analytical approaches they present to students and on the project’s website.

Bergstrom also argues that deepfaces could contribute to something worse. The use of faked personas “is fundamentally undermining democracy,” he says. When elected representatives and government agencies receive feedback from good-enough fakes, whether they impersonate real people with false information, like photos, or can’t be validated as real people at all, he calls it a “man-in-the-middle attack on democracy.” The comment stuffing that occurred around the FCC’s network-neutrality rule changes didn’t rely on faces, but it did rely on faked and stolen identities, the New York State Attorney General has alleged. Imagine millions of comments accompanied by millions of unique faces.

Read the original article by Glenn Fleishman on FastCompany.
