Glad to see that our work inspired debate. Your opinion would be stronger had you read the paper and our notes: https://goo.gl/BvAh9g
Every face does not tell a story; it tells thousands of them. Over evolutionary time, the human brain has become an exceptional reader of the human face—computerlike, we like to think. A viewer instinctively knows the difference between a real smile and a fake one. In July, a Canadian study reported that college students can reliably tell if people are richer or poorer than average simply by looking at their expressionless faces. Scotland Yard employs a team of “super-recognizers” who can, from a pixelated photo, identify a suspect they may have seen briefly years earlier or come across in a mug shot. But, being human, we are also inventing machines that read faces as well as or better than we can. In the twenty-first century, the face is a database, a dynamic bank of information points—muscle configurations, childhood scars, barely perceptible flares of the nostril—that together speak to what you feel and who you are. Facial-recognition technology is being tested in airports around the world, matching camera footage against visa photos. Churches use it to document worshipper attendance. China has gone all in on the technology, employing it to identify jaywalkers, offer menu suggestions at KFC, and prevent the theft of toilet paper from public restrooms.
“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” Michal Kosinski, an organizational psychologist at the Stanford Graduate School of Business, told the Guardian earlier this week. The photo of Kosinski accompanying the interview showed the face of a man beleaguered. Several days earlier, Kosinski and a colleague, Yilun Wang, had reported the results of a study, to be published in the Journal of Personality and Social Psychology, suggesting that facial-recognition software could correctly identify an individual’s sexuality with uncanny accuracy. The researchers culled tens of thousands of photos from an online-dating site, then used an off-the-shelf computer model to extract users’ facial characteristics—both transient ones, like eye makeup and hair color, and more fixed ones, like jaw shape. Then they fed the data into their own model, which classified users by their apparent sexuality. When shown two photos, one of a gay man and one of a straight man, Kosinski and Wang’s model could distinguish between them eighty-one per cent of the time; for women, its accuracy dropped slightly, to seventy-one per cent. Human viewers fared substantially worse. They correctly picked the gay man sixty-one per cent of the time and the gay woman fifty-four per cent of the time. “Gaydar,” it appeared, was little better than a random guess.
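The headline numbers describe a pairwise task: shown one gay face and one straight face, the model must pick which is which, and a score of eighty-one per cent means it picks correctly in eighty-one per cent of such pairs. This metric is equivalent to the classifier's AUC. A minimal sketch of that evaluation, using invented scores from a hypothetical classifier (not the authors' actual model or data):

```python
import random

random.seed(0)

# Hypothetical classifier scores (higher = model leans "gay").
# These numbers are invented for illustration only.
gay_scores = [random.gauss(0.6, 0.2) for _ in range(500)]
straight_scores = [random.gauss(0.4, 0.2) for _ in range(500)]

def pairwise_accuracy(pos, neg):
    """Fraction of (pos, neg) pairs in which the positive example
    outscores the negative one -- the 'shown two photos, pick the
    gay man' setup. Ties count as half; the result equals the AUC."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

print(f"pairwise accuracy: {pairwise_accuracy(gay_scores, straight_scores):.2f}")
```

Note that pairwise accuracy says nothing about performance on a single photo drawn from a realistic population, where base rates dominate: a classifier can score well on balanced pairs yet produce mostly false positives when one class is rare.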
The study immediately drew fire from two leading L.G.B.T.Q. groups, the Human Rights Campaign and GLAAD, for “wrongfully suggesting that artificial intelligence (AI) can be used to detect sexual orientation.” They offered a list of complaints, which the researchers rebutted point by point. Yes, the study was in fact peer-reviewed. No, contrary to criticism, the study did not assume that there was no difference between a person’s sexual orientation and his or her sexual identity; some people might indeed identify as straight but act on same-sex attraction. “We assumed that there was a correlation . . . in that people who said they were looking for partners of the same gender were homosexual,” Kosinski and Wang wrote. True, the study consisted entirely of white faces, but only because the dating site had served up too few faces of color to provide for meaningful analysis. And that didn’t diminish the point they were making—that existing, easily obtainable technology could effectively out a sizable portion of society. To the extent that Kosinski and Wang had an agenda, it appeared to be on the side of their critics. As they wrote in the paper’s abstract, “Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”
The objections didn’t end there. Some scientists criticized the study on methodological grounds. To begin with, they argued, Kosinski and Wang had used a flawed data set. Besides all being white, the users of the dating site may have been telegraphing their sexual proclivities in ways that their peers in the general population did not. (Among the paper’s more pilloried observations were that “heterosexual men and lesbians tended to wear baseball caps” and that “gay men were less likely to wear a beard.”) Was the computer model picking up on facial characteristics that all gay people everywhere shared, or merely ones that a subset of American adults, groomed and dressed a particular way, shared? Carl Bergstrom and Jevin West, a pair of professors at the University of Washington, in Seattle, who run the blog Calling Bullshit, also took issue with Kosinski and Wang’s most ambitious conclusion—that their study provides “strong support” for the prenatal-hormone theory of sexuality, which predicts that exposure to testosterone in the womb shapes a person’s gender identity and sexual orientation in later life. In response to Kosinski and Wang’s claim that, in their study, “the faces of gay men were more feminine and the faces of lesbians were more masculine,” Bergstrom and West wrote, “we see little reason to suppose this is due to physiognomy rather than various aspects of self-presentation.”
Read the full article in the New Yorker.
Other news: Quartz | #CallingBSchat