

Researchers criticize AI software that predicts emotions

by Reuters
Thursday, 12 December 2019 13:21 GMT

Children smile as they play with a polythene sheet in Dhaka, Bangladesh, June 13, 2019. REUTERS/Mohammad Ponir Hossain


'How people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures,' say researchers

SAN FRANCISCO (Reuters) - A prominent group of researchers alarmed by the harmful social effects of artificial intelligence called Thursday for a ban on automated analysis of facial expressions in hiring and other major decisions.

The AI Now Institute at New York University said action against such software-driven “affect recognition” was its top priority because science doesn’t justify the technology’s use and there is still time to stop widespread adoption.

The group of professors and other researchers cited as a problematic example the company HireVue, which sells systems for remote video interviews for employers such as Hilton and Unilever. It offers AI to analyze facial movements, tone of voice and speech patterns, and doesn’t disclose scores to the job candidates.

The nonprofit Electronic Privacy Information Center has filed a complaint about HireVue to the U.S. Federal Trade Commission, and AI Now has criticized the company before.

HireVue said it had not seen the AI Now report and did not answer questions on the criticism or the complaint.

“Many job candidates have benefited from HireVue’s technology to help remove the very significant human bias in the existing hiring process,” said spokeswoman Kim Paone.

AI Now, in its fourth annual report on the effects of artificial intelligence tools, said job screening is one of many ways in which such software is used without accountability, and that it typically favors privileged groups.

The report cited a recent academic analysis of studies on how people interpret moods from facial expressions. That paper found that previous scholarship showed such perceptions to be unreliable for multiple reasons.

“How people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation,” wrote a team at Northeastern University and Massachusetts General Hospital.

Companies including Microsoft Corp are marketing their ability to classify emotions using software, the study said. Microsoft did not respond to a request for comment Wednesday evening.

AI Now also criticized Amazon.com Inc, which offers analysis on expressions of emotion through its Rekognition software. Amazon told Reuters that its technology only makes a determination on the physical appearance of someone’s face and does not claim to show what a person is actually feeling.

In a conference call ahead of the report’s release, AI Now founders Kate Crawford and Meredith Whittaker said that damaging uses of AI are multiplying despite broad consensus on ethical principles because there are no consequences for violating them.

(Reporting by Joseph Menn. Editing by Gerry Doyle)

Our Standards: The Thomson Reuters Trust Principles.
