
Zoom urged by rights groups to rule out 'creepy' AI emotion tech

by Avi Asher-Schapiro | @AASchapiro | Thomson Reuters Foundation
Wednesday, 11 May 2022 10:00 GMT

A student takes classes online with his companions using the Zoom app at home during the coronavirus disease (COVID-19) outbreak in El Masnou, north of Barcelona, Spain April 2, 2020. REUTERS/Albert Gea


Integrating emotion recognition tools into Zoom's video-conferencing platform would endanger privacy and perpetuate racial bias, digital rights campaigners warn

  • Rights groups warn of emotion recognition tech risks

  • Media report says Zoom researching use of such tools

  • Use of mood-detecting AI tech increases worldwide


LOS ANGELES, May 11 (Thomson Reuters Foundation) - Human rights groups have urged video-conferencing company Zoom to scrap research on integrating emotion recognition tools into its products, saying the technology can infringe users' privacy and perpetuate discrimination.

Technology publication Protocol reported last month that California-based Zoom was looking into building such tools, which could use artificial intelligence (AI) to scan facial movements and speech to draw conclusions about people's mood.

In a joint letter sent to Zoom Chief Executive Eric Yuan on Wednesday, more than 25 rights groups including Access Now, the American Civil Liberties Union (ACLU) and the Muslim Justice League said the technology was inaccurate and could threaten basic rights.

"If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices," said Caitlin Seeley George, director of campaign and operations at Fight for the Future, a digital rights group.

"Beyond mining users for profit and allowing businesses to capitalize on them, this technology could take on far more sinister and punitive uses," George said.

Zoom did not immediately respond to a request for comment.

Zoom Video Communications Inc emerged as a major video conferencing platform around the world during COVID-19 lockdowns as education and work shifted online, reporting more than 200 million daily users at the height of the pandemic in 2020.

The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls, and according to Protocol it also plans to explore more advanced emotion reading tools across its products.

In a blog post describing the sentiment analysis technology, Zoom said its tools can measure the "emotional tone of the conversations" in order to help sales people improve their pitches.

But the rights groups' letter said rolling out emotion recognition analysis for video calls would trample users' rights.

"This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights," said the letter, a copy of which was sent to the Thomson Reuters Foundation.

"Zoom needs to halt plans to advance this feature," it added.

From classrooms to job interviews to public places, emotion recognition tools are increasingly common, despite questions about their accuracy and human rights implications.

Critics of the technology often draw parallels to facial recognition technologies, which have been shown to have high error rates on non-white faces, and have led to wrongful arrests.

Esha Bhandari, deputy director of the ACLU Speech, Privacy, and Technology Project, called emotion AI "a junk science."

"There is no good reason for Zoom to mine its users' facial expressions, vocal tones, and eye movements to develop this creepy technology," she said in emailed comments.

Related stories:

Online exam software sparks global student revolt

'Racist' facial recognition sparks ethical concerns in Russia

Indian government faces lawsuit against facial recognition

(Reporting by Avi Asher-Schapiro @AASchapiro; Editing by Helen Popper. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org)

Our Standards: The Thomson Reuters Trust Principles.