
Content moderation in conflict zones: What role for big tech?

by Maya Gebeily | @GebeilyM | Thomson Reuters Foundation
Friday, 21 May 2021 17:10 GMT

Palestinians ride a motorcycle past the site of an Israeli air strike, after Israel-Hamas truce, in Gaza May 21, 2021. REUTERS/Suhaib Salem


Tech firms have come under fire for 'discriminatory' content moderation policies during the recent flare-up between Gaza and Israel. How have they responded?

By Maya Gebeily

May 21 (Thomson Reuters Foundation) - Conflict broke out this month between Israel and the Gaza Strip - and big tech has not been spared.

Instagram and Twitter have blamed technical errors for deleting posts mentioning the possible eviction of Palestinians from East Jerusalem, but data rights groups fear "discriminatory" algorithms are at work and want greater transparency.

Tensions flared in early May over the possible evictions of Palestinians from their homes in Jerusalem's Sheikh Jarrah neighbourhood and soon escalated into a full-blown conflict between Gaza and Israel. A ceasefire announced overnight is set to put an end to the fighting.

Can big tech stay neutral when conflicts erupt?

How have tech giants responded?

Facebook, Twitter, Google and Venmo all declined to comment on whether criticism over the Gaza-Israel conflict had prompted broader internal reviews of their content moderation processes or other policies.

But they have responded to individual criticism: on May 8, Instagram publicly apologised for deleting Sheikh Jarrah posts and suspending accounts.

This week, its parent company Facebook set up a round-the-clock "special operations center" to deal with content on the Israeli-Palestinian conflict.  

"It is staffed by experts from across the company, including native Arabic and Hebrew speakers," a Facebook spokesman told the Thomson Reuters Foundation.

Facebook has established other "special operations centers" in recent years to deal with content on COVID-19, wildfires in California and Australia, violence in Myanmar, and major elections, including in the United States.

A Twitter spokeswoman said the company uses "a combination of technology and human review to enforce the Twitter Rules...impartially," but did not specify whether it had created a dedicated team on the Gaza flare-up.  

Following outrage on social media, YouTube recently deleted a video linked to the Israeli government that depicted rocket fire. A spokeswoman said the company uses automated systems to find content "at scale" while humans help with "contextual decisions" on removing content regardless of language.

A view shows the ruins of houses and buildings destroyed by Israeli strikes in the recent cross-border violence between Palestinian militants and Israel, following Israel-Hamas truce, in Gaza May 21, 2021. REUTERS/Ibraheem Abu Mustafa

What's tech got to do with the conflict? 

Palestinians took to social media earlier this month to protest the possible Sheikh Jarrah evictions, but many found their posts, photos, or videos removed or their accounts blocked.

Facebook and Twitter blamed "technical glitches" and said the posts and accounts would be restored, but data rights groups found that deletions continued even when the posts depicted no violence or incitement.

The groups accused the tech platforms of "censoring" Palestinian voices through discriminatory content moderation policies, which they said should be made fairer and more transparent.

In protest at the continued restrictions, users have been giving Facebook one-star ratings on the Google and Apple app stores, dragging down its average rating.

The raters left comments including "biased policies against Palestinian people", "no freedom of speech," and "Free Palestine". Similar comments were left alongside one-star ratings for Instagram.  

Others have accused Venmo, a peer-to-peer payment service owned by PayPal, of "systemic financial discrimination" after it delayed some donations to Palestinian relief organisations.

A Venmo spokesman attributed the delays to "compliance obligations" with sanctions. Hamas, the Islamist Palestinian group which controls Gaza, and affiliated organisations are on a U.S. terrorism blacklist. 

Google, meanwhile, faced internal complaints: a group of Jewish Google employees urged company leadership to make a public statement on the conflict and fund Palestinian rights organisations.

Palestinians leave a United Nations-run school where they took refuge during the recent cross-border violence between Palestinian militants and Israel, heading to their home following Israel-Hamas truce, in Gaza May 21, 2021. REUTERS/Ibraheem Abu Mustafa

Have tech companies been caught up by conflict before? 

In 2018, a United Nations fact-finding mission criticised Facebook for allowing posts including anti-Muslim hate speech and calls for violence between the military and ethnic groups in Myanmar.

Last year, the Atlantic Council's Digital Forensic Research Lab found that pro-Azerbaijan accounts manipulated traffic in a "Twitter war" against their online opponents during the six-week conflict between Christian Armenia and mainly Muslim Azerbaijan over the enclave of Nagorno-Karabakh.

What does the law say about content moderation and freedom of speech? 

Tech companies in the United States - where most are headquartered - are granted broad protections under Section 230 of the Communications Decency Act of 1996, which frees online platforms like Facebook and Twitter from legal responsibility for what others say or do on their sites. 

That has left content regulation to the firms themselves, exposing them to criticism.

"Frankly, I don't think we should be making so many important decisions about speech on our own either," Facebook CEO Mark Zuckerberg said in 2019, announcing the creation of an independent Oversight Board to make binding decisions on whether flagged Facebook content should stay up or be removed. 

"We'd benefit from a more democratic process, clearer rules for the internet, and new institutions," he said.

Jillian York, director of international freedom of expression at advocacy group Electronic Frontier Foundation, said that "tech companies are not legally bound to be neutral in any way."

She pointed to what she said were inherent biases on tech platforms against certain Arabic names used in page titles and against portrayals of women's bodies.

The platforms' internal policies do not mention neutrality either, and all tech firms interviewed by the Thomson Reuters Foundation declined to comment on their neutrality policies during conflict.

(Reporting by Maya Gebeily @gebeilym, Editing by Zoe Tabary and Lin Taylor. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org)

