
OPINION: The importance of local voices in online content moderation

Friday, 17 June 2022 10:25 GMT

A cellphone user looks at a Facebook page at a shop in Latha street, Yangon, Myanmar, August 8, 2018. REUTERS/Ann Wang


* Any views expressed in this opinion piece are those of the authors and not of Thomson Reuters Foundation.

Current content moderation practices are failing those living in countries not deemed a priority for Big Tech firms. Local coalitions on freedom of expression and content moderation can help address the power imbalance.

Pierre François Docquir is the head of media freedom at ARTICLE 19, and Catherine Muya is the digital programme officer at ARTICLE 19 Eastern Africa.

June 18 is the first United Nations International Day for Countering Hate Speech.

In 2018, a UN investigation concluded that the spread of hate speech on Facebook had played a determining role in the possible genocide of the Rohingya population. The case of Myanmar provided the world with clear evidence: hate speech online can contribute to disastrous harms in the real world.

Social media platforms can be a space for free expression, democratic debate, and participation. But, as in the case of Myanmar, weak content moderation can transform them into hotbeds of hate speech, disinformation and other content that fuels polarisation.

This affects societies around the world, but it matters especially in countries with prior experience of conflict and societal tensions, where the online dissemination of lies and calls to violence is more likely to have real-world consequences.

To understand the meaning behind a message and assess its potential impact, one needs knowledge not only of the local language but also of the historical, societal and political context. This understanding is crucial when it comes to content moderation on social media.

We know from our Bridging the Gap research that in many parts of the world, this is not the reality. Even where platforms are effectively equivalent to the internet - as one interviewee from Bosnia and Herzegovina put it, ‘Facebook is the Internet’ there - companies do very little to ensure that they understand the societies they operate in.

This matters to the lives of millions of people on a daily basis - but never is it more important than at the time of elections, when misinformation and hate speech tend to be most rife, produced and disseminated for political gain.

For instance, Kenya has a history of disputed elections and political violence. Over 1,200 people were killed in the devastating aftermath of the 2007 elections, as ethnic divisions led to widespread violence. During the 2017 general elections, politicians used social media to target voters but also to spread misinformation and polarising content that stoked ethnic divisions. Hateful and inciting messages, written in local dialects or in Swahili and easily understood with local knowledge, were allowed to remain on platforms, further exacerbating tensions between groups.

More than 30 people died in protests following the 2017 vote, and the unprecedented decision by Kenya’s Supreme Court to nullify the election result threw the country into turmoil. Throughout this time, social media platforms were slow to take down problematic content, with some posts still available as late as 2019. Videos from 2007 were also shared online with the intention of misinforming people about violence erupting in certain parts of the country, further stoking fear. At the time, most platforms had barely taken action to label false or misleading posts, and their algorithms continued to promote the visibility of polarising messages.

Our research has shown a huge gap between the demands of local civil society organisations and the actions of global social media companies. From content rules that are not available in local languages to ineffective mechanisms for appealing wrongful moderation decisions, local actors constantly feel they are fighting a losing battle. They understand the potential risks of harmful content - and yet few, if any, avenues exist for them to engage with platforms.

It is clear that current content moderation practices are failing those living in countries not deemed a priority in the boardrooms of Silicon Valley. A way to bridge the gap between global companies and local contexts and address this overwhelming power imbalance is the establishment of local coalitions on freedom of expression and content moderation, which could give local actors a voice in moderation debates in their countries.

In turn, such coalitions could provide platforms with avenues for engagement so they are able to hear the concerns of civil society in a timely manner, helping to prevent wrongful moderation decisions from rising to a fever pitch. The result? Content moderation policies that uphold international freedom of expression guidelines, while grounding decisions in a robust understanding of the local context.

Global companies have a duty to respect human rights; local coalitions that bring local voices into content moderation can help improve their practices.
