England's Black footballers have faced a wave of racist abuse on social media in the wake of the team's Euro 2020 defeat. What are tech giants doing to quell the hate?
By Sharon Kimathi
LONDON, July 11 (Thomson Reuters Foundation) - Twitter and Facebook have condemned racist abuse which took place on their platforms after England lost to Italy on Sunday in the Euro 2020 final.
Three Black British players, Marcus Rashford, Bukayo Saka and Jadon Sancho, faced a barrage of abuse online after missing spot-kicks.
As pressure mounts on social media firms, how are they pledging to tackle online abuse?
Why have social media companies come under fire?
Rashford, Sancho and Saka were targeted with online abuse after missing spot-kicks in the penalty shootout with Italy, which settled Sunday's final after the game finished in a 1-1 draw.
While the social media feeds of the players also showed huge levels of support and gratitude from fans for the tournament, the abuse overshadowed the positive messages.
Social media companies have been criticised from several directions: for failing to police abusive content, for being too quick to block or restrict controversial content, and for applying their rules unevenly.
British Member of Parliament Diane Abbott – who has been subjected to frequent online racial hatred – said via email that "it is time that the social media companies stopped avoiding their responsibilities when it comes to racist, sexist and threatening abuse online".
Abbott said that individuals should be permitted to post anonymously in the first instance, but that social media companies should keep the names and addresses of anyone who posts on their platforms.
"When the abuse crosses the line into illegality the police could be informed, investigate and prosecute," said Abbott.
Jo Stevens, the Shadow Secretary of State for Digital, Culture, Media and Sport (DCMS), said: "Twitter, Facebook and Instagram have the means to stop this hatred on their platforms and yet they decide to do nothing."
Sheree Atcheson, author of 'Demanding More: Why Diversity & Inclusion Don't Happen and What You Can Do About It', said that the "key thing here is that online abuse is in addendum of in-real-life abuse, and we cannot tackle one without the other".
Has this happened before?
Critics say that social media platforms have become toxic places where people can racially abuse others without consequence.
In 2018, Reuters documented a pattern of Facebook allowing posts urging hatred against the minority Rohingya Muslim population in Myanmar amid ethnic violence.
Facebook was also slow to block homophobic hate speech in Arabic, while YouTube quickly deleted videos of potential war crimes evidence in Syria, the Thomson Reuters Foundation has reported.
How did platforms react to the online abuse of England players?
Twitter said it had removed more than 1,000 tweets and permanently suspended a number of accounts following the "abhorrent" racist abuse directed at England players.
"In the past 24 hours, through a combination of machine learning based automation and human review, we have swiftly removed over 1000 Tweets and permanently suspended a number of accounts for violating our rules," a Twitter spokesperson said.
A Facebook company spokesperson, commenting on both Instagram and Facebook via email, told the Thomson Reuters Foundation that the company "quickly removed comments and accounts directing abuse at England’s footballers last night".
It added that it would continue to take action against accounts that break its rules, and encouraged all players to "turn on Hidden Words, a tool which means no one has to see abuse in their comments or DMs [direct messages]".
What else are social media companies doing to tackle the problem?
Twitter says it has expanded its rules against hateful conduct to include language that dehumanises others on the basis of race, ethnicity, national origin, religion, age, disability or disease.
It has also changed its direct messages policy so that messages display the sender's profile information and indicate how the sender is connected to the receiver, helping people quickly identify potentially abusive content.
All Instagram accounts in the UK can filter incoming messages, reducing the likelihood of receiving abusive communications.
Facebook Messenger provides the option to ignore a conversation and automatically move it out of your inbox.
Instagram has also added comment controls and filters: users can add emojis, words or phrases they find offensive to the comment filter, and comments containing those terms will not appear.
In the coming weeks, Facebook will also roll out an option to block a user’s account and pre-emptively block any new accounts that person creates.