* Any views expressed in this opinion piece are those of the author and not of Thomson Reuters Foundation.
Is Spotify willing to allow harmful content like COVID misinformation on its platform to spread, prioritizing the companyʼs bottom line over public health?
By Willmary Escoto, U.S. policy analyst at Access Now
“I fucked up.” Last year, Joe Rogan publicly admitted to spreading misinformation on his podcast, acknowledging he did not bother to undertake basic fact-checking before leaning into the mic. He said it wouldn’t happen again — then it did.
Now, The Joe Rogan Experience podcast, produced exclusively for Spotify, has landed firmly in the spotlight once more, this time for spreading dangerous COVID-19 misinformation that endangers public health, and for the host's use of harmful racial slurs. With Spotify's audience upwards of 400 million listeners, what is said on the show has far-reaching consequences. But Spotify's problems are much bigger than Joe Rogan, and censorship is not the answer. What we need is for the world's largest audio platform to take responsibility for the words it pays to produce, host, and promote.
When Rogan makes false claims that the COVID vaccines alter DNA and that the health risks for young people are greater from the vaccine than from the virus itself, it undermines people’s ability to arrive at well-informed opinions. This is incredibly dangerous, and there are serious human rights implications tied to Spotify’s decisions. The bare minimum the company can do? Align with basic human rights principles, starting with due diligence.
Spotify needs to exercise due diligence
Content on The Joe Rogan Experience is acquired, produced, and hosted exclusively by Spotify, so the company must conduct due diligence on the content it has purchased. By playing the role of both producer and broadcaster, and by profiting from every episode, Spotify bears responsibility for the content it amplifies. Due diligence would enable the platform to act responsibly by understanding, identifying, and addressing the human rights risks tied to its content governance practices.
After the recent backlash, the streaming giant announced it would add advisory labels to podcast episodes and invest $100 million back into the licensing, development, and marketing of music and audio content from historically marginalized groups. But Spotify can't paper over racialized inaction by throwing money at marginalized people and seeing what sticks. Let's call a spade a spade: it's a shallow PR stunt.
The company also published its Platform Rules for the first time, which prohibit "content that promotes dangerous false or dangerous deceptive medical information," such as asserting that COVID-19 isn't real. The rules remain excessively vague, failing to specify what recourse is available to users appealing a content moderation decision. Spotify claims it reviewed multiple controversial Joe Rogan episodes and determined they "didn't meet the threshold for removal." But what that threshold is, for misinformation or for racial slurs, remains unclear, especially considering Spotify appears to have quietly removed over 100 Joe Rogan episodes.
While having Platform Rules is a step forward, it is long overdue for Spotify to bake human rights protections into any new policy, rather than relying on a model of scaling up first and addressing abuses later.
Tackling problematic content through a human rights framework
Spotify has failed to produce a human rights-centered policy to combat misinformation. It isn't the first company to miss the mark, and it certainly won't be the last. The company promotes freedom of expression and opinion, and provides a platform to impart and receive information, yet it fails to address the harmful material it produces.
There are a number of things Spotify can do to address these harms. In addition to undertaking human rights due diligence, Spotify needs to publish a human rights policy. People should know exactly what content is allowed on the platform, and what circumstances, processes, and remedies are in place when content is taken down or modified. Furthermore, Spotify should issue regular transparency reports detailing how the company takes action against users, content producers, and flagged accounts.
Spotify's lack of action to date sends a clear message that it is prioritizing its bottom line over public health and human rights. It's time for Spotify to fully address its human rights impacts as a company, not reactively and ad hoc in response to trending controversies that draw negative attention, but holistically, in a way that centers the human rights of the millions of people using its services.