
OPINION: The AI in our healthcare needs a reckoning

by J. Bob Alotta | Mozilla
Thursday, 21 July 2022 09:00 GMT

A biologist works on DNA extracted from cancer tissue in Oncompass Medicine's laboratory in Budapest, Hungary, June 2, 2021. REUTERS/Bernadett Szabo

* Any views expressed in this opinion piece are those of the author and not of Thomson Reuters Foundation.

The use of artificial intelligence in healthcare has brought many benefits, but it is also marred by failures rooted in bias and a lack of accountability

J. Bob Alotta is vice president of global programs at Mozilla

Over the past decade, as artificial intelligence (AI) systems have pervaded our lives, they’ve often met with necessary reckonings. The algorithms that power social media platforms can be helpful. But as we’ve learned through the 2016 U.S. election and COVID-19 disinformation, they can also be disastrous.

The same can be said about the use of AI systems in law enforcement. Facial recognition and predictive policing are often tools of surveillance and oppression, not protection and justice. Yet AI is pervading another sector of our lives with little accountability, or even awareness: healthcare. And in this case, it can literally be a matter of life and death.

AI’s move into healthcare has been swift and, to be clear, has brought many benefits. The AI in our hospitals can help doctors predict illnesses, diagnose diseases, and even prescribe treatments. Used responsibly and transparently, AI can be a boon, saving lives and advancing research.

But just like the AI in our social media platforms and our justice system, it can also backfire. And its failures are often linked to two big issues: bias, and a lack of accountability.

Consider the diagnostic AI systems used to detect skin diseases like melanoma, which rely on training datasets of pictures of skin. Darker skin tones are largely absent from these datasets. This bias has lethal consequences: AI skin cancer diagnoses can be less accurate for individuals with darker skin. Similarly, AI used to assign hospital beds to patients has been found to prioritize white patients over Black patients.

Disparities in the healthcare system for people of color have a long and tragic history; AI is now perpetuating a long-standing injustice.

Bias in voice AI, which often complements healthcare AI, is also an issue. Throughout the pandemic, millions of people, including healthcare workers, relied on voice technology to stay updated on COVID-19. But for millions of others, that simply wasn’t an option. AI-powered gadgets such as Amazon’s Alexa, Apple’s Siri, and Google Home do not support a single native African language. This means that millions of people who speak Kiswahili and other African languages can’t use voice technology to access vital COVID-19 updates.

Accountability in healthcare AI is another issue. These systems are being deployed into the most intimate parts of our lives, like our mental health and wellness. A legion of mental health apps - many powered by AI - is on the market, purporting to help people manage issues like depression, suicidal thoughts, and PTSD. But as a recent report by Mozilla revealed, these apps and their AI share people’s intimate data freely for profit. Of the 32 apps reviewed, 28 received Mozilla’s *Privacy Not Included warning label. As one researcher said, these apps “operate like data-sucking machines with a mental health app veneer. In other words: A wolf in sheep’s clothing.”

Accountability - or a lack of it - is also at play as healthcare AI tools are deployed across rural communities in India by Big Tech companies in collaboration with hospitals in an apparent effort to address doctor shortages. These tools can provide relief, but the way they are deployed is often horrifying. Research by Radhika Radhakrishnan has revealed that these AI systems operate in opaque ways, giving patients little insight into how - or why - certain health data is being collected. And they’re often deployed with little to no testing or oversight.

A full-scale reckoning over the AI in our healthcare is long overdue. Fortunately, a growing number of technologists and activists are addressing issues such as lethal bias in AI datasets. Avery Smith, a Baltimore-based African American who lost his wife to melanoma, is the creator of Melalogic, a resource and nascent AI system specifically designed for patients with dark skin.

There’s also Jen Caltrider at Mozilla, who led the research into mental health apps; her work has spurred six apps to strengthen their privacy and security policies. Meanwhile, Mozilla Fellow Remy Muhire is building voice technologies that can understand languages like Kinyarwanda.

On their own, these efforts are not enough. As AI becomes further entwined with medicine, it’s crucial that we - society writ large, and not just Big Tech - wrestle with its potential and its pitfalls, and ensure that it’s ultimately deployed in ways that help, rather than harm, humanity.
