

AI bias: How do algorithms perpetuate discrimination?

by Umberto Bacchi | @UmbertoBacchi | Thomson Reuters Foundation
Friday, 18 June 2021 13:38 GMT

FILE PHOTO: Visitors check their phones behind the screens advertising facial recognition software during Global Mobile Internet Conference at the National Convention in Beijing, China April 27, 2018. REUTERS/Damir Sagolj/File Photo


From recruiting software that favours men to facial recognition systems that struggle to identify Black faces, artificial intelligence (AI) technology often reflects entrenched bias, campaigners say

By Umberto Bacchi

TBILISI, June 18 (Thomson Reuters Foundation) - Whether helping to tackle disease or improve public transport, artificial intelligence (AI) technology is already part of everyday life, promising greater efficiency, better services and technological progress.

But digital rights campaigners say the technology has the potential to entrench and amplify discrimination against women and members of minority groups who are already marginalised.

As algorithmic automation multiplies and calls for its ethical use grow, here are some key facts about AI bias:

WHAT IS AI BIAS?

The term bias is defined by the Oxford English Dictionary as "a strong feeling in favour of or against one group of people, or one side in an argument, often not based on fair judgment".

Within AI, the concept of bias is generally associated with machine learning, a process in which algorithms learn to make automated decisions by analysing the data fed into them.

Such decisions have sometimes been found to be biased, with cases including AI favouring men over women or penalising members of ethnic minorities.

WHY DOES IT HAPPEN?

Bias in AI stems from how programmes are written and built.

A common problem relates to the data that algorithms are fed and use to make inferences. If that data is incomplete, or reflects an already discriminatory reality, the results will be skewed.

One of the reasons some facial recognition systems have been found to misidentify people of colour more often than white people is that they were trained on sets of images made up predominantly of white faces.

Similarly, if a company that has never employed a woman in a senior role were to use its own data to train a hiring algorithm, the algorithm would most likely teach itself that male candidates are more successful and thus preferable.
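To make that mechanism concrete, here is a minimal sketch in Python, using the open-source scikit-learn library and invented records (the features and numbers are hypothetical), of how a model trained on such one-sided history teaches itself that gender predicts success:

    # A minimal sketch of a hiring model trained on biased history.
    # Features are hypothetical: [years_of_experience, is_male].
    from sklearn.linear_model import LogisticRegression

    # Every past hire in this toy dataset was a man, so gender and the
    # "hired" label are perfectly correlated, while experience is not.
    X = [[5, 1], [7, 1], [3, 1], [6, 0], [8, 0], [4, 0]]
    y = [1, 1, 1, 0, 0, 0]  # 1 = hired, 0 = rejected

    model = LogisticRegression().fit(X, y)

    # Two candidates with identical experience, differing only in gender:
    print(model.predict_proba([[6, 1]])[0][1])  # man: high hiring score
    print(model.predict_proba([[6, 0]])[0][1])  # woman: low hiring score

The model has no notion of fairness; it simply finds that gender is the feature that best separates past hires from past rejections, and reproduces that pattern.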

The end goal algorithms are instructed to pursue can also lead to discriminatory outcomes.

Problems could arise, for example, if an AI system helping doctors find the best treatment for their patients were instructed to save the hospital money or to factor in patients' wealth and insurance coverage, scientists have warned.
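As a hypothetical illustration (the treatments, benefits and costs below are all invented), it is the objective the code optimises, rather than the data, that determines which option gets recommended:

    # A toy example of how the optimisation goal, not the data, drives
    # the outcome. All names and numbers here are invented.
    treatments = [
        {"name": "treatment_a", "expected_benefit": 0.9, "cost": 50_000},
        {"name": "treatment_b", "expected_benefit": 0.6, "cost": 8_000},
    ]

    # Goal 1: maximise benefit to the patient.
    best_for_patient = max(treatments, key=lambda t: t["expected_benefit"])

    # Goal 2: minimise cost to the hospital.
    cheapest_for_hospital = min(treatments, key=lambda t: t["cost"])

    print(best_for_patient["name"])       # treatment_a
    print(cheapest_for_hospital["name"])  # treatment_b

Identical data, two different goals, two different recommendations.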

   

ARE THERE ANY EXAMPLES?

As AI becomes more widely adopted, an increasing number of studies and cases have pointed to hidden bias.

In May, Twitter said its image-cropping algorithm had a problematic bias towards excluding Black people and men.

A month earlier, researchers found Facebook users may not be learning about jobs for which they are qualified because the company's tools can disproportionately direct adverts to a particular gender.

In 2017, Amazon stopped using an AI resume screener after discovering it penalised resumes that included the word "women", automatically downgrading graduates of all-women's colleges.

And a 2019 study published in the journal Science indicated that a healthcare algorithm used in the United States was more likely to recommend additional care for white patients than for Black patients.

WHAT IS BEING DONE TO FIX THE PROBLEM?

Researchers, lawmakers and activists have taken different steps to try to address the problem.

Some scientists have developed AI tools that can detect algorithmic discrimination. Others have come up with guidelines to help developers avoid creating biased programmes.
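One common check of this kind is the "four-fifths rule", which compares selection rates between groups and flags ratios below roughly 0.8. A minimal sketch, with invented approval data, follows:

    # A minimal sketch of the "four-fifths rule" fairness check.
    # All decision data here is invented.
    def selection_rate(decisions):
        # Share of positive (1) decisions in a list of 0/1 outcomes.
        return sum(decisions) / len(decisions)

    def disparate_impact(disadvantaged, advantaged):
        # Ratios below ~0.8 are commonly treated as a red flag.
        return selection_rate(disadvantaged) / selection_rate(advantaged)

    # Hypothetical model approvals (1) and rejections (0), by group:
    women = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% approved
    men = [1, 1, 0, 1, 1, 0, 1, 1]    # 75% approved

    print(disparate_impact(women, men))  # 0.33 -> below the 0.8 threshold

Checks like this do not explain why a model discriminates, but they give auditors a simple, repeatable way to spot skewed outcomes.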

The risk of discrimination is also a central issue for lawmakers around the world as they weigh how to regulate the use of AI technology.

U.S. lawmakers are considering federal laws to address algorithmic bias, while the EU has proposed rules requiring firms to ensure that high-risk AI applications, in sectors including biometric identification and recruiting, are free of bias.

Others have taken a bottom-up approach. A German charity has been training women from different ethnic backgrounds to become data analysts and AI specialists with the hope that more diverse teams will produce less discriminatory tools.

(Reporting by Umberto Bacchi @UmbertoBacchi; Editing by Helen Popper. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org)

Related stories:

As AI-based loan apps boom in India, some borrowers miss out

Why Twitter and Instagram are inviting people to share their pronouns

'Unfair surveillance'? Online exam software sparks global student revolt

