EU rights watchdog warns of pitfalls in use of AI

by Reuters
Monday, 14 December 2020 05:00 GMT

A staff member, wearing a face mask following the coronavirus disease (COVID-19) outbreak, looks at a robot at the venue for the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 9, 2020. REUTERS/Aly Song

From predictive policing to targeted advertising, artificial intelligence can be discriminatory and invasive, privacy experts say

By Foo Yun Chee

BRUSSELS, Dec 14 (Reuters) - The European Union's rights watchdog has warned of the risks of using artificial intelligence in predictive policing, medical diagnoses and targeted advertising as the bloc mulls rules next year to address the challenges posed by the technology.

While AI is widely used by law enforcement agencies, rights groups say it is also abused by authoritarian regimes for mass and discriminatory surveillance. Critics also worry about the violation of people's fundamental rights and data privacy rules.

The Vienna-based EU Agency for Fundamental Rights (FRA) urged policymakers in a report issued on Monday to provide more guidance on how existing rules apply to AI and ensure that future AI laws protect fundamental rights.

"AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions," FRA Director Michael O'Flaherty said in a statement.

FRA's report comes as the European Commission, the EU executive, considers legislation next year to cover so-called high-risk sectors such as healthcare, energy, transport and parts of the public sector.

The agency said AI rules must respect all fundamental rights, with safeguards to ensure this, including a guarantee that people can challenge decisions taken by AI and a requirement that companies be able to explain how their systems reach automated decisions.

It also said there should be more research into the potentially discriminatory effects of AI so Europe can guard against it, and the bloc must further clarify how data protection rules apply to the technology.

FRA's report is based on more than 100 interviews with public and private organisations already using AI, with the analysis based on uses of AI in Estonia, Finland, France, the Netherlands and Spain. (Reporting by Foo Yun Chee; Editing by Alex Richardson)
