High-tech lie detector used at Europe's borders faces scrutiny

by Umberto Bacchi | Thomson Reuters Foundation
Friday, 5 February 2021 12:08 GMT

ARCHIVE PHOTO: Latvian border guard Evelina Aleksandrova works with the surveillance system in the border crossing point in Terehova, May 3, 2014. REUTERS/Ints Kalnins


MEP Patrick Breyer of Germany's Pirate Party is taking the EU's Research Agency to court to gain access to information related to the controversial border security research project iBorderCtrl

By Umberto Bacchi

TBILISI, Feb 5 (Thomson Reuters Foundation) - A lie detector driven by artificial intelligence and trialled at European Union borders is the focus of a lawsuit that its proponent hopes will bring more transparency to the bloc's funding of "ethically questionable" technology.

Patrick Breyer, a European lawmaker, is requesting the release of EU Research Agency (REA) documents evaluating the 4.5 million euro ($5.4 million) trial of the use of artificial intelligence (AI) lie detectors to ramp up EU border security.

"I want to create a precedent to make sure that the public ... can access information on EU-funded research," said Breyer, of Germany's Pirate Party, who has described the technology as a "pseudo-scientific security hocus pocus".

The European Union's top court started hearing the case on Friday.

The iBorderCtrl trial, which ended in 2019, is one of several projects seeking to automate the EU's increasingly busy borders and counter irregular migration and terrorism.

The project, launched in 2016, was tested in Greece, Latvia and Hungary, drawing criticism from human rights groups that question the technology's ability to accurately assess people's intentions and its potential for discrimination.

The European Commission, which manages the REA, said the project aimed to test new ideas and technologies.

"iBorderCtrl was not expected to deliver ready-made technologies or products. Not all research projects lead to the development of technologies with real-world applications," a Commission spokesman said in emailed comments.

Under iBorderCtrl, people planning to travel were asked to answer questions from a computer-animated border guard, via webcam. Their micro-gestures were analysed to see if they were lying, according to the European Commission website.

Then at the border, low-risk travellers went through, while higher-risk passengers were sent for further checks, it said.


'DYSTOPIAN'

Ella Jakubowska of digital rights group EDRi expressed concern over the effectiveness of AI in making such decisions.

"Human expressions are varied, diverse (especially for people with certain disabilities) and often culturally-contingent," she said in emailed comments.

"(IBorderCtrl) is by no means the only dystopian technological experiment being funded by the EU," she added.

IBorderCtrl acknowledged the ethical concerns on its website, adding the project helped initiate a public debate over the technology's use.

"Novel technologies can have a significant impact on improving the efficacy, accuracy, speed, while reducing the cost of border control," it said.

"However, they may imply risks for fundamental human rights, which need to be further researched and mitigated before a concept goes live."

When Breyer asked the REA for the project's results, ethics report and legal assessment in 2019, the REA said disclosure would undermine commercial interests of the iBorderCtrl consortium - a decision that Breyer is now challenging in court.

Breyer said he hopes the case will lead to greater transparency over the EU's funding of "ethically questionable" technology.

While the technology was unlikely to be used at EU borders again, there was a risk it could make its way into the private sector, for example to screen insurance claims or job applicants, he said.

As EU governments increasingly turn to algorithms and AI to make important decisions about people's lives, more transparency is needed, said Merel Koning, senior policy officer at Amnesty International.

"The (European Commission) must subject all research being conducted on AI systems to the full light of public scrutiny and only fund research that respects, protects and promotes human rights," Koning told the Thomson Reuters Foundation.

The Commission said that all EU-funded research proposals undergo a specific evaluation that verifies their compliance with ethical rules and standards.

"The Commission always encourages projects to publicise as much as possible their results," it said.

Related stories:

Fears raised over facial recognition use at Moscow protests

EU criticised over surveillance aid in nations where privacy at risk

China's growing use of emotion recognition tech raises rights concerns

($1 = 0.8344 euros) (Reporting by Umberto Bacchi @UmbertoBacchi, Editing by Katy Migiro. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org)
