OPINION: AI excellence, trust and ethics... But what about rights?

by Ella Jakubowska | @ellajakubowska1 | European Digital Rights
Monday, 6 April 2020 12:00 GMT

A student takes classes online with his companions using the Zoom app at home during the coronavirus disease (COVID-19) outbreak in El Masnou, north of Barcelona, Spain April 2, 2020. REUTERS/ Albert Gea


* Any views expressed in this opinion piece are those of the author and not of Thomson Reuters Foundation.

New EU regulation must not allow powerful actors to co-opt the narrative of trust, ethics, and public health

Ella Jakubowska is a policy intern at European Digital Rights.

The European Commission has put AI at the heart of its strategy for the EU’s digital future, promising enormous benefits for businesses, healthcare and the environment. February’s long-awaited ‘White Paper on Artificial Intelligence’ saw Commission Executive Vice-President Margrethe Vestager commit to balancing the risks and benefits of AI through a regulatory framework of “Excellence” (innovation) and “Trust” (risk mitigation).

Throughout 2019, in the absence of clear rules, businesses jumped on the "ethics-washing" bandwagon. The current coronavirus pandemic throws the absence of regulation into even sharper relief: Big Tech companies are using the crisis to encourage the Commission to relax plans to regulate AI, arguing that failing to do so could threaten public health.

New European rules must not allow powerful actors to co-opt the narrative of trust, ethics – and now, public health – to avoid legally binding standards. Instead, the protection of fundamental rights must remain the driving force behind the EU’s digital plans.

NO SILVER BULLET

From the Hippocratic oath to the early internet, ethics has helped to foster democratic societies and push back against harmful applications of new technology. But ethics, trust and excellence are not a comprehensive solution to all of the risks posed by AI. When it comes to justice, healthcare, or other essential areas, it is vital that fundamental rights do not lose out to a desire to be the most innovative, profitable or technologically advanced.

WHERE IT STARTS TO GO WRONG

Corporate AI guidelines often make vague ethical commitments in ways that are hard to enforce. This doesn’t mean that explainability of systems, accountability for decisions or accurate data are not good things, but rather that a buzzword-driven approach often fails to help the people that are affected, and ignores the fact that technology is not the right solution for every problem.

Without due care for fundamental rights such as dignity and privacy, the Commission’s trust and excellence framework is at risk of falling into the same trap. The White Paper acknowledges that the catch-all term “Artificial Intelligence” includes many different systems, and not all present the same risks. But an approach that relies on the goodwill and discretion of Big Tech will be a superficial fix in the context of wider structures of abuse of power and dominance that are rampant across the technology industry. What we need is systemic change to tackle business models that profit from exploiting people and their data.

A FATAL COLLISION

Self-driving cars demonstrate the difficulties of translating ethical rules and human judgement into automated decision-making systems. Researchers have struggled to answer questions about who or what should be sacrificed in a crash – the “driver”? An elderly passerby? And what about the systemic problems many algorithms have in identifying black people? The Commission is working on questions like these, but clear rules have proved elusive. What is clear, however, is that simply training algorithms with European data, or applying labels of trustworthiness, will not be enough.

The White Paper’s proposed AI regulation includes online compliance checklists to “limit the burden” on tech companies, and exemptions for “trade secrets”. Tech companies have long exploited ethics to manipulate and exert power over governments, control the AI narrative, and ultimately avoid legal controls. The EU’s failure to propose mandatory measures for all but the most high-risk applications suggests that this industry pressure could be working.

DON'T THROW THE BABY OUT WITH THE BATHWATER

EU laws provide tangible protections for the digital rights of people across Europe. Law goes through rigorous, democratic processes, allowing it to put checks and balances on power, demand standards and safeguards, and ensure due process – all of which codify human rights. In contrast, guidelines on ethics, excellence and trust can all be (ab)used to deflect from concrete rights.

There is an important place for ethics and trust in society. Genuine attempts to make technology fairer, more ecological, transparent or less biased should be applauded. But if we want to tackle harmful business models, algorithmic discrimination and environmental threats, voluntary regulation within an already unequal playing field is simply not enough. Now more than ever, we need the European Commission to cement its global data protection and human rights leadership by ensuring that the regulatory ecosystem for AI is comprehensive, sustainable and accountable to humans (and their rights).