Predicting Human Lives?

New Regulations for AI systems in Europe

November 2020 in Opinion

Rita Gsenger

“Humanity’s story had been improvised, now it was planned, years in advance, for a time the sun and moon aligned, we brought order from chaos.” (Serac, Westworld, Season 3, Episode 5, HBO)

Rehoboam, named after the first king of Judah, the son and successor of King Solomon, who was said to be the wisest of all human beings, predicts human lives and dictates what individuals will do without their knowledge in HBO's adaptation of "Westworld". The giant quantum computer looks like a black globe covered in red, flickering lights, and it stands in the entrance hall of the company that owns it, where the public and schoolchildren can visit and look at it, to see that it is not such a dangerous mythical creature after all. Nobody except its creators understands how the system works; it structures and shapes society while controlling its own system as well. Rehoboam analyses millions of files on individuals, predicting the course of their lives, including their precise time of death. The citizens of this world do not know that their lives are shaped and controlled by the predictions of an AI system, which aims to establish and maintain order in a society that was bound to destroy itself and had been saved by a god created by a human.

Not unlike in contemporary science fiction, the increasing deployment of AI technologies is influencing not only our online but, more and more, our offline lives as well. The unrest and resistance against measures to fight the Covid-19 pandemic have shown that online mobilisation, and the use of algorithms that push the content shared, viewed and liked the most, can create difficult situations with consequences for entire societies, bringing insecurities, distrust and fears to the surface and into people's actions. The conversation has thus shifted to newly strengthened science scepticism and conspiracy theories with real-life consequences, making the need for human-centric AI systems and digital literacy more palpable.

Algorithmic decision-making (ADM) systems play an increasingly important role among AI technologies, as they are often used to make decisions and fulfil roles previously performed by human beings, for instance in employment, credit scoring, education and sentencing. Predictive policing and the prediction of recidivism rates are to some extent carried out by AI systems in all US states. Many countries in Europe are adopting various ADM technologies, also to fight the Covid-19 pandemic, as summarised by AlgorithmWatch.

The European Commission, and especially DG CONNECT, is currently drafting various pieces of legislation for the end of this year and next year, such as the Digital Services Act regulating social media. These follow the broader themes and priorities outlined in the White Paper on AI and the European Data Strategy. In a leaked position paper, countries such as Sweden, France and Denmark advocate a softer approach relying on recommendations and a voluntary labelling scheme for AI systems, intended to increase visibility for European citizens, businesses and administrations and enable them to make an ethical choice. According to the position paper, this would give companies incentives to go beyond the law in establishing trustworthy AI solutions on the one hand, and competitive advantages on the other. Germany, however, would prefer a stricter approach to legislating AI technologies, as trustworthiness in AI systems would then be established by the legislation itself.

In a recent online event on AI and racism, European Commission representatives discussed AI and structural racism. Participants agreed that biased data are a problem when employing these technologies and that racialised communities are adversely affected by them. For instance, Sarah Chander, a senior policy adviser at European Digital Rights (EDRi), expressed concern about a cross-national database with an automated exchange of information on criminal offences, including DNA information, which following the Prüm decision would be extended to facial recognition data. The database is criticised because of the possibility of false positives, which might lead investigations in the wrong direction. Anthony Whelan, a digital policy adviser in the Cabinet of President von der Leyen, however, did not deem it controversial to use facial recognition data to identify individuals, though he did acknowledge that training data need to be free of bias and sufficiently representative.

How far the proposed legislation by the European Commission will go and how it will address the many issues raised by these technologies remains unclear. One might hope that European values and the privacy of citizens and other concerned parties will be respected and will guide these debates, so that a society governed by Rehoboam remains a dystopian fiction.