Ethics and artificial intelligence: tomorrow's dilemmas are already here

In an increasingly automated and robotic world, artificial intelligence is already a prominent part of our lives. The medium-term challenge is to create systems that are compatible with our sense of ethics, that respect us, do not endanger us and do not control us or violate our rights.

UNESCO has dubbed it "the self-driving car dilemma". An autonomous (or robotic) car is a vehicle equipped with artificial intelligence that allows it to perceive the environment it travels through and, consequently, move with little or no human intervention. This is not a mere theoretical possibility but a reality: companies like Honda, Waymo, Toyota, Mercedes-Benz and Cruise have commercial prototypes above level 3 of effective autonomy, that is, capable of driving themselves. These vehicles feed on a large amount of contextual information captured by their multiple sensors, which is processed by an autonomous driving system equipped with complex self-learning algorithms that make decisions in real time.

The dilemma arises when the algorithm has to make decisions that carry a potential risk to human life. Imagine, for example, a vehicle with failed brakes heading at high speed towards an intersection where an old woman and a child are standing, forced to decide which of the two to endanger with a sudden turn of the steering wheel. If the driver were a human being, we would call this a moral decision. In the case of an algorithm, it is a technical decision, but one not without ethical implications.


Responsibility begins with design

As Raquel Jorge, an expert in technology and digital agenda at the Elcano Institute, explains, "the algorithm will do what its human programmers have taught it to do". Human beings will have decided which life has more value, that of the old woman or that of the child; or whether it is really a false dilemma, because the only ethically consistent decision would be to seek an intermediate option that reduces the risk to both as much as possible.
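The idea of seeking an intermediate option rather than "choosing a victim" can be sketched as a simple risk-minimisation rule. The following toy example is purely illustrative: the function name, manoeuvre labels and risk figures are invented for this article's intersection scenario, not taken from any real autonomous-driving system.

```python
def least_harmful_manoeuvre(manoeuvres):
    """Return the manoeuvre whose summed risk to all people is lowest.

    `manoeuvres` maps each candidate manoeuvre to per-person risk
    estimates (probabilities of serious injury, between 0 and 1).
    """
    return min(manoeuvres, key=lambda m: sum(manoeuvres[m].values()))


# Invented numbers for the broken-brakes intersection scenario:
# swerving fully either way endangers one person severely, while
# braking hard and splitting the difference lowers the risk to both.
options = {
    "swerve_left": {"old_woman": 0.9, "child": 0.0},
    "swerve_right": {"old_woman": 0.0, "child": 0.9},
    "brake_and_split_the_difference": {"old_woman": 0.3, "child": 0.3},
}

print(least_harmful_manoeuvre(options))  # prints "brake_and_split_the_difference"
```

The point the sketch makes is the one in the text: the "choice" the algorithm appears to make is entirely determined by the objective and the numbers its human designers supplied, and a well-chosen objective can dissolve the dilemma into harm reduction.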

Jorge insists that what we are doing with artificial intelligence is not "outsourcing our ethical debates, something impossible by definition, because ethics is a human product; we are taking them to a new sphere of activities in which the liability will continue to be ours, even if a machine is acting".

In its Recommendation on the Ethics of Artificial Intelligence, adopted in Paris in November 2021, UNESCO set out to define precisely, in the words of its Director-General, Audrey Azoulay, "what are the values and principles that should serve as our guide in building a legal infrastructure that enables the healthy development of AI systems".

Ethical reflection on the ultimate meaning of what is being done "is unavoidable, and must begin from the moment you start working, before writing the first line of code".


We are not far from the three laws of robotics set forth by Isaac Asimov in the 1940s: a robot must not harm a human being, must obey any human command that does not conflict with the first law, and must protect its own existence as long as that is compatible with the previous two laws. Today, UNESCO also calls for avoiding ethnic and gender bias in artificial intelligence programs, respecting individual privacy and dignity, and making equitable use of technology. In the words of Raquel Jorge, "we need an AI that, in addition to not running over old women or children, respects us, recognises us and does not discriminate against us."

This is a formidable challenge. Carmen Jordá Sanz, professor of Criminology at the Camilo José Cela University and head of the Intelligence and Foresight Unit at Prosegur Research, attended a workshop of experts on the governance of artificial intelligence organised by Esglobal last March. "There was a lot of talk about the changes in the geopolitical context that the development of this type of technology is generating", explains Jordá, "but also, in particular, about ethical dilemmas and legal frameworks that take them into account". The discussion workshops featured speakers like Ángel Alonso Arroba, vice-dean of the School of Global and Public Affairs at IE University, and Carmen Colomina, professor at the College of Europe and principal investigator at the Barcelona Centre for International Affairs (CIDOB).

For Jordá, the level of the presentations and their focus on ethical issues shows "that the time has come to face these debates, that minor references to the medical use of AI or the problems posed by security robots or autonomous vehicles are no longer enough". The key to the future, according to Jordá, "is to develop technologies that put people at the centre". Ethical artificial intelligence can only prosper in "ethical societies and ethical companies that give a moral dimension to both coexistence and economic activity".

In other words, "only from countries and private companies with purpose and values can a fully responsible and respectful artificial intelligence emerge". The European Union "is governed by a set of principles and strives to create a legislative framework that ensures compliance with these principles", explains Jordá.

Although legislation can sometimes be "somewhat rigid and not fully adapted to the pace of technological development", it is important to regulate "issues such as data protection, restrictions on the use of biometrics, the correction of gender bias and many other issues that are already on the table and will become even more topical in the coming years". The challenge is "to legislate and regulate efficiently and exhaustively, but without putting a brake on innovation, because artificial intelligence is a very competitive environment in which the United States and China have a substantial advantage over Europe in terms of research and commercial implementation".

In her role as representative of a private company like Prosegur, with a strong technological and disruptive vocation, Jordá stresses that "ethical responsibility lies, first of all, in the design phase". Creators and developers of systems must "think in moral terms and take into account the implications of what they do". Algorithms "are mathematics, and mathematics does not have an ethical dimension; it is the producers of algorithms who must strive to give them one."


The social dimension of technology

For Raquel Jorge, "beyond the economic and geopolitical impact of artificial intelligence, there is its social dimension, which is even more important". The Elcano Institute researcher points out that "Spain is an example at the international level, due to how seriously it is taking these issues". In her opinion, it is very significant that "when the European Union began the process of equipping itself with a new legal framework for artificial intelligence in 2018, Spain was proposed as a pilot country for giving a voice to social and demographic groups that are rarely heard, such as children".

Spain "is one of the most scrupulous countries when it comes to recognising digital rights, such as privacy". It is also a pioneer "for its recognition of sexual diversity", an aspect "that AI systems should increasingly take into account". This is not, in the expert's opinion, an anecdotal detail: "In fact, a very frequent reason for user complaints is that many of the artificial intelligence systems they interact with are still designed by default for a very specific profile, generally middle-aged heterosexual white men. Correcting those biases so that no one feels left out is also an ethical imperative".

In the medium term, the great pending debates are, in Jorge's opinion, "in the first place, that of transparency and ethical traceability, i.e. the need to clearly explain to people what is done and why it is done, because the criteria applied by artificial intelligence are not always intelligible to people, and they should be". Then, the debate "of the limits to data accumulation and how to make that information compatible with the respect for privacy". Also "fairness: artificial intelligence should not represent a new qualitative disadvantage for the most vulnerable". And, finally, "we must be sure that AI is not going to be under any circumstances a system of social control, that it is not going to restrict our freedoms or violate our rights".

These are essential debates that will end up shaping the kind of societies we will live in over the medium term. And they go far beyond asking ourselves whom an autonomous vehicle should or should not hit.