
Emmanuel R. Goffi, PhD and Aco Momcilovic, EMBA

 

AI may benefit humanity or threaten it in many ways, across fields such as education, the environment, health, defense, transportation, and space exploration.

To avoid the potential excesses of AI and to benefit as much as possible from its advantages, the technology must be governed by normative frameworks. Yet setting legal norms is a difficult and time-consuming process.

Ethics is therefore seen as a convenient and acceptable alternative to law since, unlike laws, it is flexible, easily and quickly adjustable, and less constraining than formal rules.

The number of ethics codes issued around the world demonstrates the desire to regulate AI while avoiding formal legal constraints. This tendency to use ethics as a tool to escape binding laws is referred to as ethics washing, or cosm-ethics.

Too many rules kill rules

Since 2017, the number of codes pertaining to AI ethics has increased at a fast pace, reaching 1,180 documents according to a meta-study released by ETH Zurich.

A report by Tim Dutton of the Canadian Institute for Advanced Research examined the AI strategies of 18 countries, showing that they did not “share the same strategic priorities” and that “governments are taking very different approaches to promoting the development of the same technology”.

With so many different documents and goals leading to different strategies, it seems impossible to elaborate a set of shared ethical standards. Dutton’s report even shows that ethics does not carry the same weight with every government and that in some cases, such as the strategies of Japan, South Korea, and Taiwan, it is entirely absent. Only Sweden and the European Union put ethics at the top of their priorities. For most other stakeholders, ethics comes after research and industrial strategy.

What can we learn from these considerations about AI ethics codes? First, stakeholders have so far been unable to agree on shared values on which to build a global ethical standard. Second, ethics is not a top priority in every government’s AI strategy. Third, each country is pursuing specific interests and setting ethical standards accordingly.

So questions remain: what exactly are all these codes aiming at? Is their multiplication effective or counterproductive in regulating AI?

The EU’s deontological stance: a moral suicide

In the race for leadership in AI, the stakes are high and the struggle is harsh. With the US and China leading the sector, being competitive requires some comparative advantage. By positioning itself as a normative actor, the European Union seems to have found a way to enter the competition, even though it is lagging far behind many of its competitors.

The European Union has thus chosen to invest its energy in demonstrating that, unlike its rivals, it is willing to ensure that the development and use of AI are framed by ethical standards in the absence of a legal framework.

In June 2018, the European Commission set up a High-Level Expert Group on Artificial Intelligence, which in April 2019 issued a document entitled Ethics Guidelines for Trustworthy AI. The document lists seven principles aimed at “achieving Trustworthy AI”, “in the service of humanity and the common good, with the goal of improving human welfare and freedom”.

In other words, the European Union demonstrates a strong will to develop a responsible approach to AI, making sure this technology will not become a threat to human beings.

This stance differentiates the Union from its competitors, for it confers on it the status of a normative power focused on ethical AI.

At the same time, one must remember that this posture comes within the scope of fierce competition driven by the promise of huge economic benefits.

The European Union, like every other runner in the AI race, is perfectly aware of the benefits it could derive from this technology. It is also perfectly aware that it cannot compete with the US or China. Hence the question: is the European Union positioning itself as a normative power because it firmly believes in the importance of AI ethics, or because of the competitive advantage this posture procures? Is trustworthy AI, as Thomas Metzinger put it, “a marketing narrative invented by industry, a bedtime story for tomorrow’s customers”?

After all, it is clearly stated in the Guidelines that “Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems”. How should we interpret that?

It is worth noting that in its White Paper on AI, the EU stated that “Europe is well placed to benefit from the potential of AI, not only as a user but also as a creator and a producer of this technology”, that the Union “should leverage its strengths to expand its position” and seize “the opportunity ahead” in a “data-agile economy and to become a world leader in this area”.

The whole White Paper is in fact built around this focus on competitiveness, supported by the establishment of an ecosystem of trust.

The normative logorrhoea of the European Union

In April 2021, the European Union released the draft proposal of its “Artificial Intelligence Act”, adding one more document to the list of guidelines, recommendations, regulations, reports, and other resolutions. In this “Proposal for a Regulation laying down harmonized rules on artificial intelligence”, the EU adds a new layer of complexity to existing norms by introducing the notion of a risk-based approach. While the intention is good, this new document makes the operationalization of EU normative requirements even more complex.

Many national AI associations (such as CroaAI) have raised concerns that companies working in the practical world, with pragmatic objectives, will be even more lost in translation. The multiplication of ill-defined words, notions, and concepts, artificially tied to a superficial ethical narrative, tends to blur the EU’s expectations and real goals rather than make them clearer and easier to apply. The concern is that this additional regulation will drive a new wedge between companies with differing capabilities and differing potential to cope with new layers of administration and complexity.

The Union itself seems lost in its own regulatory tools and wording, trying to explain earlier tools with new ones that will in turn require further explanation in yet more documents. Certification agencies are struggling to translate these regulations into practical processes. Companies are petrified before this wall of meaningless words and ideas. The EU is killing its own market by imposing impenetrable rules, while other stakeholders, public and private, develop at a fast pace, free from legal constraints.

Normative discrimination might not happen only at the company level. It can also apply to nations with different capabilities, possibilities, agendas, or resources. This would create unfair discrimination between countries based on their National AI Capital, and it could lead to the denial of diversity and of existing differences in norms and value perspectives within the EU. We may then ask the following question: is the current EU regulatory process going to foster nations’ opportunities to fully benefit from this technology, or to discriminate against them?

At the end of the day, the ethical stance of the EU is anything but clear. While asserting the importance of an ethical framework for AI, the European Union has developed non-constraining tools, such as the General Data Protection Regulation (GDPR), and principles that lend themselves to all kinds of interpretation and are almost impossible to understand, operationalize, and therefore implement. On a practical level, is it even reasonable to assume that all countries have enough human capital, in the sense of enough people educated in AI, technology, and regulation, to staff the national regulatory agencies that the Act plans to establish? What will be the consequences for those who lack such experts? Not developing AI? Some kind of pro forma procedure? Something else?

Language is the vector of perceptions and ideas. As such, it seems that the EU is slowly getting tangled up in ethereal verbiage, losing sight of any practical and operationalizable end.

As one can see, the reality of words may differ from the reality of actions. Between pretending to be and being, between ethics and cosm-ethics, there is a very thin and porous line.

Is the European Union a real normative actor willing to frame AI with ethical principles? Or is it a mere competitor doing whatever it takes to get its lion’s share? Is the EU’s strategy in AI relevant and efficient? Are we not killing our own momentum instead of boosting it in a highly competitive field?

It seems that the EU is now stuck in its own ethical swamp. This may prove costly for our economy and our companies, which stand to lose the race for AI dominance.

It is time to rethink our strategy and our ethical stance, moving away from low-cost deontology toward a subtle balance between genuine deontology and consequentialism, within a framework of virtuous behavior.

 

The authors, Emmanuel R. Goffi (PhD) and Aco Momcilovic (EMBA), are Co-Founders and Co-Directors of the Global AI Ethics Institute.

This commentary was first published on Medium on May 9, 2021.
