Artificial Intelligence (AI) is progressing rapidly. And if we are not careful, it may well become a Trojan horse for a single premise: the imposition of a universal approach to ethical decision making.

The quest for this Holy Grail, a universal code of ethics for AI, has left in its wake a remarkable, if not worrying, number of projects aiming to establish a corpus of ethical standards to frame its development. The intention is laudable. But it is vital that we question the basis on which this corpus is established, and the fast-growing number of initiatives calling for such a tool makes that questioning all the more urgent. We must ask two fundamental questions: is it possible to create one single tool for everything, and is there a genuine, widespread desire to create one?

A debate dominated by the West

Yannick Meneceur illustrates the scale of this call for an ethics code by listing 126 initiatives[1] in his book L’intelligence artificielle en procès. In its AI Ethics Guidelines Global Inventory, the project AlgorithmWatch identifies 166.[2] And a study run by a team from ETH Zurich found 1,180 codes ‘pertaining to ethical principles’.[3]

This perceived need for a standard tool for AI ethics should make us question both its pertinence and the motivations behind the growth of such initiatives.

The most worrying thing about these reports is that they are essentially published by a small number of people in a small number of countries. In the meta-analysis produced by ETH Zurich, covering 84 documents, the authors highlight that ‘In terms of geographic distribution, data show a significant representation of more economically developed countries (MEDC), with the USA (n=20; 23.8%) and the UK (n=14; 16.7%) together accounting for more than a third of all ethical AI principles’, while ‘African and South-American countries are not represented independently from international or supra-national organisations’.[4]

In other words, Western countries are leading when it comes to ethical decision making. If we add the weight of the EU, which is clearly asserting its ambition to establish itself as a normative actor, the West accounts for 63% (53 documents) of the codes relating to the ethics of AI. According to the authors of ‘The global landscape of AI ethics guidelines’, this over-representation indicates a lack of global equality in the treatment of AI and shows that the most economically advanced countries are shaping the debate while ‘neglecting local knowledge, cultural pluralism and global fairness’.[5]

The scope of the debate is further limited by the fact that it is monopolised by a small circle of ‘those in the know’ concentrated in the private, public and academic spheres. Even within Western countries, it is clear that the debate is practically closed to the public.

The result of this Western dominance in the field of AI ethics is that the subject is approached exclusively through continental philosophy and its three theories of ethics: virtue ethics, deontology and consequentialism. On closer inspection, we see a real predominance of the deontological approach. This is problematic, as it simplifies Kantian thought to the extreme, reducing it to a low-cost ethics programme with a top-down set of rules.

As we can see, Western thought occupies the space in ethics opened by AI and in turn denies cultural diversity, the variety of normative perspectives and, ultimately, the true complexity of ethical analysis.

In fact, the proliferation of codes, ethical charters and regulations applied to AI illustrates the impasse where we find ourselves when it comes to attempting to reach a consensus on universal standards.

Opening up to ethical plurality

We need to open the discussion about the ethical rules of AI up to different cultures and, therefore, different philosophical perspectives. Without this, AI could very quickly become an instrument of intellectual domination and modern imperialism. This, in turn, would stand in the way of any chance we have to establish a universally accepted set of norms.

Beyond the mere question of geographical representation, AI ethics should be thought through a variety of philosophies and principles, and those making decisions should refrain from relying on prior judgments. The point of having an ethics standard in AI is for it to play its part in separating the acceptable from the unacceptable without a predetermined bias of Good or Evil. And this can only truly happen if diverse cultural identities and their philosophies are taken into account.

The negation of cultural diversity in this area is exemplified by China, whose values are too often overlooked in the West. The point is not to take sides and decide whether or not to adhere to Chinese positions; it is to understand and analyse them. As the second most powerful country in the world in terms of AI, and with its 1.4 billion inhabitants, China will have a seat at the table when it comes to discussions on AI. So it is important to know China’s long philosophical history in order to understand Chinese perspectives and thus be able to interact with them constructively.

As Anne Cheng wrote in La pensée en Chine aujourd’hui in 2007, ‘the first thing people feel when they hear the adjective “Chinese” next to the word “philosophy” is discomfort. It can be a very subtle feeling but it is certainly there’.[6] This is still the case, not only in the context of AI but in terms of geopolitics too. Cheng continues: ‘many of our contemporaries maintain the impression that the Chinese are not part of the conversation because of their submission to an autocratic regime’.[7] These preconceptions are what stand in the way of China playing its part in AI discussions.

The controversial Middle Kingdom is not alone in being subjected to this ostracisation. Other countries and cultures are completely invisible in this debate, which should by all accounts be universal.

Latin America is also left on the side-lines despite being affected by the decisions that are made. Julio Pertuzé, assistant professor at the Pontificia Universidad Católica de Chile, writes that ‘AI ethics discussions are dominated by other voices, especially Europe’.[8] Based on the observation that ‘while the impact of AI is global, its debate has been dominated by a very restricted set of actors’,[9] the Centro de Estudios en Tecnología y Sociedad of the University of San Andrés in Argentina launched the GuIA initiative in 2019. It was created to strengthen ‘a space where regional researchers can discuss the ethics, principles, norms and policies of Artificial Intelligence systems and the particular problems of Latin America and the Caribbean’.[10] Even if ‘the issue of AI ethics is at an early stage in the region and there is still not enough information available to comprehensively assess it’,[11] there is no lack of countries in Latin America and the Caribbean that wish to be involved.

India, too, is not to be overlooked. Its presence in technology, though often regarded superficially and marked by the country’s colonial past, is growing, and it is simultaneously putting its own AI strategy into place.[12]

Looking further, African philosophies and wisdom traditions, such as Ubuntu, need their place in the conversation. The continent’s ethnophilosophy, with its own thematic focuses marked by the colonial experience[13] and its cultural nationalism,[14] needs to be integrated into our thinking about ethics in AI. The African continent is rich in intellectual history, experiences, relationships to humans and to nature, and cultural diversity, all of which is essential for debates on the ethics of AI. Just as with China, ‘it is as though the adjective “African” covers an exclusionary particularism’.[15] African philosophy, like others, can open people up to new perspectives and help them to question their convictions. As Alassane Ndaw rightly asserts, ‘being a philosopher in Africa is about understanding that no one has a monopoly on philosophy’.[16] This is true of philosophy in general, no matter where it comes from.

As for the Muslim world and Islam’s place in ethical thought, here again preconceptions prohibit its acceptance, preventing this centuries-old religion, which encompasses incredible cultural and intellectual diversity, from contributing. The reduction of Islam to its geopolitical dimension and to marginal Islamist currents fosters a global rejection, and so this extraordinary culture, which would enrich the debate on the ethics of AI, is unable to take part.

Some have already understood the importance of extricating themselves from their own convictions. In Canada, recognition of indigenous cultures is emerging in the field of AI,[17] and in New Zealand, Māori culture is being considered in recommendations related to AI ethics.[18] These are two examples that should be followed.

In Conclusion

Cultural diversity, its particularisms and the different perspectives outlined in broad strokes are all elements we need to consider in the construction of ethics in AI. Without preconceptions. Without prejudice. Without value judgment. We have to learn to listen in order to depolarise and depoliticise the debate. And in doing so, we will be able to open it up to more perspectives.

Currently, we are at an impasse because we have not addressed this. And because, despite having good intentions, we impose a Western vision onto the rest of the world. By pushing what we fear onto other people, as is the nature of human beings, we assume our anxieties to be universal. In effect, we are providing solutions to problems that affect only a minority of people as though they affected everybody equally, while neglecting the very specific problems faced by others. This is the crux of the issue: the universal. The concept has become an ideology, one that claims to abolish cultural differences, refuses diversity and, today, borders on tyranny.

Indifference to others, an indifference which often borders on hostility, is the natural companion of the reductive and often mocking forms of language that Edward Saïd denounced.[19] In fact, denying the difference of others is a way of compensating for our own fragilities and doubts.

We are crying out for universality of values whilst simultaneously praising cultural diversity. We are protesting against biases and discrimination but steering clear of ideas that we cannot or do not want to understand. While condemning Chinese or American imperialisms, we ourselves are imposing our own ethical empire onto the rest of the world. In other words, we do to others that which we do not want done to ourselves.

In light of this, the Observatoire Éthique & Intelligence Artificielle of the Institut Sapiens has decided to devote the coming year to an in-depth reflection on ethical multiculturalism and the ethical regulation of AI. This will be done alongside several partners, such as the Illinois Institute of Technology’s Center for the Study of Ethics in the Professions; the Observatorio del impacto social y ético de la inteligencia artificial; the Artificial Intelligence Society Bahrain; the Institut Français des Études Académiques; INDIAai; the Indian Society of Artificial Intelligence and Law; the Université Mohammed Premier in Oujda; and many others from Latin America, Asia, Africa and the Middle East. The Institut Sapiens will produce a report on the importance of cultural pluralism in the evaluation of ethics in artificial intelligence. Through this report, as well as publications and events, the group is setting itself the goal of expanding the field of possibilities for ethics in AI. Without ostracising any perspective, it will expand the network of contributions from the cultures that make up our humanity.



* This commentary was first published at L’Institut Sapiens on January 18, 2021.


Emmanuel R. Goffi

Emmanuel R. Goffi is an AI philosopher and the Director of the Observatoire Éthique & Intelligence Artificielle of the Institut Sapiens. Before this, he served 27 years in the French Air Force. He holds a doctorate in political science from Sciences Po Paris and is professor of AI ethics at aivancity, School for Technology, Business and Society, Paris-Cachan. He is also a research associate at the Centre for Defence and Security Studies at the University of Manitoba, Winnipeg, in Canada.

Emmanuel has taught and researched at universities in France and Canada and regularly speaks at conferences as well as in the press. He published The French Armies Facing Morality: A Reflection at the Heart of Modern Conflicts (Paris: L’Harmattan, 2011) and coordinated the reference book Aerial Drones: Past, Present and Future: A Global Approach (Paris: La Documentation française, coll. Stratégie Aérospatiale, 2013), as well as many articles and chapters.


[1] Yannick Meneceur, L’intelligence artificielle en procès : plaidoyer pour une réglementation internationale et européenne, Bruylant, 2020, p. 201.

[2] AI Ethics Guidelines Global Inventory, AlgorithmWatch.

[3] Anna Jobin, Marcello Ienca, Effy Vayena, ‘The global landscape of AI ethics guidelines’, Nature Machine Intelligence, Vol. 1, 2019, p. 391.

[4] Idem.

[5] Ibid., p. 396.

[6] Anne Cheng, ‘Les tribulations de la « philosophie chinoise » en Chine’, in La pensée en Chine aujourd’hui, Paris, Gallimard, 2007, p. 156-160.

[7] Anne Cheng, Introduction, op. cit., p. 11-12.

[8] Julio Pertuzé, cited in The global AI agenda: Latin America, MIT Technology Review Insights, 2020, p. 6

[9] Norberto Andrade, Promoting AI ethics research in Latin America and the Caribbean, Facebook Research blog, July 2 2020

[10] CETyS | GuIA.ia, Artificial Intelligence in Latin America and the Caribbean: Ethics, Governance and Policies, GuIA.ia.

[11] Constanza Gómez Mont, Claudia May Del Pozo, Cristina Martínez Pinto, Ana Victoria Martín de Campo Alcocer, Artificial Intelligence for Social Good in Latin America and the Caribbean: The Regional Landscape and 12 Country Snapshots, Inter-American Development Bank, fAIr LAC intuitive report, July 2020, p.34.

[12] See in particular Avik Sarkar, Ashish Nayan, Kartikeya Asthana, National Strategy for Artificial Intelligence #AIFORALL, Discussion Paper, NITI Aayog, June 2018; Abhivardhan, Dr Ritu Agarwal, AI Ethics in a Multicultural India: Ethnocentric or Perplexed? A Background Analysis, Discussion Paper, Indian Society of Artificial Intelligence and Law, 2020.

[13] Jean-Godefroy Bidima, ‘Philosophies, démocraties et pratiques : à la recherche d’un « universel latéral »’, Critique, Tome LXVII, N° 771-772 « Philosopher en Afrique », August-September 2011, p. 672-686.

[14] Chike Jeffers, ‘Kwasi Wiredu et la question du nationalisme culturel’, Critique, Tome LXVII, N° 771-772 « Philosopher en Afrique », August-September 2011, p. 639-649.

[15] Séverine Kodjo-Grandvaux, ‘Vous avez dit « philosophie africaine »’, Critique, Tome LXVII, N° 771-772 « Philosopher en Afrique », August-September 2011, p. 613.

[16] Alassane Ndaw, « Philosopher en Afrique, c’est comprendre que nul n’a le monopole de la philosophie », interview conducted by Rammatoulaye Diagne-Mbengue, Critique, Tome LXVII, N° 771-772 « Philosopher en Afrique », August-September 2011, p. 625.

[17] Karina Kesserwan, How Indigenous Knowledge Shapes our View of AI? Policy Options, February 16, 2018.

[18] See Karaitiana Taiuru, Treaty of Waitangi/Te Tiriti and Māori Ethics Guidelines for: AI, Algorithms, Data and IOT, May 04, 2020 or The Algorithm charter for Aotearoa New Zealand, New Zealand Government, July 2020.

[19] Edward W. Saïd, L’orientalisme : l’Orient créé par l’Occident, Paris, Editions du Seuil, 2005 [1978].

translated from French by Béatrice Vanes
