The long-term impact of artificial intelligence (AI) on international security will be unprecedented. A fierce competition for global technological supremacy is already under way. Defense projects by the military powers seek to secure strategic advantages on the battlefield, and perceived rivalry only increases investment in research and development. Yet despite the attention paid to a potential major conflict between these powers, serious consequences will be felt sooner elsewhere: in the Global South. We should therefore ask how developing countries perceive the accelerating militarization of AI.

The effects of an AI arms race

Without any form of control, sophisticated, fully autonomous AI-powered weapons will eventually be developed and become a threat to strategic stability.[1] Their startling speed can cause dangerous escalation, friendly fire, or unwanted clashes in the event of false alarms or accidents. These weapons will increase uncertainty, given their lack of flexibility in changing circumstances and the possibility that they behave unpredictably.

An AI arms race can push states to neglect moral, legal, or security criteria in order to achieve faster results and outperform their rivals. The risks are serious, urgent, and worrying. Lethal autonomous weapons systems (LAWS) could perform missions independently, select targets, and use force without human intervention. Calculation errors can lead to unforeseen outcomes, and it will be extremely difficult to predict what self-learning code can do on the ground if left to its own devices. There are many examples of algorithms, trained through neural network learning, that reach embarrassing conclusions even their creators could not have anticipated.[2]

Machines can also suffer technical faults (malfunctions, breakdowns, or programming errors) or be jammed by adversarial cyber-attacks, with unforeseeable consequences. Worse still, in a scenario of uncontrolled proliferation, these weapons could fall into hostile hands: non-state actors, insurgents, and extremist groups.

The Global South: testing ground and target for military AI

The technological gap in military capabilities between the great powers and the countries of the Global South may widen much further in the coming decades. Few developing countries will be able to pursue a catch-up strategy and build the technological leverage needed for a credible defense.

Power disparities have of course always existed, but sometimes such an imbalance becomes so structurally great that it generates lasting and disproportionate effects. A classic example is the advent of the nuclear age, which redefined the role of conventional warfare. Nuclear weapons are so destructive that serious constraints usually weigh against their use. One of the problems with the militarization of AI systems is the urge to deploy them to achieve strategic or tactical advantages, in the belief that there will always be a menu of options for every occasion and that their use will ultimately be justified by military necessity.

Numerous studies examine international competition, deterrence, and geopolitical disputes involving military powers, in particular the United States, China, and Russia. But the most immediate risk does not lie in high-intensity wars between the main actors. While the AI arms race is often seen as a global struggle for world hegemony, its fallout can seriously affect theaters of war far from the great powers' capitals: low-intensity and small-scale intra-state conflagrations, civil or proxy wars, and counterinsurgency or urban warfare situations.

Elite forces will be among the first military units to be equipped with AI. It so happens that covert and special operations frequently take place in troubled regions, outside traditional rules of engagement. LAWS would thus likely be deployed for the first time in theaters of conflict in developing countries, as a proving ground, with the associated risks of unexpected fatal engagements, collateral damage, and asymmetric encounters between machines and biological adversaries.

Indeed, the use of remote and depersonalized warfare will continue unabated as long as armed forces seek to protect their own personnel. Autonomous weapons reinforce the asymmetry by creating a physical separation that further protects their commanders and operators.[3] Certain countries could deploy autonomous weapons in myriad missions abroad, since the risk of casualties on their side would be reduced. A conflict between great powers seems distant on the horizon, but the unbalanced nature of robotic warfare would increase the likelihood of machines killing people, whether soldiers or civilians, in poor countries.

Autonomous weapons and the argument of benevolence

Some argue that autonomous weapons have the advantage of being more precise in the use of force, thereby contributing to making war more “humane” – by protecting civilians, for example. Machines, they say, would be faster and have no emotion, fear, fatigue, or desire for revenge, unlike human fighters. We can call this way of thinking the argument of benevolence. Algorithmic precision could, theoretically, make weapons more discriminating and less likely to cause collateral damage or violate the laws of war.

Nonetheless, it is not clear whether AI systems will display such remarkable competence in volatile environments. How will they react to enemy attacks designed to trick the machine into making fatal mistakes and hitting the wrong targets? Likewise, algorithmic bias can lead to false positives and to the death of innocent people or the destruction of objects and places protected by the Geneva Conventions. We are still a long way from building an AI flexible enough to understand the larger context of real-life situations and to reliably adapt its behavior before making decisions that put human lives at risk.

It is often the moral pain of killing that can call into question the horrors of war, increase its political cost, and serve as a constraint on the excesses and suffering that international humanitarian law seeks to prevent. The dehumanization of war opens the door to accepting the banality of evil as a price to pay for greater “efficiency”.

Remote military engagements often cause significant harm to civilians, even though they are presented as primarily “surgical”. The commander initiating a military action will normally calculate the degree of exposure of their own forces in order to determine the acceptable level of risk. If the attacker expects no casualties, the only collateral damage will be the civilian fatalities that the initiator considered a priori to be justified. In this context, justifying a military intervention by claiming that it “reduces risk” is necessarily a one-sided discourse, open to challenge by more critical interpretations.

It is for this reason that, from the perspective of the Global South, the benevolent argument that AI-enabled weapons “save lives” may also appear to be a justification for interventionism. Then again, dehumanized warfare could encourage political leaders to authorize the use of force to settle disputes far from their national territories, to the detriment of negotiation and diplomacy, since their troops would be safe anyway.

Do not delegate life and death decisions to machines

United Nations Secretary-General António Guterres has repeatedly warned of the risks associated with the weaponization of AI. The prospect of machines with the power to kill without human intervention is “politically unacceptable and morally repugnant”, he said.[4] An electrical engineer by training, Guterres has urged Member States to use the UN as a platform to negotiate issues crucial to the future of humanity. He convened the High-Level Panel on Digital Cooperation, whose 2019 report included recommendation 3C on AI and autonomous intelligent systems, stressing the foundational principle that “life and death decisions should not be delegated to machines”.[5]

As a dual-use technology, AI must be available for peaceful purposes in all countries, for the benefit of all humanity. Even in war, there are rules to be observed. International humanitarian law is very clear in stating that the right of parties to a conflict to choose methods or means of warfare “is not unlimited”, in accordance with Article 35 of Protocol I additional to the Geneva Conventions of 1949. It is also prohibited to employ weapons, projectiles, and material and methods of warfare of a nature to cause “superfluous injury or unnecessary suffering”.[6]

There are serious doubts about the ability of autonomous weapons to comply with the requirements of jus in bello. They would not be able to distinguish between combatants and non-combatants, or to understand the context well enough to assess whether a military action is proportionate. Moreover, LAWS could not decide for themselves what is truly necessary in military terms, as this assessment requires a political judgment that belongs only to humans.

Although war robots could become increasingly efficient, delegating the moral burden of killing to machines poses fundamental ethical problems. Ethics is bound up with human values and the organization of societies. Moral decisions belong only to the individual concerned and cannot be delegated to others. This is why outsourcing ethics to a machine that is not anchored in a network of human dialogue is contrary to the basic principles of morality. Even if AI one day develops fully fledged moral agency, turning moral judgments over to computer software would still mean transferring ethics from a person to an external entity.[7]

Towards a new era of algorithmic hegemony?

In discussions of AI governance, the technology's military dimension and dilemmas should never be forgotten. Although the most common justification for investing in military AI is to outperform major rivals, autonomous weapons seem more likely to be deployed first in the Global South. As previously stated, if LAWS are considered tactically effective for specific missions, the threshold for using them will be considerably lower, posing a direct threat to countries that lack the means to deter coercion and aggression.

Herein lies a real danger for developing countries: the deeply disturbing prospect of an asymmetrical fight between machines and humans. In such situations, human forces might stand little chance and be decisively defeated. In terms of the global balance of power, a new era of algorithmic hegemony and digital neocolonialism could emerge if large-scale technological asymmetries, fueled by AI, reach a level comparable to the overwhelming military superiority once enjoyed by European powers over their colonial possessions around the world.

We often talk about Western domination in the field of AI ethics, and rightly so, but we must also talk about the domination of the strategic debate by the most heavily armed. We often call for more cultural diversity in international AI forums, but we must also call, a fortiori, for more attention to the perspective of the countries least favored in military capacity, the potential victims of killer algorithms. Do autonomous weapons reduce risks? Ask the targets.

* Diplomat, PhD in International Relations, and researcher on artificial intelligence and global governance. Currently based in Conakry, Republic of Guinea. The opinions expressed here are those of the author. Email: egarcia.virtual@gmail.com

[1] Brabant, Stan. Robots tueurs : bientôt opérationnels ? [Killer robots: operational soon?]. Brussels: Groupe de recherche et d’information sur la paix et la sécurité (GRIP), 29 March 2021, https://grip.org/robots-tueurs-bientot-operationnels

[2] Making the case: the dangers of killer robots and the need for a preemptive ban. Human Rights Watch, International Human Rights Clinic at Harvard Law School, 2016, https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban

[3] Bode, Ingvild and Huelss, Hendrik. The future of remote warfare? Artificial intelligence, weapons systems and human control, in: McKay, Alasdair et al. (eds.). Remote warfare: interdisciplinary perspectives. Bristol: E-International Relations Publishing, 2021, p. 219, https://www.e-ir.info

[4] “Le chef de l’ONU exhorte à interdire les armes autonomes qui tuent” [UN chief urges ban on autonomous weapons that kill], ONU Info, 25 March 2019, https://news.un.org/fr/story/2019/03/1039521

[5] The age of digital interdependence: report of the UN Secretary-General’s High-Level Panel on Digital Cooperation, New York, 2019, https://digitalcooperation.org/report

[6] Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 1977, https://ihl-databases.icrc.org/ihl

[7] Boddington, Paula. Towards a code of ethics for artificial intelligence. Cham: Springer, 2017, p. 90.
