New Report on Artificial Intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts

13 May 2024

Our latest report, ‘Artificial Intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts: Current Developments and Potential Implications’, highlights and examines some of the themes that arose during two expert workshops on the role of AI-based decision support systems (AI DSS) in decision-making on the use of force in armed conflicts. Drafted by Anna Rosalie Greipl, Geneva Academy Researcher, with contributions from Neil Davison and Georgia Hinds of the International Committee of the Red Cross (ICRC), the report aims to provide a preliminary understanding of the challenges and risks related to such use of AI DSS and to explore what measures may need to be implemented in their design and use to mitigate risks to those affected by armed conflict.

Anna Rosalie Greipl explained, ‘The introduction of AI into DSS for military decision-making on the use of force in armed conflicts adds a new dimension to the existing challenges relating to non-AI-based DSS and raises legal and conceptual issues distinct from those raised by Lethal Autonomous Weapon Systems. These systems carry the potential to reduce the human judgment involved in military decision-making on the use of force in armed conflicts in a manner that raises serious humanitarian, legal, and ethical questions.’

Georgia Hinds went on to underscore the critical role of human judgment in addressing these concerns. ‘It’s crucial to preserve human judgment when it comes to decisions on the use of force in armed conflicts. And this will have implications for the way these systems are designed and used, particularly given what we already know about how humans interact with machines, and the limitations of these systems themselves.’

THE MILITARY APPLICATION OF AI DSS IN DECISIONS ON THE USE OF FORCE DEMANDS FURTHER MEASURES AND CONSTRAINTS

The report shows that certain technical limitations of AI DSS may be insurmountable and that many existing challenges of human-AI interaction are likely to persist. It may therefore be necessary to adopt additional measures and constraints on the use of AI DSS in military decision-making on the use of force, to reduce risks for people affected by armed conflicts and to facilitate compliance with international humanitarian law (IHL). To that end, further research and dialogue will be needed to better understand the precise measures and constraints that may be required for the design and use of AI DSS. Further analysis will also be needed to identify the applications of AI DSS in this context that have the greatest impact on decisions on the use of force.

This report is part of the joint initiative ‘Digitalization of Conflict: Humanitarian Impact and Legal Protection’ between the ICRC and the Swiss Chair of International Humanitarian Law at the Geneva Academy of International Humanitarian Law and Human Rights. Other themes addressed under the project include the societal risks and humanitarian impact of cyber operations and, more recently, the legal implications of the rising civilian involvement in cyber operations.

MORE ON THIS THEMATIC AREA

LLM Students Plead on IHL Violations in Gaza and the West Bank

24 April 2024

Half of the class of our LLM in International Humanitarian Law and Human Rights pleaded on 20 April on the current armed conflict in and around Gaza.

Our Experts and Resources on Israel/Palestine

1 March 2024

Discover our resources and what our experts and alumni say about the current situation in Israel and Palestine, with regular updates to include new events, articles, podcasts and comments.

IHL Expert Pool

Started in January 2022

The IHL Expert Pool (IHL-EP) works to strengthen the capacity of human rights mechanisms to incorporate IHL into their work in an effective and comprehensive manner. In so doing, it aims to address the normative and practical challenges that human rights bodies encounter when dealing with cases in which IHL applies.

Human Rights in a Digitalized World: Mapping Risk, Strengthening Regulation and Promoting the Development of International Human Rights Law

Started in August 2023

To unpack the challenges raised by artificial intelligence, this project will target two emerging and under-researched areas: digital military technologies and neurotechnology.

Annual Report 2023

Published in July 2024

Between Science-Fact and Science-Fiction: Innovation and Ethics in Neurotechnology

Published in May 2024

Milena Costas, Timo Istace