New Report on Artificial Intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts

13 May 2024

Our latest report, ‘Artificial Intelligence And Related Technologies In Military Decision-Making On The Use Of Force In Armed Conflicts: Current Developments And Potential Implications’, highlights and examines some of the themes that arose during two expert workshops on the role of AI-based decision support systems (AI DSS) in decision-making on the use of force in armed conflicts. Drafted by Anna Rosalie Greipl, Geneva Academy Researcher, with contributions from Neil Davison and Georgia Hinds, International Committee of the Red Cross (ICRC), the report aims to provide a preliminary understanding of the challenges and risks related to such use of AI DSS and to explore what measures may need to be implemented regarding their design and use, to mitigate risks to those affected by armed conflict.

Anna Rosalie Greipl explained, ‘The introduction of AI to DSS for military decision-making on the use of force in armed conflicts adds a new dimension to existing challenges relating to non-AI-based DSS, and raises legal and conceptual issues distinct from those posed by Lethal Autonomous Weapon Systems. These systems carry the potential to reduce the human judgment involved in military decision-making on the use of force in armed conflicts in a manner that raises serious humanitarian, legal, and ethical questions.’

Georgia Hinds went on to underscore the critical role of human judgment in addressing these concerns. ‘It’s crucial to preserve human judgment when it comes to decisions on the use of force in armed conflicts. And this will have implications for the way these systems are designed and used, particularly given what we already know about how humans interact with machines, and the limitations of these systems themselves.’

THE MILITARY APPLICATION OF AI DSS IN DECISIONS ON THE USE OF FORCE DEMANDS FURTHER MEASURES AND CONSTRAINTS

The report shows that certain technical limitations of AI DSS may be insurmountable and that many existing challenges of human-AI interaction are likely to persist. It may therefore be necessary to adopt additional measures and constraints on the use of AI DSS in military decision-making on the use of force, in order to reduce risks for people affected by armed conflicts and to facilitate compliance with international humanitarian law (IHL). To that end, further research and dialogue will be needed to better understand the precise measures and constraints that may be required with regard to the design and use of AI DSS. Moreover, further analysis will be needed to identify the applications of AI DSS in this context that have the biggest impact on decisions on the use of force.

This report is part of the joint initiative ‘Digitalization of Conflict: Humanitarian Impact and Legal Protection’ between the ICRC and the Swiss Chair of International Humanitarian Law at the Geneva Academy of International Humanitarian Law and Human Rights. Other themes addressed under the project include the societal risks and humanitarian impact of cyber operations and, more recently, the legal implications of the rising civilian involvement in cyber operations.

MORE ON THIS THEMATIC AREA

News

New Series of 'In and Around War(s)' Podcast Coming Soon

17 April 2024

Our podcast In and Around War(s) returns for a third season.


Project

The Digitalization of Armed Conflict

Started in September 2020

This project will explore humanitarian consequences and protection needs caused by the digitalization of armed conflicts and the extent to which these needs are addressed by international law, especially international humanitarian law.


Project

Human Rights in a Digitalized World: Mapping Risk, Strengthening Regulation and Promoting the Development of International Human Rights Law

Started in August 2023

To unpack the challenges raised by artificial intelligence, this project will target two emerging and under-researched areas: digital military technologies and neurotechnology.


Publication

Between Science-Fact and Science-Fiction: Innovation and Ethics in Neurotechnology

Published in May 2024

Milena Costas, Timo Istace


Publication

Artificial Intelligence And Related Technologies In Military Decision-Making On The Use Of Force In Armed Conflicts: Current Developments And Potential Implications

Published in May 2024

Anna Rosalie Greipl, Neil Davison, Georgia Hinds
