
Weapons powered by artificial intelligence pose a frontier risk and need to be regulated


The military applications of artificial intelligence are a growing concern. Image: Unsplash/Possessed Photography

Jake Okechukwu Effoduh
Alumni, Global Shapers Community, Osgoode Hall Law School

  • Frontier risks are low-likelihood, high-impact threats that could arise as humans explore new realms, whether technological, ecological or territorial.
  • Lethal Autonomous Weapon Systems don't need human control to act and pose a critical frontier risk.
  • Global regulations on the use of militarized artificial intelligence are insufficient, but there are pathways forward.

Major advances in artificial intelligence (AI) technology in recent years have led to the growing sophistication of military weaponry. Among the most noteworthy new devices are Lethal Autonomous Weapon Systems (L.A.W.S.): machines engineered to identify, engage and destroy targets without human control.

These machines, and the risks posed by their deployment, have long been fodder for science fiction literature and cinema, from the German film Metropolis (1927) to Ray Bradbury’s 1953 dystopian novel Fahrenheit 451. But L.A.W.S. are no longer just speculation. AI systems can now independently use data, predict human behavior, gather intelligence, perform surveillance, identify potential targets and make tactical decisions in virtual combat. The use of robots in the military is becoming commonplace, and the leap to fully autonomous robots optimized for warfare is already realistic. The big question now is not what kinds of L.A.W.S. could exist, but whether they pose a frontier risk to humanity.

What are frontier risks?

Frontier risks are low-likelihood, high-impact threats that could arise as humans explore new realms, whether technological, ecological or territorial. Their outcomes cannot be predicted easily, especially without expert insight, but they have the capacity to alter the world as we know it. These risks are best illustrated by three metaphors: “dragon kings” (rare, high-impact events that are somewhat predictable), “grey rhinos” (highly probable, high-impact events that occur only after a lengthy series of warnings) and “black swans” (high-impact events that are virtually impossible to predict). The frontier risks that could emerge from the full militarization of autonomous weapons include catastrophic fallout from army raids and a human existential crisis in the age of machine sentience.

Why are military systems becoming autonomous?

Drones are already used by the military today, but they have vulnerabilities. Most rely on human command through communication channels that adversaries can infiltrate, and considerable personnel are required to operate the multiple drones deployed in the field. By cutting out the need for human control, thousands of L.A.W.S. could be dispatched at once, each able to make tactical and timely decisions on its own. This increases the speed and reach of combatants and reduces the need for proximity to targets. L.A.W.S. also exceed the limitations of the human mind and body: for example, they do not risk post-traumatic stress disorder or make erroneous judgments in the service of self-preservation.


There are, however, unanswered questions and potential unforeseen consequences. The race among world powers to lead in this field has led to what four-star U.S. Army General David Petraeus called “the early stages of a tech Cold War.” Similarly, the entrepreneur Elon Musk warned in 2017 that the scramble for military applications of artificial intelligence could become so intense that it triggers a third world war. Because of decades of robot doomsday speculation in pop culture, fear of L.A.W.S. is already ingrained in the public imagination. Yet, citing political sovereignty, some states seem less moved by these potential risks. Global regulations on the use of militarized AI are also insufficient. While the United Nations and the European Parliament have fielded proposals to ban the use of lethal autonomous robots in war, there are still unanswered questions about how the weapons could be regulated and whether regulation would conflict with states’ sovereignty.

Will the machines save us or kill us?

Machines make mistakes, albeit less often than human soldiers do. But while humans are held responsible for their actions, machines cannot suffer legal consequences. Given the complexities of international humanitarian law (IHL) regarding warfare, there is a risk of setting loose machines that struggle to differentiate among active combatants, injured soldiers and those surrendering, flouting the 1949 Geneva Conventions. One could argue that the idea of fully autonomous weapons already contravenes IHL. For instance, the Martens Clause prioritizes human dignity and the capacity of soldiers to show compassion toward their fellow humans. Robots powered by lines of code do not yet have the capacity to make such decisions and could end up flouting the rules.

Another IHL concern is underscored by the potential for L.A.W.S. to act of their own accord. An idea that once seemed like fodder for the most imaginative dystopian fiction has gained traction in recent years: Musk and theoretical physicist Professor Stephen Hawking have both warned about sentient AIs staging a takeover and threatening human civilization. The UN Convention on Certain Conventional Weapons and Article 14 of the New Delhi Rules prohibit the use of weapons that could escape the control of those who employ them, thus endangering the civilian population. While that scenario might still be considered unlikely, it is the kind of frontier risk that could have a catastrophic impact on the world.

Pathways to avert risks from L.A.W.S.

While L.A.W.S. are another milestone for military capability, it may be much safer to employ a “human-on-the-loop” approach in which people retain supervisory control over the weapons. L.A.W.S. may not pose as apparent a catastrophic risk as, say, COVID-19, cyberattacks or climate change, but we need to apply fresh perspectives to thinking about all of these risks and how to avert them. We need more transparent channels for risk communication, investment in institutions that oversee AI risk management, and policies that check the speed at which military technology is developing. The United Nations can proscribe certain risk levels of military deployment and incentivize those whose deployments comply with IHL (much as the Forum’s collaboration with the Smart Toy Awards rewards robot manufacturers that prioritize ethics in their creations). The focus should be less on perfecting the art of war and more on using technology to curb the pervasive proclivity for waging war. In the end, this would be a win for IHL, scientific ethics and human civilization.

(This is partly why the World Economic Forum set up the Global Future Council on Frontier Risks: to draw on its diverse, forward-thinking membership to bring fresh thinking to future global risks, identify key shocks facing the next generation and propose policy opportunities that will build resilience today.)

(Jake Okechukwu Effoduh is a Global Shaper in the Abuja Hub.)



The views expressed in this article are those of the author alone and not the World Economic Forum.
