Has International Law Fallen Behind Modern Warfare? The Legal Gap Around Drones and Autonomous Weapons

The pace of battlefield technology has outstripped legal frameworks designed for human decision makers. As states deploy autonomous systems and AI-assisted targeting, a critical question emerges: can international humanitarian law adapt before the accountability gap becomes unbridgeable?


The pace of technological change in warfare has outstripped the evolution of international law. Drones, autonomous systems, and AI-assisted targeting have transformed how states conduct operations, yet the core legal framework governing armed conflict still reflects assumptions from a pre-digital era. It's a bit like trying to regulate self-driving cars with traffic laws written for horse-drawn carriages.

International humanitarian law (IHL) requires distinction, proportionality, and precaution, standards designed for human decision makers. But what happens when machines, not humans, identify targets? How do we assign responsibility when autonomous systems malfunction or misinterpret data? And can intent, a requirement for the most serious international crimes, ever be meaningfully applied to an algorithm? Spoiler alert: we don't really know.

The Technology Has Arrived, But the Rules Haven't

Autonomous weapons systems aren't some distant sci-fi scenario. They're here, right now. Systems like the Phalanx close-in weapon system can detect, track, and engage incoming threats without anyone touching a trigger. Israel has deployed autonomous sentry guns along its borders. Russia and China are pouring billions into autonomous military platforms, whilst the United States is busy integrating AI into fighter jets and targeting systems.

In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons systems with a rather impressive 166 votes in favour. That's substantial international concern. Yet the resolution stopped short of imposing any binding restrictions, so it's essentially a strongly worded letter to nobody in particular. The International Committee of the Red Cross has warned that autonomous weapons represent a paradigm shift in warfare, removing humans from combat decisions and handing them over to machines.

Now, not everyone thinks this is necessarily catastrophic. Marco Sassòli, Professor of International Law at the University of Geneva, argued in a 2014 article for International Law Studies that autonomous weapons could theoretically improve IHL compliance if properly designed. His logic? Machines might apply targeting rules more consistently than stressed-out humans making split-second decisions. Fair point. The catch, though, is that this assumes technological capabilities that don't yet exist and, let's be honest, may never exist.

The Accountability Black Hole

Here's where things get properly messy. Traditional IHL assigns responsibility to actual people: soldiers who pull triggers, commanders who issue orders, states that wage wars. But when an algorithm makes the wrong call, who exactly do we put on trial?

Bonnie Docherty, Senior Arms Advisor at Human Rights Watch and lecturer at Harvard Law School's International Human Rights Clinic, has been one of the loudest voices sounding the alarm. In her 2015 report "Killer Robots and the Concept of Meaningful Human Control", she argues that machines fundamentally lack the human capacity for empathy and moral judgment that's essential to life-and-death decisions. More recently, in an April 2025 report called "A Hazard to Human Rights", co-published with the Harvard clinic, Docherty doubled down, arguing that autonomous weapons would violate multiple human rights obligations, including the right to life and human dignity.

Picture this scenario: an autonomous drone mistakenly strikes a civilian convoy. Who gets prosecuted at the International Criminal Court? The programmer who wrote the targeting algorithm three years earlier? The commander who pressed "activate" that morning? The manufacturer who sold the system? The state that deployed it? The answer is: nobody really knows, and that's terrifying.

Current command responsibility doctrine holds superiors liable for crimes committed by subordinates, but that assumes subordinates are, well, human. How does this work when your "subordinate" is a machine making thousands of micro-decisions per second? It's an accountability black hole.

The Core Principles Under Strain

IHL rests on three big principles, each of which assumes there's a human brain somewhere in the decision-making process.

Distinction requires identifying who can be lawfully targeted. Sounds simple until you realise it often depends on subtle cues: body language, context, whether someone's holding a weapon defensively or offensively. Can an algorithm spot the difference between a farmer with a hunting rifle and an insurgent with an identical weapon? A 2022 study in the Journal of Conflict and Security Law suggests not reliably, no.

Proportionality is even trickier. Military commanders must weigh expected military advantage against anticipated civilian harm. How many civilian casualties are "acceptable" to destroy an ammunition depot? It's an inherently subjective, value-laden judgment call. Sassòli argues that machines could theoretically make these assessments as well as humans, but as he acknowledges, we'd first need international agreement on how to translate these squishy legal principles into code. Good luck with that.

Precaution requires constantly reassessing whether an attack remains lawful. If circumstances change, if intelligence turns out to be dodgy, if civilians wander into the target area, someone needs to hit the abort button. But autonomous systems that roam for hours or days operate beyond real-time human oversight. Once they're launched, they're essentially on their own.

The International Response: All Talk, No Action

States have been discussing autonomous weapons through the Convention on Certain Conventional Weapons since 2014. A decade on, progress has been glacial. Austria and others have called for a pre-emptive ban on fully autonomous weapons. Meanwhile, the United States, United Kingdom, Russia, and China are firmly in the "thanks, but no thanks" camp, insisting that existing IHL is perfectly adequate.

The December 2024 UN resolution was progress of sorts, but it's non-binding, which in diplomatic terms means "we're very concerned but not concerned enough to actually do anything." UN Secretary-General António Guterres has called autonomous weapons "politically unacceptable and morally repugnant", which sounds admirably firm until you realise it's had precisely zero impact on defence budgets.

Meanwhile, technology races merrily ahead. Defence industries worldwide are developing increasingly sophisticated autonomous capabilities. It's a classic arms race: no state wants to fall behind, so everyone rushes forward together, regulatory framework be damned.

Bridging the Gap: What Needs to Happen

Several solutions have been floated. The concept of meaningful human control, championed by the ICRC and civil society groups, holds that humans must maintain sufficient oversight to make context-specific judgments. It's a sensible idea, though defining "meaningful" and "control" with enough precision to actually guide technology development has proven rather challenging.

Enhanced legal review processes could prevent deployment of dodgy systems, but only if states actually commit to robust reviews and accept external scrutiny. Spoiler: many won't. Clearer accountability frameworks establishing who's liable when autonomous systems cause unlawful harm would create proper incentives for careful development. But states are understandably reluctant to accept responsibility for machine actions.

The most ambitious proposal is a new treaty specifically addressing autonomous weapons. This could work, except it requires consensus amongst states with wildly divergent interests. Some argue that waiting for perfect agreement simply allows technology to outpace any possible regulation. They're probably right.

The Deeper Question

Beneath all the legal technicalities lurks a philosophical question: should machines be allowed to decide who lives and dies? Docherty and her allies say absolutely not. Humans possess empathy, moral reasoning, and dignity that machines will never replicate. Delegating kill decisions to algorithms degrades human life to just another data point.

Sassòli and others push back: aren't we romanticising human decision making a bit? Humans commit atrocities. Humans deliberately target civilians. Humans make catastrophic mistakes under stress. If machines could follow IHL more reliably, wouldn't that actually be better?

It's a genuine philosophical dilemma, and legal analysis alone won't resolve it. But regardless of where you stand, the practical reality is unavoidable: states are deploying increasingly autonomous systems. The law needs to catch up, fast.

Racing Against Time

Here's the uncomfortable truth: the longer we delay developing clear rules for autonomous weapons, the harder it becomes to establish them. Once states have invested billions in autonomous military capabilities, they'll resist restrictions. It's basic political economy.

The IHL framework has adapted before: to air warfare, naval mines, and precision-guided munitions. But adaptation requires deliberate effort, international cooperation, and political will, none of which are abundant at the moment. What is certain is that warfare continues evolving. Autonomous systems will play an increasing role whether international law keeps pace or not.

The gap between law and capability is widening. Closing it requires recognising the urgency and acting before the window slams shut. The integrity of international humanitarian law, and the protection it promises to civilians in armed conflict, depends on how quickly states move from endless discussion to actual decision. The clock is ticking.

Editorial Team

We are a group of lawyers interested in how legal definitions shift over time. We aim to explain them in clear, concise language for a broad readership.
