Look, if you insist on dragging humanity's latest existential crisis into the light, let's at least be clear about what we're discussing.
A Serbian Land Rover Defender, hauling a trailer burdened with a “Miloš” tracked combat robot. An unsubtle preview of the future, where the leash is getting longer.
Lethal autonomous weapons, or LAWs if you're into sterile acronyms, represent a type of military drone or military robot with a particularly chilling job description. They are designed to be autonomous, which in this context means they can independently hunt for and engage targets based on a set of programmed constraints and descriptions handed down by their human creators. As of 2025, most of the drones and robots cluttering up the world's battlefields aren't truly autonomous, but don't let that comfort you. The trajectory is clear. These systems are also known by a collection of equally unsettling names: lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons, or, for those who prefer their dystopian futures without jargon, “killer robots.” The potential theaters for these instruments of drone warfare are as boundless as human ambition: the air, the land, the surface of the water, the crushing depths of the ocean, and the cold vacuum of space.
Understanding autonomy in weaponry
In the halls of weapons development, the term “autonomous” is a marvel of ambiguity, a word stretched and twisted to mean whatever a particular scholar, nation, or organization needs it to mean. It's a semantic fog machine, and everyone's got their hand on the switch.
The official United States Department of Defense Policy on Autonomy in Weapon Systems, a document surely crafted by a committee, defines an Autonomous Weapons System as one that “…once activated, can select and engage targets without further intervention by a human operator.” A simple enough definition, until you start picking at the threads of “activated” and “further intervention.” Heather Roff, a writer for the Case Western Reserve University School of Law, offers a more dynamic description. She sees autonomous weapon systems as being “… capable of learning and adapting their ‘functioning in response to changing circumstances in the environment in which [they are] deployed,’ as well as capable of making firing decisions on their own.” This introduces the unsettling concept of a machine that learns on the job: a job that involves lethal force.
Across the Atlantic, the British Ministry of Defence defines these systems as entities “that are capable of understanding higher level intent and direction.” The definition continues: “From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control - such human engagement with the system may still be present, though. While the overall activity of an autonomous uncrewed aircraft will be predictable, individual actions may not be.” This definition reads less like a technical specification and more like the performance review of a terrifyingly competent, yet unpredictable, employee.
Scholars like Peter Asaro and Mark Gubrud, bless their hearts, try to cut through the noise. They propose that any weapon system capable of deploying lethal force without the direct operation, final decision, or explicit confirmation of a human supervisor can, and should, be deemed autonomous. It's a refreshingly straightforward take.
Of course, the reason this definitional squabbling matters is that creating treaties between nations, those flimsy paper shields against our worst impulses, requires a commonly accepted label for what, precisely, constitutes an autonomous weapon. Good luck with that.
Automatic defensive systems
The impulse to have a machine stand guard and pull the trigger is hardly new. The oldest automatically triggered lethal weapon is the humble land mine, a fixture of warfare since at least the 1600s, and its aquatic cousin, the naval mine, which has been lurking in shipping lanes since the 1700s. These are patient, indiscriminate killers, the primitive ancestors of modern autonomous defense.
Some current systems are essentially these old ideas on steroids. Consider the automated “hardkill” active protection systems, such as the radar-guided CIWS (Close-In Weapon System). These have been bolted onto ships since the 1970s, like the American Phalanx CIWS, acting as a last line of defense. Such systems can autonomously identify and unleash a storm of fire upon oncoming missiles, rockets, artillery shells, aircraft, and surface vessels, all based on criteria preset by a human operator. They are the embodiment of twitchy, high-speed paranoia. Similar systems now protect tanks from incoming threats, including the Russian Arena, the Israeli Trophy, and the German AMAP-ADS. In the Korean Demilitarized Zone and on the borders of Israel, stationary sentry guns stand watch, capable of firing on humans and vehicles that cross their path. Many missile defense systems, like the much-lauded Iron Dome, also possess autonomous targeting capabilities to handle threats that move too fast for human reflexes.
The primary justification for excising the “human in the loop” from these defensive systems is the brutal reality of response time. When a missile is seconds from impact, there is no time for committee meetings. Historically, their use has been confined to protecting personnel and installations from incoming projectiles, a purely reactive function. But the line between defense and offense is getting awfully blurry.
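To put a rough number on that response-time argument, here is a back-of-the-envelope sketch in Python. The detection ranges and closing speeds are illustrative assumptions for a generic sea-skimming threat, not sourced specifications of any real system.

```python
# Back-of-the-envelope reaction-time estimate for a point-defense scenario.
# All figures are illustrative assumptions, not specifications of any system.

def time_to_impact(detection_range_m: float, closing_speed_mps: float) -> float:
    """Seconds between first detection and impact for a straight-in threat."""
    return detection_range_m / closing_speed_mps

# Assume a low-flying missile is first seen near the radar horizon, ~20 km out.
subsonic = time_to_impact(20_000, 300)    # ~67 seconds
supersonic = time_to_impact(20_000, 800)  # ~25 seconds

print(f"Subsonic threat:   ~{subsonic:.0f} s from detection to impact")
print(f"Supersonic threat: ~{supersonic:.0f} s from detection to impact")
# Subtract time for track confirmation, classification, and weapon slew,
# and the window left for human deliberation shrinks toward zero.
```

Those few dozen seconds, minus machine overhead, are the entire case for removing the human.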
Autonomous offensive systems
According to The Economist, a publication that observes these trends with a certain detached dryness, the relentless march of technology suggests a future where uncrewed undersea vehicles will handle tasks like mine clearance, mine-laying, and establishing anti-submarine sensor networks in contested waters. They could patrol with active sonar, resupply manned submarines, or simply become low-cost missile platforms. In 2018, the U.S. Nuclear Posture Review pointed a finger at Russia, alleging the development of a “new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo,” known as “Status 6.” A weapon that sounds like it was ripped from a particularly grim Cold War novel.
The Russian Federation is not shy about its ambitions, openly developing artificially intelligent missiles, drones, unmanned vehicles, military robots, and even medic robots.
In 2017, Israeli Minister Ayoob Kara mentioned that Israel is developing military robots, including some as small as flies, a statement that conjures images of buzzing, weaponized insects.
The mask slipped further in October 2018, when Zeng Yi, a senior executive at the Chinese defense firm Norinco, declared in a speech that “In future battlegrounds, there will be no people fighting,” and that the deployment of lethal autonomous weapons is simply “inevitable.” A year later, US Defense Secretary Mark Esper publicly criticized China for selling drones capable of making life-or-death decisions with no human oversight, a classic case of the pot calling the kettle heavily armed.
The British Army began deploying new uncrewed vehicles and military robots in 2019, while the US Navy is busy developing “ghost” fleets of unmanned ships.
An STM Kargu drone, a quadcopter that looks harmless until you know what it can do.
In 2020, the hypothetical became horrifyingly real. A Kargu 2 drone, operating in Libya, independently hunted down and attacked a human target. This was documented in a March 2021 report from the UN Security Council’s Panel of Experts on Libya. It may well have been the first recorded instance of an autonomous killer robot, armed and lethal, attacking human beings without a direct command.
The milestones kept coming. In May 2021, Israel deployed an AI-guided combat drone swarm in an attack in Gaza. Since then, reports of swarms and other autonomous weapon systems being used on battlefields globally have become increasingly common.
Meanwhile, DARPA, the American military’s workshop of future horrors, is working to make swarms of 250 autonomous lethal drones available to its soldiers. Because why have one killer robot when you can have a flock?
Ethical and legal issues
This is where the philosophers and lawyers wring their hands, trying to apply old rules to a new and profoundly indifferent kind of violence.
Degree of human control
In a 2012 Human Rights Watch report, Bonnie Docherty attempted to impose some order on the chaos by laying out three classifications for the degree of human control over these systems:
- Human-in-the-loop: A human must explicitly initiate the weapon’s action. The machine suggests, the human decides. This isn’t full autonomy, but a step on the path.
- Human-on-the-loop: The machine can act on its own, but a human supervisor has the ability to intervene and abort the action. A kill switch, in essence, assuming the human is paying attention and the connection doesn’t drop.
- Human-out-of-the-loop: The system operates entirely without human intervention. Once launched, it makes its own decisions.
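The distinction is easy to make concrete in code. Below is a minimal, deliberately abstract sketch in Python (hypothetical names throughout; it models no real system) of the only question that differs between Docherty's three modes: who must sign off before a machine-proposed action proceeds.

```python
# Toy model of Docherty's three human-control modes. Everything here is
# hypothetical and abstract: "Action" stands for any machine-proposed action,
# and the only thing modeled is who must sign off before it proceeds.
from dataclasses import dataclass
from enum import Enum, auto

class ControlMode(Enum):
    IN_THE_LOOP = auto()      # a human must explicitly approve each action
    ON_THE_LOOP = auto()      # the machine acts unless a human vetoes in time
    OUT_OF_THE_LOOP = auto()  # the machine acts with no human involvement

@dataclass
class Action:
    description: str

def proceeds(mode: ControlMode, action: Action,
             human_approves: bool | None, human_vetoes: bool) -> bool:
    """Return True if the proposed action goes ahead under the given mode."""
    if mode is ControlMode.IN_THE_LOOP:
        # Nothing happens without an explicit human "yes".
        return human_approves is True
    if mode is ControlMode.ON_THE_LOOP:
        # The action proceeds by default; a human can only abort it,
        # which assumes someone is watching and the link stays up.
        return not human_vetoes
    # OUT_OF_THE_LOOP: once launched, the machine answers to no one.
    return True
```

Seen this way, the policy fight is over which branch of that function a state is willing to certify, and how honestly the on-the-loop veto window is measured.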
Standard used in US policy
Current US policy is a masterclass in bureaucratic hedging. It states: “Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” The phrase “appropriate levels” is doing a lot of heavy lifting there. The policy doesn’t forbid systems that kill without human intervention; it merely requires that they be certified as compliant with these vaguely defined “appropriate levels” and other standards. It's a loophole you could fly a drone swarm through. Furthermore, “semi-autonomous” hunter-killers that autonomously identify and attack targets don’t even require this certification.
In 2016, then-Deputy Defense Secretary Robert O. Work said the Defense Department would “not delegate lethal authority to a machine to make a decision,” but immediately qualified this by admitting they might have to reconsider, because “authoritarian regimes” might not show the same restraint. In October of that same year, President Barack Obama reflected on his own use of drone warfare, expressing a wariness of a future where a president could “carry on perpetual wars all over the world, and a lot of them covert, without any accountability or democratic debate.” In the US, security-related AI has been under the watch of the National Security Commission on Artificial Intelligence since 2018. On October 31, 2019, the Pentagon’s Defense Innovation Board published a draft report outlining five principles for weaponized AI, making 12 recommendations for its ethical use. The goal was to ensure a human could always look into the ‘black box’ and understand the kill-chain. A noble sentiment, but the real question is how, or if, such a report will ever be meaningfully implemented.
Possible violations of ethics and international acts
Stuart Russell, a professor of computer science at the University of California, Berkeley, has stated that his concern with LAWs is that they are fundamentally unethical and inhumane. The core problem, in his view, is the profound difficulty a machine faces in distinguishing between legitimate combatants and non-combatants.
This concern is echoed by some economists and legal scholars, who worry that LAWs would shatter International Humanitarian Law. Specifically, they point to the principle of distinction, which demands the ability to discriminate between combatants and non-combatants, and the principle of proportionality, which requires that collateral damage to civilians be proportional to the military objective. This argument is often used to call for an outright ban, though it’s debatable whether this applies to LAWs that could, theoretically, adhere to these laws, perhaps even better than a tired, frightened human soldier.
A 2021 report from the American Congressional Research Service flatly states that “there are no domestic or international legal prohibitions on the development or use of LAWs,” while acknowledging the ongoing, and largely fruitless, talks at the UN Convention on Certain Conventional Weapons (CCW).
LAWs, some argue, conveniently blur the lines of responsibility for a killing. Philosopher Robert Sparrow contends that autonomous weapons are causally responsible but not morally responsible, much like child soldiers. In both scenarios, he argues, there is a grave risk of atrocities occurring without anyone appropriate to hold accountable, a violation of the principles of jus in bello. Conversely, Thomas Simpson and Vincent Müller suggest that these systems might actually make it easier to track accountability by creating an immutable record of who gave which command. It’s also worth noting that potential violations of International Humanitarian Law are only relevant in conflicts where civilians are present. In the sterile environments of deep space or the abyssal plains of the ocean, such legal obstacles conveniently disappear.
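Simpson and Müller's accountability argument presupposes a command record that cannot be quietly rewritten after the fact. One standard way to get that property is a hash-chained append-only log; the sketch below is purely illustrative and describes no deployed system.

```python
# A toy hash-chained audit log: each entry commits to its predecessor's hash,
# so any retroactive edit breaks every later digest. Illustrative only.
import hashlib
import json
import time

GENESIS = "0" * 64

class AuditLog:
    def __init__(self):
        self.entries = []          # list of (entry_dict, digest) pairs
        self._last_hash = GENESIS

    def record(self, operator: str, command: str) -> str:
        """Append a command to the log and return its digest."""
        entry = {
            "ts": time.time(),
            "operator": operator,
            "command": command,
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append((entry, digest))
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the whole chain; any tampered entry breaks the tail."""
        prev = GENESIS
        for entry, digest in self.entries:
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True
```

Because each entry commits to its predecessor, altering any past command invalidates every digest after it, which is exactly the tamper-evidence the accountability argument requires.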
Campaigns to ban LAWs
A rally on the steps of San Francisco City Hall, protesting a vote that would have authorized police to use robots for deadly force. A glimpse of the domestic front in this debate.
The mere possibility of LAWs has, predictably, sparked significant debate, much of it centered on the cinematic fear of “killer robots” running amok. The Campaign to Stop Killer Robots coalesced in 2013 to fight this future. In July 2015, over 1,000 artificial intelligence experts signed a letter warning of an impending artificial intelligence arms race and calling for a preemptive ban on autonomous weapons. The letter, presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15), bore the signatures of luminaries like Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn, and Google DeepMind co-founder Demis Hassabis.
PAX For Peace, a founding member of the campaign, argues that fully automated weapons will lower the threshold for going to war. With soldiers removed from the battlefield, the public becomes distanced from the visceral reality of conflict, giving politicians more leeway to start wars. They warn that once these weapons are deployed, democratic control over warfare will become increasingly difficult. This echoes the sentiments of Daniel Suarez, author of the novel Kill Decision, who warned that this technology could recentralize power into the hands of a very small number of people.
Websites have sprung up to protest this development, cataloging the undesirable ramifications of weaponized AI and serving as a repository for news on international meetings and academic articles concerning LAWs.
The Holy See has repeatedly called for an international ban. In November 2018, Archbishop Ivan Jurkovič, its permanent observer to the UN, declared, “In order to prevent an arms race and the increase of inequalities and instability, it is an imperative duty to act promptly: now is the time to prevent LAWs from becoming the reality of tomorrow’s warfare.” The Church fears these weapons will irreversibly alter warfare’s nature, detach conflict from human agency, and call into question the very humanity of our societies.
As of March 2019, a majority of governments at a UN meeting on the subject favored a ban. A powerful minority, including Australia, Israel, Russia, the UK, and the US, opposed it. The United States has even argued that autonomous weapons have helped prevent the killing of civilians, a claim that invites skepticism.
In December 2022, the debate hit home in America when the San Francisco Board of Supervisors voted to authorize the San Francisco Police Department to use robots capable of deadly force. The move drew national attention and protests, forcing the Board to reverse its decision in a subsequent meeting.
Regulation without banning
A third path, for those who find outright bans unrealistic, is to focus on regulation. Proponents argue that military AI arms control will require new international norms, embodied in technical specifications and combined with active monitoring, informal diplomacy by expert communities, and a robust legal and political verification process. In 2021, the United States Department of Defense reportedly requested a dialogue with the Chinese People’s Liberation Army on AI-enabled autonomous weapons. They were refused.
In 2023, a summit of 60 countries was held to discuss the responsible use of AI in the military. On December 22, 2023, a United Nations General Assembly resolution was adopted to support international discussion regarding concerns about LAWs. The vote was 152 in favor, 4 against, and 11 abstentions. A small step. A gesture. Whether it means anything remains to be seen.