Flashing red lights. ERROR. Apocalyptic alarms sound as people frantically type in the U.S. Department of Defense’s central control room. U.S. military officials look in astonishment at the large wall-sized display, which shows a nuclear weapon spiraling toward Earth. The target: an ally.
There is no superhero to stop the missile in its tracks and no prodigious tech genius to override the system and prevent tragedy. The weapon functions under the orders of the computer—and only the computer.
Although this may sound like the opening scene of a blockbuster such as the 2021 film *Outside the Wire,* the scenario represents a very real and serious possibility. The problem is that no preventive policy exists regarding AI-driven autonomous weaponry.
The current age of rapid technological innovation has grown to include the development of militarized artificial intelligence (AI). AI refers to computer systems capable of problem-solving, learning, generalizing, inferring, and adapting, built on techniques from computer science and trained on large datasets. This definition highlights AI's vast capabilities and the unpredictability that comes with its adaptability.
One recent global development in militarized AI is the rise of lethal autonomous weapons (LAWs). LAWs—including drones and unmanned vehicles such as tanks, ships, and aircraft—can use artificial intelligence to identify and eliminate threats.
There are three recognized levels of human control over autonomous weapons. The first level requires humans to initiate every action by giving the AI direct commands. The second allows AI systems to assess targets and execute plans autonomously, though humans can still intervene to abort an action. The third level grants AI full command of LAWs, with no human involvement.
Although these three levels are recognized by various governments and nonstate actors, there are global inconsistencies in how policies address them. U.S. policy on LAWs requires at least the second level of human control. The United Kingdom, however, insists that the first level of human control is necessary.
Additionally, Israel is developing drones that carry no live rounds and instead deploy stun grenades, tear gas, and sponge-tipped bullets intended to incapacitate without killing. These conflicting policies contribute to a muddied political future in which there is no defined limit to the degree of autonomy LAWs may reach.
As technology advances faster than policy, the window for establishing meaningful global regulations is rapidly closing. Without coordinated action, humanity risks entering an era in which the decision to take human life rests with algorithms operating beyond human control—a future that may be irreversible.
Edited by: Krithiga Narayanan

