According to a 2016 Wired report, "The Next President Will Decide the Fate of Killer Robots—and the Future of War."
The next president will have a range of issues on their plate, from growing tensions with China and Russia to the ongoing war against ISIS. But perhaps the most consequential decision they will make for human history is what to do about autonomous weapons systems (AWS), aka "killer robots." The new president will literally have no choice: it is not just that the technology is rapidly advancing, but that a ticking time bomb is buried in US policy on the issue. (Wired, 6 September 2016)
In 2012, the Obama administration and the Department of Defense created Department of Defense Directive 3000.09 and set it to expire in five years, in 2017, leaving it to the next president to set or reset its policy. If they were to let it expire, "then they enter the wild west of AI."
According to a Gizmodo article, "the U.S. Army put out a call to private companies for ideas about how to improve its planned semi-autonomous, AI-driven targeting system for tanks. In its request, the Army asked for help enabling the Advanced Targeting and Lethality Automated System (ATLAS) to 'acquire, identify, and engage targets at least 3X faster than the current manual process.'"
The Army wants you to know that its "killer robot tanks will have the highest ethical standards."
But because that language caused a stir of concern, the US Army added a disclaimer to its call for white papers, a move first spotted by the news website Defense One.
(Defense One) – US Military Changing ‘Killing Machine’ Robo-tank Program After Controversy.
It was a frightening and dramatic headline: “The US Army Wants to Turn Tanks Into AI-Powered Killing Machines.” The story, published this week in Quartz, details the new Advanced Targeting and Lethality Automated System, or ATLAS, which seeks to give ground combat vehicles the ability to “acquire, identify, and engage targets at least 3X faster than the current manual process.”
The response seems to have spooked the Army, which is now changing its request for information to better emphasize that the program will follow Defense Department policy on human control of lethal robots. They are also drafting talking points to further the new emphasis.
The robot’s ability to identify, target, and engage doesn’t mean “we’re putting the machine in a position to kill anybody,” one Army official told Defense One.
A second Army official said the changes had been “suggested” by the Office of the Secretary of Defense to the AI task force of the Army’s Futures Command. The official didn’t know whether the changes had been made, but said they’d likely be made before the program’s March 12 industry day.
A Defense Department official said the language change might be followed by other unspecified ones.
The ATLAS program shows how much has changed since 2014, when the idea of armed ground robots was anathema to the U.S. military. The idea has seen ups and downs. In 2003, the Defense Department began to experiment with a small machine-gun tank robot called SWORDS. In 2007, it was sent to Iraq. But the military ended the program after the robot began to behave unpredictably, moving its gun chaotically.