In late May, news broke that a Turkish-made Kargu-2 drone may have hunted down, engaged, and possibly killed or injured human beings in the Libyan desert last year, without the intervention or command of a human operator. The incident, described in a recent United Nations report, spurred a rash of media coverage and commentary along the lines of an Axios article titled “The Age of Killer Robots Has Already Begun.” Though some observers argue that what happened, as described by the U.N. report, is less serious than media accounts suggest, the event represents a watershed moment in international efforts to build a prohibitory norm against “killer robots” and will likely galvanize efforts to create a treaty banning them.
For some who follow autonomous weapons closely, the recent media attention may have seemed overblown. First, the reference appeared in a report not about the perils of autonomous weapons but about international involvement with the belligerent parties in Libya’s military conflict. The drone was mentioned not to highlight the threat posed by weapon systems capable of killing autonomously, but to chastise Turkey for introducing military weapons into the conflict at all, presumably in violation of a 2011 Security Council resolution.
Moreover, as James Vincent writes in The Verge, the Kargu-2, though capable of acting autonomously, can also be human-operated. So in the incident cited by the report, it may not actually have been used as an “autonomous weapon,” at least as that term is understood within the U.N. system, the NGO sector, and the defense industry. And for the same reason, Zachary Kallenborn argues, determining whether it and other systems with similar dual-capability modes are acting autonomously in any given instance will always be difficult.