Friday, November 24, 2017 by Ethan Huff
The idea that artificial intelligence (AI) is soon to invade every facet of human life – including the theater of war – has become all too real at the United Nations. A U.N. panel, recognizing that the development of AI technologies is now moving at breakneck speed, recently advised that international regulations be established to limit the types of weaponry and techniques that “killer robots” can use to target human beings.
That’s right: The U.N. is now seriously considering the possibility that AI robots uncontrolled by humans might soon be used in battle to fight their pre-programmed “enemies” – or potentially turn against their “allies.” Because of this, the organization is warning that guidelines need to be put in place to restrict how much damage such systems can do. According to U.N. officials, governments simply aren’t doing enough to preemptively head off a potential disaster caused by the eventual rollout of advanced AI weaponry.
More than 80 countries took part in the panel, which centered on what the U.N. has dubbed “Lethal Autonomous Weapons Systems.” The U.N. presented a video illustration reminiscent of a scenario from the Terminator movies, suggesting an uncertain fate for humanity should the nations of the world fail to rein in such technology before it has the chance to do that kind of damage, or much worse.
The meeting falls under the U.N.’s Convention on Certain Conventional Weapons, also known as the Inhumane Weapons Convention – a 37-year-old agreement that binds U.N. member nations to set limits on the types of arms and explosives permitted for use during wartime. Landmines, blinding laser weapons, and booby traps are the sorts of weapons addressed in the past; now, AI systems have taken the spotlight.
Leading the charge at the U.N. is Ambassador Amandeep Gill of India, who floated ideas for crafting a legally binding code of conduct – or, at the very least, a dedicated technology review process – to address this emerging issue. He’s joined by the Campaign to Stop Killer Robots, an umbrella advocacy group representing organizations in 22 different countries that is attempting to ban such weaponry entirely.
Human Rights Watch, a member of this campaign, is advocating for an agreement by the year 2019 that would at least regulate AI killer robot weaponry. But reports suggest that such an idea is “a long shot,” as getting enough member nations on board could prove a challenge.
“The group operates by consensus, so the least ambitious goals are likely to prevail, and countries including Russia and Israel have firmly staked out opposition to any formal ban,” claims the Associated Press (as reported by the DailyMail Online). “The United States has taken a go-slow approach, rights groups say.”
While truly autonomous killer robots do not yet exist – at least not officially – things seem to be moving in that direction. Defining what they are and what would be necessary to keep them under control is the U.N.’s goal in addressing the issue now, though U.S. representatives argue that doing so is “premature.”
That’s because the U.S. is among the nations leading the charge to develop killer robots, which it claims will help in “reducing the likelihood of inadvertently striking civilians” – a dubious claim, considering that the U.S. has used automated drone technologies for many years now, leading to the deaths of countless innocent civilians.
“The bottom line is that governments are not moving fast enough,” warns Steven Goose, executive director of arms at Human Rights Watch, about where this all will lead if it’s not nipped in the bud now. Establishing a treaty by the end of 2019 is “the kind of timeline we think this issue demands,” he added.