AI expert warns that military robots could go off the rails and wipe out the very people they are supposed to protect


Richard Moyes, a founding member of the non-governmental organization Campaign to Stop Killer Robots (CSKR), warns that autonomous military robots may develop their own intelligence and turn against their human creators. In particular, Moyes says that advancing technologies are opening doors to opportunities (and risks) that even the smartest experts cannot predict.

“A human can’t really make legal or moral judgments about the effects that they are creating through the use of these systems,” he says.

Moyes is concerned that a “malfunction” in how these robots operate could cost the lives of actual human soldiers. He is pushing for an international law that would ensure humans remain in complete control of robotic weapons systems.

“This is about protecting civilians and human dignity — but it is also a practical issue for [the military]. Soldiers don’t want to be sent into battle alongside systems that are unpredictable and might go off the rails,” he admonishes.

CSKR has repeatedly called for autonomous weaponry to be banned from the battlefield. Its efforts, however, have yet to produce anything concrete: robotic systems are still being deployed in today’s warzones.

The evolution of war and how it is fought

Gone are the days of warriors on horseback, wielding swords and screaming battle cries. As society has changed, so has the way we fight. Today, battles are increasingly waged in the technological sphere, and a combatant is more likely to be killed by someone pressing a button or manipulating a virtual-reality feed than by a gunshot.

That doesn’t mean physical battles no longer happen, or that tactics no longer decide which side wins a skirmish. For these scenarios, governments are turning to robotic systems that not only possess more physical strength than a human but can also sense and react to their surroundings autonomously. Prototype robot soldiers have already been deployed in many of today’s warzones, including Iraq and Afghanistan. Robotic systems now range in size from tiny eight-pound machines to the world’s biggest robot, a 700-ton dump truck that can haul 240 tons of earth at a time.

Even these machines are not what military experts would call “finished.” Robotics executives say that what we’re seeing is only the first stage of this kind of technology.

Are we sowing the seeds of our own destruction?

The case for using robots in war rests on preserving life: if fewer humans are put in harm’s way, fewer humans die. Robots, moreover, don’t need sleep, never tire, and are accurate almost all of the time. They are, in short, the perfect killing machines. It seems like a win-win situation until you stop to consider the consequences of making an amoral being highly intelligent and capable.

Or perhaps amorality is no longer an accurate description of robotic systems. When discussing the ethics of artificial intelligence, experts warn that autonomous systems are inherently “tainted” by their programmers: a system’s decision-making processes, for example, carry the biases of the people who designed its software.

Setting aside the ramifications this might have in war, it also implies that robots may eventually develop very real human emotions. We may soon witness robot psychologists getting sad, or robot doctors getting stressed.

Or robot soldiers getting angry?

It is difficult to know what lies ahead, especially as our horizon is slowly (yet steadily) being colored by the steel touch of robotics.

Sources include:

DailyStar.co.uk

Brookings.edu

NickBostrom.com [PDF]

style="display:inline-block;width:728px;height:90px"

data-ad-client="ca-pub-8193958963374960"

data-ad-slot="6833476334">



Comments
comments powered by Disqus

RECENT NEWS & ARTICLES