Deadly robots hold a special place in the hearts of pop culture consumers.
Pictured: a Special Weapons Observation Remote Direct-Action System (SWORDS) robot.
Even in some of their earliest appearances in the pulp novels of the 20th century, robots were often treated as evil killing machines. Decades of robotics research have helped tone down that image somewhat. These days, plenty of robots perform more benign tasks, like assisting with dental training or doing creepy dances in Honda commercials. Still, the idea of the unstoppable killing machine lingers in our minds.
It appears that movies like the Terminator series may have actually anticipated reality, albeit without the time travelling or the self-healing, indestructible liquid metal. University of Sheffield computer science expert Prof Noel Sharkey believes that advanced robotics research is leading to battlefield robots with artificial intelligence that gives them the ability to decide whether or not to kill.
Sharkey believes that robots are not only the future of war but also the future of terrorism: as the technology progresses, he expects human suicide bombers to be increasingly replaced by robotic ones.
Sharkey said: “There’s a massive drive towards developing autonomous robots for more complex missions. We are rapidly moving towards robots that can make the decision to apply lethal force, when to apply it and who to apply it to. I think maybe we’re talking about a 10-year time frame. If one country develops autonomous robots, it is clear that the other countries will follow suit.”
That day doesn’t seem very far off. Last year the U.S. government announced plans to pour $25 billion into robotics research. It’s only a matter of time before we have our first robotic arms race.
I’m not against advanced robotic technology, or even robotic warfare. I’d rather see a day where all our fighting could be done by robots than see human lives lost. That doesn’t appear to be what these robots are, however. A robot that can autonomously decide when and if to apply deadly force will be applying that deadly force to humans, not other robots. These are literally robotic killing machines.
I wouldn’t even have a problem with killer robots being used by the military if the robots were not autonomous. The military uses robots all the time for some good purposes, including a robot that can lift injured soldiers off the battlefield. But most of those have human controllers. One thing anyone who has ever owned a piece of advanced technology can tell you is that it screws up.
It’s one thing to have an iPhone with a glitch. It’s another thing to have an autonomous killing machine go haywire. I think most of us remember the scene in RoboCop where the ED-209 kills an employee because its sensors didn’t register that he had dropped the gun. It’s frightening to think that this could happen in a battlefield situation, where people’s lives could be at stake. While I support military robots, I don’t think I could morally accept an autonomous robot’s deployment against humans. What’s your opinion on the matter?
Info from The Telegraph