Killer Robots (again?)

In July, the International Joint Conference on Artificial Intelligence announced an open letter, "Autonomous Weapons: An Open Letter from AI & Robotics Researchers," which has probably broken the 20,000-signature mark by now. (Wouldn't you like your name on a letter signed by Stephen Hawking and Elon Musk, among other impressive figures?) This touches on the cover topic of SSIT's Technology and Society Magazine in Spring of 2009, whose cover image just about says it all.

The topic of that issue was Lethal Robots. The letter argues that letting AI software decide when to initiate fatal actions is not a good idea. Specifically: "Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity."

Unfortunately, I cannot think of any way to prevent the development of such systems by organizations that would like to pursue the activities listed above, for which killer robots are ideally suited. Perhaps you have some thoughts? How can we go beyond declaring these weapons "not beneficial" and actually discourage their development? Or is that even possible?

SSIT is a sponsor of a new IEEE Collabratec community on CyberEthics and CyberPeace. I encourage you to join this community (which is not limited to IEEE members) and contribute to the discussion there.

One thought on "Killer Robots (again?)"

  1. Not only is it unclear how the ban would be implemented and enforced, but if it’s bad to have autonomous AI decisions that lead to harm, then research on cyberweapons (many of which are already autonomous, i.e., self-executing and self-proliferating) and civilian technologies (e.g., autonomous cars that make crash decisions) could be affected.

    More here, including an analysis of the ethics arguments against “killer robots”: http://ethics.calpoly.edu/ccw_testimony.pdf
