In July, the International Joint Conference on Artificial Intelligence announced an open letter, “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” which has probably broken the 20,000-signature mark by now. (Wouldn’t you like your name on a letter signed by Stephen Hawking and Elon Musk, among other impressive figures?) This touches on the cover topic of the Spring 2009 issue of SSIT’s Technology and Society Magazine, whose cover image just about says it all:
The topic of that issue was Lethal Robots. The letter suggests that letting AI software decide when to initiate lethal action is not a good idea. Specifically: “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”
Unfortunately, I can’t think of any way to actually prevent the development of such systems by organizations that want to pursue the aims listed above, for which killer robots are ideally suited. Perhaps you have some thoughts? How can we go beyond declaring these weapons “not beneficial” and actually discourage their development? Or is that even possible?
SSIT is a sponsor of a new IEEE Collabratec community on CyberEthics and CyberPeace. I encourage you to join this community (which is not limited to IEEE members) and contribute to the discussion there.