The Wall St. Journal has a note today comparing Amazon’s Echo and Google Home as voice-activated, in-home assistants. This space is fraught with impacts on technology and society, from services that can benefit house-bound individuals to serious opportunities for abuse by hacking, for commercial purposes or governmental ones. To put it simply: you are being asked to “bug your house” with a device that listens to every noise in the house. Of course you may have already bugged your pocket with a device that is listening for the magic words “hey, Siri” (or the person next to you in the office, train, or restaurant may be carrying that “wire”). Robots that respond to “OK Google” or “Alexa” are expanding into our monitored domains. (What do folks named Alexa or Siri have to look forward to in this world? Would you name your child “OK Google”?)
The immediate use cases seem to be a cross between control of the “Internet of Things” and the specific business models of the suppliers: online sales for Amazon’s Alexa, and more invasive advertising for Google. Not only can these devices turn your lights on and off, they can order new bulbs … ones that blink subliminal advertising messages (uh oh, now I’ve given someone a bad idea).
From our technology and society perspective we need to look forward to the pros and cons of these devices. What high benefit services might be offered? What risks do we run? Are there policy or other guidelines that should be established? …. Please add your thoughts to the list …
Meanwhile I’m trying to find out why my new car’s navigation system keeps trying to take me to Scotland when I ask “Find McDonald’s”.
A growing area reflecting the impact of technology on society is ethics and AI. This has a few variations: one is what is ethical in terms of developing or applying AI; the second is what is ethical for AIs. (Presumably, for an AI to select an ethical vs. unethical course of action, either it must be programmed that way, or it must learn what is ethical as part of its education/awareness.)
This is a great opportunity for classroom learning about the issues, and for developing deep background for the policy and press folks; concerns will emerge here. Consider self-driving cars, robots in warfare or police work, etc., and of course the general public, where misconceptions and misinformation are likely. We see many movies where evil technology is a key plot device, and we get many marketing messages on the advantages of progress. Long-term challenges for informed evolution in this area will require less simplistic perspectives on the opportunities and risks.
The recent murder of police officers in Dallas, finally ended by the lethal use of a robot to kill the shooter, has triggered an awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a remotely controlled drone, and it simply parallels the current use of drones (and no doubt other devices) as weapons.
We already have examples of “autonomous” devices as well. Mines, both land and ocean (and eventually space), all reflect devices “programmed to kill” that operate with no human at the trigger. If anything, a lethal frustration with these devices is that they are too dumb, killing long after their intended use has ended.
I saw a comment in one online discussion implying that robots are, or would be, programmed with Asimov’s first law: “Do not harm humans.” But of course this is neither viable at this time (it takes an AI to evaluate that concept), nor is it a directive that is likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.
Projecting lethal force at a distance may be one of the few remaining distinguishing characteristics of humans (since we have found animals innovating tools, using language, and so forth). Ever since Homo Whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a “position of safety” is essentially the basic design criterion for many weapon systems. Homo Whomever may have also crossed the autonomous Rubicon with the first snare or pit-fall trap.
Our challenge is to make sure our systems designers, and those acquiring the systems, have some serious ethical training with practical application. Building in the safeguards, expiration dates, decision criteria, etc. should be an essential aspect of lethal autonomous systems design. “Should” is, unfortunately, the operative word; in many scenarios it is unlikely.
Let me start with privacy and link it to security. All of us know about the privacy settings on Facebook, and we like them so much because we can hide from our family members the things we do and the people we’re with. But wait, what about security? How is privacy linked to security?
Let’s leave the digital platform and move our focus towards our daily lives. We need security in our banks, schools, public places, and even in our homes and parks. But have you ever wondered what price we pay for this non-existent blanket of security? Privacy. Let me reiterate: security at the price of privacy. Those cute little things we see on the ceilings of our school corridors, the ones we call “CCTV”, are installed for our security. But security from what? No one bothers to ask. Maybe the authorities want to tape everything in case something bad happens, so that they can go through the tapes and catch perps red-handed. But they are taping every single thing, and we don’t take this as a breach of our privacy?
A number of times these tapes have been misused, causing niggling unpleasantries, and yet we accept it. There’s a famous proverb in Hindi that translates roughly to: “You have to sacrifice one thing to get another.” Here we sacrifice our privacy to get security. With self-driving cars grabbing all the attention, there goes more data to stay connected and, apparently, “secure”.
Similarly, some companies check what their employees are up to and what they are doing on their computers while at work. From the company’s perspective this is to avoid a possible breach of sensitive data, but is such constant monitoring even ethical? So, does it really have to be a tradeoff? Security for privacy and vice versa?
SSIT was honored with an opening address from Michael D. Higgins, President of Ireland, at the 2015 IEEE-SSIT International Symposium on Technology and Society (ISTAS ’15) on November 11 in Dublin, Ireland. It was the first time a head of state has addressed an ISTAS event. Full coverage of the conference will appear in the January 2016 SSIT e-newsletter and online at ieeessit.org. The President’s remarks will be published in the March 2016 issue of T&S Magazine. A Special Issue on ISTAS ’15 will appear in the September 2016 issue of T&S, guest edited by ISTAS ’15 conference chair, Paul Cunningham.
4 President’s Message: Coping with Machines (Greg Adamson)
5 Book Review: Marketing the Moon: The Selling of the Apollo Lunar Mission
7 Book Review: Alan Turing: The Enigma
10 Editorial: Resistance is Not Futile, nil desperandum (MG Michael and Katina Michael)
13 Letter to the Editor: Technology and Change (Kevin Hu)
14 Opinion: Privacy Nightmare: When Baby Monitors Go Bad (Katherine Albrecht and Liz McIntyre)
15 From the Editor’s Desk: Robots Don’t Pray (Eugenio Guglielmelli)
17 Leading Edge: Unmanned Aircraft: The Rising Risk of Hostile Takeover (Donna A. Dulo)
20 Opinion: Automatic Tyranny, Re-Theism, and the Rise of the Reals (Sand Sheff)
23 Creating “The Norbert Wiener Media Project” (J. Mitchell Johnson)
25 Interview: A Conversation with Lazar Puhalo
88 Last Word: Technological Expeditions and Cognitive Indolence (Christine Perakslis)

SPECIAL ISSUE: Norbert Wiener in the 21st Century

33 Guest Editorial (Philip Hall, Heather A. Love, and Shiro Uesugi)
35 Norbert Wiener: Odd Man Ahead (Mary Catherine Bateson)
37 The Next Macy Conference: A New Interdisciplinary Synthesis (Andrew Pickering)
39 Ubiquitous Surveillance and Security (Bruce Schneier)
41 Reintroducing Wiener: Channeling Norbert in the 21st Century (Flo Conway and Jim Siegelman)
44 Securing the Exocortex* (Tamara Bonaci, Jeffrey Herron, Charles Matlack, and Howard Jay Chizeck)
52 Wiener’s Prefiguring of a Cybernetic Design Theory* (Thomas Fischer)
60 Norbert Wiener and the Counter-Tradition to the Dream of Mastery (D. Hill)
64 Down the Rabbit Hole* (Laura Moorhead)
74 Opening Pandora’s 3D Printed Box (Phillip Olla)
81 Application Areas of Additive Manufacturing (N.J.R. Venekamp and H.Th. Le Fever)
Recent attacks on citizens in all too many countries have raised the question of creating back-doors in encrypted communications technology. A November 22 NY Times article by Zeynep Tufekci, “The WhatsApp Theory of Terrorism”, does a good job of explaining some of the flaws in simplistic, government-mandated back-doors. The short take: bad guys have access to tools that do not need to follow any government regulations, and bad guys who want to hack your systems can use any back-door that governments do mandate. No win for catching the bad guys, and a big loss of protection for everyone else.
Toys? The Dec. 1 Wall Street Journal covered “Toy Maker Says Hack Accessed Customer Information”. While apparently no Social Security or credit card data was obtained, there is value in having names, birthdates, etc., for creating false credentials. How does this relate to the terrorist threat? In two ways, actually:
1. There are few, if any, systems that hackers won’t target, so a good working assumption is that someone will try to “crack” it.
2. Technologists, in particular software developers, need to be aware of, consider, and incorporate appropriate security requirements into EVERY online system design.
We are entering the era of the Internet of Things (IoT), with many objects now participating in a globally connected environment. There are no doubt some advantages (at least for marketing spin) with each such object. There will be real advantages for some objects. New insight may be discovered through the massive amount of data available; for example, can we track global warming via the use of IoT-connected heating/cooking devices? However, there will be potential abuses of both individual objects (the toys above) and aggregations of data. Software developers and their management need to apply worst-case threat analysis to determine the risks and requirements for EVERY connected object.
Can terrorists, or other bad guys, use toys? Of course! There are indications that Xbox and/or PlayStation networks were among the networked services used to coordinate some of the recent attacks. Any online environment that allows users to share data or objects can be used as a covert communications channel. Combine steganography with Shutterfly, Instagram, Minecraft, or any other site where you can upload or manipulate a shareable image, and you have a channel. Pretending we can protect them all is a dangerous delusion.
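To make the covert-channel point concrete, here is a minimal sketch (purely illustrative; the function names and sample data are my own invention, not any real tool) of the classic least-significant-bit trick behind image steganography. Flipping only the lowest bit of each pixel byte changes the picture imperceptibly, yet an ordinary shared photo can carry a hidden message:

```python
# Least-significant-bit (LSB) steganography sketch.
# We treat a bytearray as stand-in "pixel" data; real use would operate
# on the raw pixel bytes of an image file.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Write each bit of `message` into the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    # Unpack the message into individual bits, least-significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover data too small for message")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the message bit
    return out

def extract(pixels: bytes, length: int) -> bytes:
    """Read `length` hidden bytes back out of the LSBs."""
    message = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        message.append(byte)
    return bytes(message)

cover = bytearray(range(256)) * 4          # stand-in for image pixel data
stego = embed(cover, b"meet at dawn")      # each byte changes by at most 1
assert extract(stego, 12) == b"meet at dawn"
```

No byte of the cover changes by more than one unit, which is why casual inspection (or a photo-sharing site’s scanners) won’t notice; detecting this class of channel at scale is the hard part of the policy problem discussed above.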
Is your employer considering IoT security? Is your school teaching about these issues?