The Wall St. Journal has a note today comparing Amazon’s Echo and Google Home as voice-activated, in-home assistants. This space is fraught with impacts on technology and society, from services that can benefit house-bound individuals to serious opportunities for abuse by hacking, whether for commercial purposes or governmental ones. To put it simply: you are being asked to “bug your house” with a device that listens to every noise in the house. Of course you may have already bugged your pocket with a device that is listening for the magic words “hey, Siri” (or the person next to you in the office, train, or restaurant may be carrying that “wire”). Robots that respond to “OK Google” or “Alexa” are expanding into our monitored domains. (What do folks named Alexa or Siri have to look forward to in this world? Would you name your child “OK Google”?)
The immediate use cases seem to be a cross between control of the “Internet of Things” and the specific business models of the suppliers: online sales for Amazon’s Alexa, and more invasive advertising for Google. Not only can these devices turn your lights on and off, they can order new bulbs … ones that blink subliminal advertising messages (uh oh, now I’ve given someone a bad idea).
From our technology and society perspective we need to look ahead to the pros and cons of these devices. What high-benefit services might be offered? What risks do we run? Are there policy or other guidelines that should be established? Please add your thoughts to the list …
Meanwhile I’m trying to find out why my new car’s navigation system keeps trying to take me to Scotland when I ask “Find McDonald’s”.
12-14 December 2016; Reston, VA, USA
IoT: Smart Innovation for Vibrant Ecosystems
“We are literally moving this global work on IoT forward in our collaboration on emerging technologies such as 5G, software-defined IoT, and networked control for cyber-physical systems, as well as the unique applications of the IoT, including the Social Internet of Things, and the IoT as the driver for the co-created smart city. Hot button issues including standards, user-centric security and privacy, and ethics also figure prominently in the WF-IoT program.”
– Geoff Mulligan, WF-IoT 2016 General Chair
Now is the time to register to attend the 3rd Annual World Forum on Internet of Things (WF-IoT) to experience industry-leading keynote speakers, in-depth technical sessions, industry forum panels, workshops and tutorials, and a Doctoral Symposium. The forum will again convene industry leaders, academics, and decision-making government officials from around the world to investigate and discuss aspects of this year’s conference theme, IoT: Smart Innovation for Vibrant Ecosystems.
De Lange Conference X on Humans, Machines, and the Future of Work
December 5-6, 2016 at Rice University, Houston, TX
For details, registration, etc., see http://delange.rice.edu/
- What advances in artificial intelligence, robotics and automation are expected over the next 25 years?
- What will be the impact of these advances on job creation, job destruction and wages in the labor market?
- What skills are required for the job market of the future?
- Can education prepare workers for that job market?
- What educational changes are needed?
- What economic and social policies are required to integrate people who are left out of future labor markets?
- How can we preserve and increase social mobility in such an environment?
For those of us who have been enjoying the antics of 007, aka James Bond, and those of us in the real world who have been providing technology that helps our covert entities accomplish their missions, it is worthwhile to note that Alex Younger, head of the UK’s MI6 agency (which of course does not exist), indicates that Bond’s personality and activities do not meet their ethical standards.
“It’s safe to say that James Bond wouldn’t get through our recruitment process and, whilst we share his qualities of patriotism, energy and tenacity, an intelligence officer in the real MI6 has a high degree of emotional intelligence, values teamwork and always has respect for the law… unlike Mr Bond.”
27 Oct 2016 UK Telegraph article
A number of technologists are called upon to support covert, military, or police organizations in their countries. There is some comfort in thinking that such entities, including MI6 (yes, it is real), have some level of ethical standards they apply. That does not, however, exempt an individual from applying their own professional and other standards in their work as well.
A growing area reflecting the impact of technology on society is ethics and AI. This has a few variations: one is what is ethical in terms of developing or applying AI; the second is what is ethical for AIs. (Presumably, for an AI to select an ethical vs. unethical course of action, either it must be programmed that way, or it must learn what is ethical as part of its education/awareness.)
Folks playing in the AI ethics domain include a recent consortium of industry players (IBM, Google, Facebook, Amazon and Microsoft), the IEEE Standards folks, and the White House (with a recent white paper).
This is a great opportunity for learning about the issues in the classroom and for developing deep background for policy and press folks. Concerns will emerge here: consider self-driving cars, robots in warfare or police work, and of course the general public, where misconceptions and misinformation are likely. We see many movies where evil technology is a key plot device, and get many marketing messages on the advantages of progress. Long-term challenges for informed evolution in this area will require less simplistic perspectives on the opportunities and risks.
There is a one-day event in Brussels on Nov. 15, 2016 that will provide a current view of some of the issues and discussions.
The Wall St. Journal has a piece, “Your Next Friend Could Be a Robot”, which discusses a device in your home, not a disembodied associate on Facebook. The initial example is “embodied” as a speaker/microphone in Amazon’s Echo Dot, but the piece also covers similar devices from Google, cell phones, and even Toyota. So what?
The article focuses on a 69-year-old woman living alone who has a relationship with these devices. They are connected to the Internet 24/7, with a back-end AI voice recognition/response system. (The article asserts it’s not AI because it’s not conscious, which is a different consideration.) Apparently “double digit” percentages of interactions with Alexa (Amazon’s non-AI personality) are “non-utilitarian”, presumably not triggering orders for Amazon products.
The good news: folks feel less lonely, more connected, and have “someone” there 24/7, responding to queries (with pre-programmed answers) such as “what are the laws of robotics”; see Reddit’s list of fun questions. But …
The bad news: it’s not clear what happens when you tell Alexa to call 911, or that you have fallen down and can’t get up. While there are “wakeup” and “sleep” words you can use, the very fact that a wakeup word can be recognized indicates that a level of 24/7 monitoring is in place. No doubt this can be hacked, tapped, and otherwise abused.
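The point about wakeup words can be made concrete with a minimal sketch. This is a hypothetical, simplified loop (using already-transcribed text chunks in place of real audio, with invented wake/sleep words): notice that the detector must examine every chunk of input to notice the wake word at all, which is exactly the always-listening behavior at issue.

```python
# Hypothetical sketch of a wake-word loop. Real assistants process raw
# audio on-device; here text stands in for transcribed audio chunks.
WAKE_WORD = "alexa"
SLEEP_WORD = "goodbye"

def run_assistant(audio_chunks):
    """Process a stream of utterances; return those acted upon.

    Note that the loop inspects EVERY chunk, awake or asleep --
    that inspection is the 24/7 monitoring."""
    awake = False
    handled = []
    for chunk in audio_chunks:
        text = chunk.lower()
        if not awake:
            if WAKE_WORD in text:   # must listen even while "asleep"
                awake = True
        elif SLEEP_WORD in text:
            awake = False
        else:
            handled.append(text)    # forwarded to the cloud back end
    return handled

stream = ["what's for dinner", "alexa", "order light bulbs",
          "goodbye", "private conversation"]
print(run_assistant(stream))  # ['order light bulbs']
```

Only one utterance is acted on, but all five were "heard", which is why hacking or tapping such a device is the real concern.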
What is Amazon’s liability if you tell Alexa you need help and no effective response occurs? No doubt time and lawsuits will tell.
The Internet is a widespread tool reflecting, to some degree, free speech and freedom of the ‘press’. As such, it is a threat to entities that wish to suppress these, or make them subservient to other priorities. A recent report on DefenseOne.com outlines the ways in which some countries have been able to put an “on-off” switch in place, and use it. The trick is having all or most of the traffic going through a small number of (authorized) intermediate nodes where the plug can be pulled.
Countries like Egypt and China have such bottlenecks. Countries with large numbers of intermediate nodes connected outside the country include Canada, Germany, and the Netherlands. Surprisingly, Russia has a very large number of such connections, which the article explains as a complexity designed to make tracking cyber-crime nearly impossible.
Johnson & Johnson recently disclosed that one of its insulin pumps might be subject to hacking. This follows assertions that pacemakers and implanted defibrillators might also be subject to attack. No doubt some wireless medical devices will have security vulnerabilities, with at least software if not hardware attack vectors.
The motives for attack matter as well. Hacking a fleet of cars has widespread visibility and is associated with a different set of motives than a personal attack via a medical device. Either way, murder or assassination are potential uses for these types of flaws.
“No instances of medical-device hacking have been disclosed,” according to the related WSJ article. Of course, when a diabetic dies of an insulin excess or deficit, murder by hacking might not be on the post-mortem evaluation list. The abuses here are (hopefully) rare, but the lack of disclosure does not imply the lack of a successful attack.
The Workshop on Advanced NeuroTechnologies for BRAIN Initiatives will be held 10-11 November in San Diego. This is a convenient time and location for those traveling to attend Neuroscience 2016, which begins 12 November, also in San Diego. The link for registration is here: http://brain.ieee.org/news/antbi/
Technology & Society has touched on this a few times: RFID implants in people. WSJ has an update worth noting. My new car uses RFID chips to open doors and start the ignition. Having these “embedded” could be of value… but what if I buy a different car? The article lists electronic locks as one application, along with embedding medical history, contact information, etc. Your “RFID” constellation (credit cards, ID cards, keys, etc.) can identify you uniquely, for example as you enter a store. So the ‘relationship’ between your RFID and the intended devices goes beyond that one-to-one application.
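The “constellation” point can be illustrated with a small sketch. This is hypothetical (invented tag IDs, and a plain hash standing in for whatever profiling a real tracker might do): even if no single tag names you, the set of tag IDs a reader observes forms a stable fingerprint from one visit to the next.

```python
# Hypothetical sketch: fingerprinting a person from the SET of RFID
# tag UIDs a doorway reader observes, regardless of read order.
import hashlib

def constellation_fingerprint(tag_ids):
    """Hash the sorted, de-duplicated tag UIDs into one identifier."""
    canonical = ",".join(sorted(set(tag_ids)))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Same person on two visits; tags happen to be read in different order.
visit1 = ["card:4a1f", "key:77b2", "badge:09cc"]
visit2 = ["badge:09cc", "card:4a1f", "key:77b2"]
print(constellation_fingerprint(visit1) == constellation_fingerprint(visit2))  # True
```

No single tag here is an “identity” tag, yet the store can recognize the returning visitor, which is why the relationship goes beyond the intended one-to-one applications.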
An ethical issue raised was that of consent when embedding RFID in a person who may not be able to provide consent, but would benefit from the ID potential, lock access (or denial), etc. An obvious example is tracking a dementia patient who leaves the facility. Of course we already put wrist bands on such patients that are difficult to remove, and these might contain RFID or other locating devices.
What applications might cause you to embed a device under your skin? What concerns do you have about possible problems/issues?