Your TV might be binge-watching you!

VIZIO is reportedly paying fines for using its customers’ TVs to track their viewing patterns in significant detail, and for associating this with IP-address data including age, sex, income, marital status, household size, education level, home ownership, and home value.

Presumably this might have been avoided if VIZIO had presented users with a “privacy statement” or “terms of use” when they installed their TVs. But the failure to obtain even the appearance of consent put the company in this situation.

It has been clear that all “free” media (and many paid channels), whether TV, cable, radio, or Internet streaming, want to track this information. On one hand, they can use it to provide “a better user experience” (showing you the ads and suggested programs that match your demographics). The flip side is also true: selling your data to third parties (a.k.a. “trusted business partners”) so they can be more effective at interacting with you is part of the game.

Now let’s step it up a notch. Your TV (or remote control) may use voice recognition, often using the “mother ship’s” resources for AI analysis of what you have requested. That is, your voice is sent back to servers that interpret and respond. This leads to another level of monitoring: some of your characteristics might be inferred from your voice, others from background sounds or voices, and even more if the recording device just happens to track you all the time. “Siri, are you listening in again?” And then add a camera … now the fun can really start.

Hacking Medical Devices

Johnson & Johnson recently disclosed that one of its insulin pumps might be subject to hacking. This follows assertions that pacemakers and implanted defibrillators might also be subject to attack. No doubt some wireless medical devices will have security vulnerabilities, with at least software, if not hardware, attack vectors.

The motives for attack are perhaps equally important in each case. Hacking a fleet of cars would have widespread visibility and be associated with a different set of motives than a personal attack via a medical device. However, murder or assassination are potential uses for either type of flaw.

“No instances of medical-device hacking have been disclosed,” according to the related WSJ article. Of course, when a diabetic dies of an insulin excess or deficit, murder by hacking might not be on the post-mortem evaluation list. The abuses here are (hopefully) rare, but the lack of disclosure does not imply the lack of a successful attack.

To GO or Not to GO?

Pokemon Go has become a delightful and disturbing experiment in the social impact of technology. This new “free” software for smart phones implements an augmented reality, overlaying the popular game on the real world. Fans wander the streets, byways, and public (and in some cases private) spaces, following the elusive characters on their smart phones to capture them “in world” or to collect virtual items. The uptake has been amazing, approaching Twitter in terms of user-hours just days after introduction. It has also added $12 billion to Nintendo’s stock value (almost double).

Let’s start with “free”, and $12 billion. The trick is having a no-holds-barred privacy policy. Not surprisingly, the game knows who you are and where you are. It also can access/use your camera, storage, email/phone contacts, and potentially your full Google account (email contents, Drive contents, etc.). The money comes because all of this is for sale, in real time. (“While you track Pokemon, Pokemon Go tracks you”, USA Today, 12 July 16.) Minimally, you can expect to see “Lure Modules” (a game component) used to bring well-vetted (via browser history, email, call history, disk content, etc.) customers into stores that then combine ad promotions with in-store characters. Perhaps offering your favorite flavor of ice cream, or drawing you into a lawyer’s office that specializes in the issues you have been discussing on email, or a medical office that … well, you get the picture, and those are just the legitimate businesses. Your emails from your bank may encourage less honest folks to lure you into a back alley near an ATM, a genre of crime that has only been rumored so far.

The July 13th issue of USA Today outlines an additional set of considerations. Users are being warned by police, property owners, and various web sites for various reasons. The potential for wandering into traffic, into a sidewalk obstruction, or over the edge of a cliff (is there a murder plot hiding in here?) is non-trivial while pursuing an elusive virtual target. Needless to say, playing while driving creates a desperate need for self-driving cars. Since the targets change with time of day, folks are out at all hours, in all places, doing suspicious things. This triggers calls to police. Some memorial sites, such as Auschwitz and the Washington DC Holocaust Memorial Museum, have asked to be excluded from the play-map. There are clearly educational opportunities that could be built into the game; tracing Boston’s “Freedom Trail” while requiring player engagement with related topics is a possible example. However, lacking explicit consideration of the educational context, there are areas where gaming is inappropriate. Also, some public areas are closed after dark, and the game may result in players trespassing in ways not envisioned by the creators, which may create unhealthy interactions with the owners, residents, etc. of the area.

One USA Today article surfaces a concern that very likely was missed by Nintendo, and is exacerbated by the recent deaths of black men in US cities and the shooting of police in Dallas: “For the most part, Pokemon is all fun and games. Yet for many African Americans, especially men, their enjoyment is undercut by fears they may raise suspicion with potentially lethal consequences.” Change the countries and communities involved, and similar concerns may emerge elsewhere as well. This particular piece ends with an instance of a black youth approaching a policeman who was also playing the game, with a positive moment of interaction as they helped each other pursue in-game objectives.

It is said every technology cuts both ways. We can hope that experience and consideration will lead both players and Nintendo to evolve the positive potential of augmented reality, perhaps with a bit greater respect for user privacy.

Ethics of Killing with Robots

The recent murder of police officers in Dallas, finally ended by the lethal use of a robot to kill the shooter, has triggered an awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a remotely controlled drone, simply paralleling the current use of drones (and no doubt other devices) as weapons.

We already have examples of “autonomous” devices as well. Mines, both land and ocean (and eventually space), all reflect devices “programmed to kill” that operate with no human at the trigger. If anything, a lethal frustration with these devices is that they are too dumb, killing long after their intended use has passed.

I saw a comment in one online discussion implying that robots are, or would be, programmed with Asimov’s first law: “Do not harm humans.” But of course this is neither viable at this time (it takes an AI to evaluate that concept) nor a directive that is likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.

Projecting lethal force at a distance may be one of the few remaining distinguishing characteristics of humans (since we have found animals innovating tools, using language, and so forth). Ever since Homo Whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a ‘position of safety’ is essentially the basic design criterion for many weapon systems. Homo Whomever may have also crossed the autonomous Rubicon with the first snare or pit-fall trap.

Our challenge is to make sure our systems designers, and those acquiring the systems, have serious ethical training with practical application. Building in the safeguards, expiration dates, decision criteria, etc. should be an essential aspect of lethal autonomous systems design. “Should” is unfortunately the operative word; it is unlikely in many scenarios.

It’s 10PM, do you know what your model is doing?

“Customers like you have also …” This concept appears, explicitly or implicitly, at many points in the web-of-our-lives, a.k.a. the Internet. Specific corporations and aggregate operations are building increasingly sophisticated models of individuals. Not just “like you”, but “you”! Prof. Pedro Domingos at UW, in his book “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World”, suggests this model of you may become a key factor in your ‘public’ interactions.
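The “customers like you” mechanic is, at its core, similarity scoring over histories. Here is a minimal sketch of user-based collaborative filtering; the users, items, and ratings are entirely invented, and real systems use far larger matrices and more robust similarity measures:

```python
from math import sqrt

# Hypothetical purchase/rating histories: user -> {item: rating}.
ratings = {
    "you":   {"book_a": 5, "book_b": 3, "gadget_x": 4},
    "user1": {"book_a": 4, "book_b": 2, "gadget_x": 5, "gadget_y": 5},
    "user2": {"book_a": 1, "book_b": 5, "gadget_y": 2},
}

def cosine(u, v):
    """Cosine similarity computed over the items two users share."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(target, ratings):
    """Suggest items owned by the most similar user that the target lacks."""
    others = [(cosine(ratings[target], ratings[o]), o)
              for o in ratings if o != target]
    _, nearest = max(others)
    return [i for i in ratings[nearest] if i not in ratings[target]]

print(recommend("you", ratings))  # -> ['gadget_y']
```

The same machinery that suggests a gadget can, of course, suggest a political donation or an insurance product; only the data changes.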

Examples include LinkedIn adding a “find me a job” button that would conduct interviews with relevant open positions and provide you a list of the best. Or perhaps locating a house, a car, a spouse … well, maybe some things are better done face-to-face.

Apparently an Asian firm, “Deep Knowledge”, has appointed a virtual director to its board. In this case it is a construct designed to detect trends that the human directors might miss. However, one suspects that Apple might want a model of Steve Jobs around for occasional consultation, if not back in control again.

Ethics of Virtual Reality

The Jan. 4, 2016 Wall St Journal has an article “VR Growth Sparks Questions About Effects on Body, Mind” pointing out, as prior publications have, that 2016 is likely to be the Year of VR. The U.S. Consumer Electronics Show is starting this week in Las Vegas, where many neat, new and re-packaged concepts will be strongly promoted.

The article points to issues of physical health; nausea is one well-documented potential factor. But work has also been taking place on residual effects (how soon should you drive after VR?), how long to remain immersed before you ‘surface’, etc. Perhaps the key consideration is the degree to which our bodies/brains accept the experiences of VR as real, altering our thinking and behaviour. (Prof. Jeremy Bailenson, director of Stanford’s Virtual Human Interaction Lab, confirms this is one impact.)

All of the pundits point out that every new technology has its potential uses/abuses. But that does not excuse us from the specific considerations that might apply to VR. A point raised in the article: “Scares in VR are borderline immoral.” There is a line of technology from “watching” to “first person” to “immersion” that should be getting our attention. The dispute over “children impacted by what they watch on TV”, which moved on to first-person shooter video games, is sure to recur for VR. But in VR, you can be the victim as well. I first encountered consideration of the after-effects of rape in a video game environment at an SSIT conference some years ago. Even with the third-party perspective in that case, the victim was traumatized. No doubt VR will have a higher impact. There are no doubt lesser acts that can be directed at a VR participant that will have greater impact in VR than they might with less immersive technology.

This is the time to start sorting out scenarios and possible considerations for vendors of technology, apps, and content, and also to watch for the quite predictable unexpected effects. Do you have any ‘predictions’ for 2016 and the Year of VR?


Predictive Analytics – Rhinos, Elephants, Donkeys and Minority Report

The IEEE Computer Society published “Saving Rhinos with Predictive Analytics” in both IEEE Intelligent Systems and the more widely distributed ‘Computing Edge’ (a compendium of interesting papers taken from 13 of the CS publications and provided to members and technologists at no cost). The article describes how data-based analysis of both rhino and poacher activity, in concert with AI algorithms, can focus enforcement activities in terms of timing and location, and hopefully save rhinos.

For those outside of the U.S.: the largest population of elephants (Republicans) and donkeys (Democrats) is in the U.S., these animals being symbols of the respective political parties. Now, on the brink of the 2016 presidential primaries, these critters are being aggressively hunted (ok, actually sought after for their votes). Not surprisingly, the same tools are used to locate, identify, and predict the behaviour of these persons. When I was young (1964) I read a book called The 480, which described the capabilities of that timeframe for computer-based political analysis and targeting of the “groups” required to win an election. (480 was the number of groupings of the 68 million voters in 1960, used to identify which groups a candidate needed to attract to win.) 21st-century analytics are a bit more sophisticated, with as many as 235 million groups: one per potential voter (and over 130 million of them likely to vote). A recent kerfuffle between the Sanders and Clinton campaigns over “ownership/access” to voter records stored on a computer system operated by the Democratic National Committee reflects the importance of this data. By cross-connecting (data mining) registered-voter information with external sources such as web searches, credit card purchases, etc., the candidates can mine this data for cash (donations) and later votes. A few-percentage-point change in delivering voters to the polls (both figuratively, and literally by providing rides where needed) in key states can impact the outcome. So knowing each individual is a significant benefit.
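The per-voter targeting described above can be sketched as a simple scoring model: merge data sources into features, score each voter, and spend outreach effort where it might change the outcome. Everything below (features, weights, thresholds, voter records) is invented for illustration; real campaign models draw on hundreds of variables:

```python
from math import exp

# Hypothetical per-voter features merged from registration rolls,
# purchases, and web activity (all names and values illustrative).
voters = [
    {"id": "v1", "age": 30, "donated_before": 1, "rural": 0},
    {"id": "v2", "age": 61, "donated_before": 0, "rural": 1},
    {"id": "v3", "age": 45, "donated_before": 1, "rural": 1},
]

# Illustrative hand-set weights; a real model would fit these to data.
WEIGHTS = {"age": 0.01, "donated_before": 1.5, "rural": -0.4}
BIAS = -1.0

def turnout_probability(voter):
    """Logistic score: estimated probability this voter turns out."""
    z = BIAS + sum(WEIGHTS[f] * voter[f] for f in WEIGHTS)
    return 1 / (1 + exp(-z))

# Spend the "ride to the polls" budget on voters whose turnout is
# genuinely uncertain -- probabilities near the middle of the range.
targets = [v["id"] for v in voters
           if 0.35 < turnout_probability(v) < 0.7]
print(targets)  # -> ['v1', 'v3']
```

The 480’s insight survives intact: only the group count has grown, from 480 buckets to one score per voter.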

Predictive Analytics is saving rhinos and affecting the leadership of super powers. But wait, there’s more. Remember the movie “Minority Report” (2002)? On the surface, the movie presents computer technology able to predict future crimes by specific individuals, who are arrested to prevent the crimes. (Spoiler alert:) the movie actually proposes that a group of psychics is the real source of insight. This was consistent with the original story (Philip K. Dick, 1956), which predates The 480 and the emergence of the computer as a key predictive device. Here’s the catch: we don’t need the psychics, just the data and the computers. Just as a specific individual voting for a specific candidate, or a specific rhino getting poached in a specific territory, can be assigned a specific probability, we are reaching the point where aspects of the “Minority Report” predictions can be realized.

Oddly, in the U.S., governmental collection and use of this level of Big Data is difficult due to privacy illusions, and probably bureaucratic stovepipes and fiefdoms. These problems do not exist in the private sector. Widespread data collection on everybody at every opportunity is the norm, and the only limitation on sharing is determining the price. The result is that your bank or insurance company is more likely than the government to be able to predict your likelihood of being a criminal, terrorist, or even a victim of a crime. Big Data super-powers like Google, Amazon, Facebook, and Acxiom have even more at their virtual fingertips.

Let’s assume that sufficient data can be obtained, and robust AI techniques applied, to identify a specific individual with a high probability of a problematic event (initiating, or being the victim of, a crime in the next week), and that this data is implicit or even explicit in the hands of some corporate entity. Now what? What actions should said corporation take? What probability is needed to trigger such actions? What liability exists (or should exist) for failure to take such actions?
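One way to frame the “what probability should trigger action” question is expected cost: act when the predicted risk times the harm avoided exceeds the cost of acting (including false-alarm fallout). A deliberately simplified sketch, with all numbers hypothetical:

```python
def should_act(p_event, cost_of_acting, harm_if_event):
    """Act when the expected harm avoided exceeds the cost of acting.

    p_event: model's probability the problematic event occurs
    cost_of_acting: cost of intervening, including false-alarm fallout
    harm_if_event: harm (liability, injury) if nothing is done
    """
    return p_event * harm_if_event > cost_of_acting

# A 2% predicted risk justifies a cheap warning...
print(should_act(0.02, cost_of_acting=10, harm_if_event=1_000_000))     # True
# ...but not an intrusive, expensive intervention.
print(should_act(0.02, cost_of_acting=50_000, harm_if_event=1_000_000)) # False
```

The arithmetic is trivial; the ethics are not. Who sets the harm figure, and who bears the false-alarm cost, are exactly the questions the corporation (and the legislature) would have to answer.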

These are issues that the elephants, and donkeys will need to consider over the next few years — we can’t expect the rhinos to do the work for us.  We technologists may also have a significant part to play.

Cars Reporting Accidents, Violations

In addition to cars using network connections to call for assistance, here is a natural consequence: your car may notify police of an accident, in this case a driver leaving a hit-and-run situation. My insurance company offered to add a device to my car that would allow them to raise my rates if I go faster than they think I should. Some insurance companies will raise your rates if you exceed their limit (70 MPH) even in areas where the legal limit is higher (Colorado, Wyoming, etc. have 75+ MPH posted limits). A phone company is promoting a device to add to your car that provides similar capabilities (presented with a safety and comfort rationale).

So what are the possibilities?

  • Detect accident situations and have emergency response arrive even if you are unable to act — and as noted above this may also detect hit-and-run accidents.
  • Provide a channel for you to communicate situations like “need roadside assistance” or “report roadside problem”.
  • Monitor car performance characteristics and notify user (shop?) of out-of-spec conditions
  • Using this same “diagnostic port” to take remote control of the car
    • Police action – to stop driver from escaping
    • Ill-intended action, to cause car to lose control
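The monitoring possibilities above boil down to reading telemetry and applying rules. A toy sketch of that rule layer follows; the sample data and thresholds are invented, and this does not implement the actual diagnostic-port protocols a real car uses:

```python
# Hypothetical telemetry samples as a car's diagnostic port might
# report them (speed in MPH, coolant temperature in degrees F).
samples = [
    {"speed": 68, "coolant_f": 205, "airbag_deployed": False},
    {"speed": 82, "coolant_f": 251, "airbag_deployed": False},
    {"speed": 0,  "coolant_f": 210, "airbag_deployed": True},
]

SPEED_LIMIT = 70      # the insurer's threshold, not the legal limit
COOLANT_MAX_F = 240   # out-of-spec engine condition

def check(sample):
    """Return the notifications a rule-based monitor would emit."""
    alerts = []
    if sample["airbag_deployed"]:
        alerts.append("call emergency response")
    if sample["speed"] > SPEED_LIMIT:
        alerts.append("report speed to insurer")
    if sample["coolant_f"] > COOLANT_MAX_F:
        alerts.append("notify owner: engine out of spec")
    return alerts

for s in samples:
    print(check(s))
```

Note that the same few lines serve the helpful cases (emergency response, maintenance) and the contested one (reporting you to your insurer); the policy lives in the rules, not the plumbing.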

So, in line with the season, your car is making a list, checking it twice, and going to report if you are naughty or nice …

====

One additional article, from the WSJ Dec. 10th, covers the battle between car manufacturers and smartphone companies for control of the car-network environment. The corporate view, from Don Butler, Ford Motor’s Director of Connected Vehicles: “We are competing for mind-share inside the vehicle.” Or as the WSJ says, “Car makers are loath to give up key information and entertainment links… and potentially to earn revenue by selling information and mobile connectivity.” In short, the folks directing the future of connected vehicles are not focusing on the list of possibilities and considerations above.


T&S Magazine June 2015 Contents


Volume 34, Number 2, June 2015

3 ISTAS 2015 – Dublin
4 President’s Message
Deterministic and Statistical Worlds
Greg Adamson
5 Editorial
Mental Health, Implantables, and Side Effects
Katina Michael
8 Book Reviews
Reality Check: How Science Deniers Threaten Our Future
Stealing Cars: Technology & Society from the Model T to the Gran Torino
13 Leading Edge
“Ich liebe Dich UBER alles in der Welt” (I love you more than anything else in the world)
Sally Applin
Opinion
16 Tools for the Vision Impaired
Molly Hartman
18 Learning from Delusions
Brian Martin
21 Commentary
Nanoelectronics Research Gaps and Recommendations*
Kosmas Galatsis, Paolo Gargini, Toshiro Hiramoto, Dirk Beernaert, Roger DeKeersmaecker, Joachim Pelka, and Lothar Pfitzner
80 Last Word
Father’s Day Algorithms or Malgorithms?
Christine Perakslis

SPECIAL ISSUE—Ethics 2014/ISTAS 2014

31_ Guest Editorial
Keith Miller and Joe Herkert
32_ App Stores for the Brain: Privacy and Security in Brain-Computer Interfaces*
Tamara Bonaci, Ryan Calo, and Howard Jay Chizeck
40_ The Internet Census 2012 Dataset: An Ethical Examination*
David Dittrich, Katherine Carpenter, and Manish Karir
47_ Technology as Moral Proxy: Autonomy and Paternalism by Design*
Jason Millar
56_ Teaching Engineering Ethics: A Phenomenological Approach*
Valorie Troesch
64_ Informed Consent for Deep Brain Stimulation: Increasing Transparency for Psychiatric Neurosurgery Patients*
Andrew Koivuniemi
71_ Robotic Prosthetics: Moving Beyond Technical Performance*
N. Jarrassé, M. Maestrutti, G. Morel, and A. Roby-Brami

*Refereed Articles


Auto(mobile) hacking – is it just a myth?

Scientific American ran a “Technofiles” piece trying to debunk the idea that cars can be hacked. The online version corrects errors made in the November 2015 print issue, where that variation of the article overstated the time required, understated the number of potentially ‘at risk’ cars, and misstated the proximity required to accomplish the feat.

This has been a topic here before, so I won’t repeat that perspective. However, I will copy my reply to the article posted on the Scientific American web site, since I think this effort to dismiss the risk does a poor service both to the public and to the industry, which needs to give serious consideration to how it manages software and communications that can affect the health and safety of consumers.

David, et al, are not getting the message.
Yes, some of the details are wrong in David’s article (I guessed they were without being party to the Wired article). Also wrong is the assumption that an “Internet” connection is required; external communications that can receive certain types of data are all that is required. (OnStar does not use the Internet.) And the “premium savings” device advocated by my insurance company (“oh no, our folks assure us it can’t be hacked”) connects to the diagnostic port of the car (i.e., the ability to control/test all aspects of operation) and is cell-phone connected to whoever can dial the number.
This is not model specific, since OnStar and after-market components span multiple models and multiple suppliers. This is not Internet specific, but truly remote control would require either cellular or Internet connectivity (WiFi and Bluetooth, which are also likely “bells and whistles”, are proximity limited).
This does not require purchasing a car; they do rent cars, you know. And to the best of my knowledge, no automobile manufacturer has licensed software engineers reviewing and confirming a “can’t be done” claim, even if they did patch the flaw that the U.S. DoD/DARPA folks exploited for Sixty Minutes. Until 9/11, no one had hijacked a commercial jet to destroy a major landmark, so the lack of examples is not a valid argument. We have multiple proofs of concept at this point, which significantly reduces the cost and time required to duplicate this. There are substantial motives, from blackmail to terrorism (a batch of cars, any cars, going off the road after a short prior notice from a terrorist organization would get the front-page coverage such folks desire; terrorists don’t need to select which cars). The issues here, including additional considerations on privacy, etc., are ongoing discussions in the IEEE Society for the Social Implications of Technology, the world’s largest technical professional society (IEEE)’s forum for such considerations. See http://ieeessit.org/?p=1364 for related postings.

I’m not sure the editors will “get it” … but hopefully our colleagues involved in developing the cars and after-market devices can start implementing some real protections.

A question for a broader audience: “How do cell phone or internet based services (such as On-Star) affect your potential car buying?”