Big Brother/Data 2016

The power of big data, AI-driven analytics, and subtle data collection is converging toward a future only hinted at in Orwell’s 1984. With rapid developments on many fronts, it is not surprising that those of us who are only moderately paranoid have not been tracking it all. So here’s an update on some of the recent information on who is watching you, and why:

Facebook (no surprise here) has been running personality quizzes that evaluate your OCEAN score: Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism. These “free” evaluations are provided by Cambridge Analytica. The application of this data to influencing political elections is documented by the NY Times (subscription required) and quoted in part by others. The short take is that your Facebook profile (name, etc.) is combined with your personality data and with “onboarding” data from other sources such as age, income, debt, purchases, health concerns, and car, gun, and home ownership. Cambridge Analytica is reported to hold records with 3,000 to 5,000 data points on each of 230 million adult Americans — which is most of us.

How do they use this data? Psychographic micro-targeted advertising is the recent application, seeking to influence voting in the U.S. election. They only support Republican candidates, so other parties will have to develop their own doomsday books. There is no requirement that the use of the quizzes be disclosed, nor that the “ads” be identified as political or approved by any candidate. The ads might not appear to have any specific political agenda; they might just point to news (or fake news) stories that play to your specific personality and have been test-marketed to validate the influence they will have on the targeted voter(s). This may inspire you to get out and vote, or to stay home and not bother — depending on which candidate(s) you support (based on your social media streams, or on more generalized characteristics if you have not personally declared your preferences). The impact? Quite possibly the U.S. presidency.

But wait, that’s not all.

The U.K. is expanding its surveillance powers, requiring Internet companies to retain interaction/transaction records for a year, including every web site you have accessed. This is apparently in part a response to assertions by France that similar powers had foiled an ISIS attack there. The range of use (and abuse) the UK government and its allies might apply remains to be seen (or, more likely, will remain hidden).

But consider what China is doing to encourage residents to be “sincere.” [Here is a serious limitation of my linguistic and cultural skills — no doubt there is a Mandarin word being translated as “sincere,” and it carries cultural implications that may not be evident in translation.] Data collected to determine your “social credibility rating” includes: tax, loan, bill, and other payments (on time?); adherence to traffic rules; family planning limits; academic record; purchasing; online interactions; the nature of information you post online; volunteer activity; and even “filial piety” (respect for elders/ancestors). And the applications of such data? So far 4.9 million airline tickets have been refused. Your promotion, or even job opportunities, can be limited, with “sensitive” jobs — judges, teachers, accountants, etc. — subject to review. A high score will open doors, possibly including faster access to government services. By letting citizens see their score, they can be encouraged to ‘behave themselves better.’ By not disclosing all of the data collected, nor all of its implications, the state can bully citizens into far greater sincerity than they might adopt if they were just trying not to break the law.
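The aggregation mechanics are simple enough to sketch. Here is a toy Python model of how disparate behavioral signals could be folded into one gate-keeping number; every field name, weight, and threshold below is invented for illustration, not taken from the actual Chinese system:

```python
# Hypothetical sketch: folding unrelated behavioral data into a single
# "social credibility" score. All fields, weights, and thresholds here
# are invented for illustration.

WEIGHTS = {
    "on_time_payment_rate": 100,  # fraction of bills paid on time (0..1)
    "traffic_violations": -20,    # per recorded violation
    "flagged_posts": -50,         # per online post flagged as "insincere"
    "volunteer_hours": 2,         # per reported hour
}

def social_score(record, base=500):
    """Sum weighted behavioral signals onto a base score."""
    return base + sum(w * record.get(field, 0) for field, w in WEIGHTS.items())

def ticket_purchase_allowed(record, threshold=450):
    """A score below the threshold silently blocks the purchase."""
    return social_score(record) >= threshold

citizen = {"on_time_payment_rate": 0.9, "traffic_violations": 3, "flagged_posts": 1}
print(social_score(citizen))             # 500 + 90 - 60 - 50 = 480
print(ticket_purchase_allowed(citizen))  # True, but barely
```

The opacity described above comes from keeping the weights and thresholds secret: the citizen sees only the final number, never which behavior moved it.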

Your comments, thoughts and responses are encouraged, but remember — they are being recorded by others for reasons you may never know.  … Sincerely yours, Jim

Robot Friends

The Wall St. Journal has a piece, “Your Next Friend Could Be a Robot,” about a device in your home, not a disembodied associate on Facebook. The initial example is “embodied” as a speaker/microphone in Amazon’s Echo Dot, but the article also covers similar devices from Google, cell phones, and even Toyota. So what?

The article focuses on a 69-year-old woman living alone who has a relationship with these devices. They are connected to the Internet 24/7, with a back-end AI voice recognition/response system. (The article asserts it’s not AI because it’s not conscious, which is a different consideration.) Apparently “double digit” percentages of interactions with Alexa (Amazon’s non-AI personality) are “non-utilitarian” — presumably not triggering orders for Amazon products.

The good news: folks feel less lonely, more connected, and have “someone” there 24/7 responding to queries (with pre-programmed answers) such as “what are the laws of robotics” — see Reddit’s list of fun questions. … but

The bad news: it’s not clear what happens when you tell Alexa to call 911, or that you have fallen down and can’t get up. While there are “wake” and “sleep” words you can use, the very fact that a wake word can be recognized indicates that a level of 24/7 monitoring is in place. No doubt this can be hacked, tapped, and otherwise abused.
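The reason wake-word recognition implies continuous monitoring can be shown in a few lines. In this sketch a trivial string match stands in for the real on-device speech model; the point is that every audio frame must be processed, and some are buffered, before any wake word is ever spoken:

```python
# Sketch: a wake-word loop must examine every audio frame, 24/7, and
# typically buffers recent audio so context is available on wake.
# detect_wake_word() is a stand-in for a real on-device speech model.

import collections

WAKE_WORD = "alexa"

def detect_wake_word(frame):
    # Real devices run a small neural model on raw audio instead.
    return WAKE_WORD in frame.lower()

def monitor(audio_frames):
    """Return the buffered frames handed upstream when the wake word fires."""
    ring = collections.deque(maxlen=8)  # rolling pre-wake audio buffer
    for frame in audio_frames:
        ring.append(frame)              # retained even before any wake word
        if detect_wake_word(frame):
            return list(ring)           # this buffer leaves the device
    return None                         # nothing sent upstream... in theory

frames = ["dinner at six", "call your mother", "Alexa, what time is it"]
print(monitor(frames))  # all three frames, including pre-wake speech
```

Note that the two pre-wake frames end up in the buffer shipped upstream; that buffer is exactly the hook for hacking and tapping.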

What is Amazon’s liability if you tell Alexa you need help and no effective response occurs?  No doubt time and lawsuits will tell.

Is RFID Getting Under Your Skin?

Technology & Society has touched on this a few times: RFID implants in people. The WSJ has an update worth noting. My new car uses RFID chips to open the doors and start the ignition. Having these “embedded” could be of value… but what if I buy a different car? The article lists electronic locks as one application, along with embedded medical history, contact information, etc. Your “RFID constellation” (credit cards, ID cards, keys, etc.) can identify you uniquely — for example, as you enter a store — so the relationship between your RFID tags and their intended devices goes beyond the one-to-one application.
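A sketch of why the constellation matters: even if a tracker ignores what each tag means, an order-independent hash of the set of tags seen together re-identifies the carrier. The tag values here are made up:

```python
# Sketch: the set of RFID/contactless IDs carried together forms a
# fingerprint, even when no single tag names the person. The tag
# values below are invented.

import hashlib

def constellation_fingerprint(tag_ids):
    """Order-independent hash of every tag seen in one read."""
    joined = "|".join(sorted(tag_ids))
    return hashlib.sha256(joined.encode()).hexdigest()[:16]

monday = {"card:4413", "key:88f2", "badge:a901"}
friday = {"badge:a901", "card:4413", "key:88f2"}  # same pockets, any read order

print(constellation_fingerprint(monday) ==
      constellation_fingerprint(friday))  # True: same shopper, re-identified
```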

An ethical issue raised was consent when embedding RFID in a person who may not be able to provide it, but who would benefit from the ID potential, lock access (or denial), etc. An obvious example is tracking a dementia patient who leaves a facility. Of course, we already use wrist bands that are difficult to remove, and these might contain RFID or other locating devices.

What applications might cause you to embed a device under your skin? What concerns do you have about possible problems/issues?

Privacy and Security

Guest Post from: Marvi Islam

Let me start with privacy and link it to security. All of us know about the privacy settings on Facebook, and we like them because we can hide the things we do and the people we’re with from our family members. But wait, what about security? How is privacy linked to security?

Let’s leave the digital platform and turn to our daily lives. We need security in our banks, schools, public places, and even in our homes and parks. But have you ever wondered what price we pay for this non-existent blanket of security? Privacy. Let me reiterate: security at the price of privacy. Those cute little things we see on the ceilings of our school corridors, the ones we call “CCTV,” are installed for our security. But security from what? No one bothers to ask. Maybe the authorities want to tape everything so that if something bad happens they can go through the tapes and catch the perpetrators red-handed. But they are taping every single thing, and we don’t take this as a breach of our privacy?

These tapes have been misused a number of times, causing unpleasant incidents, and yet we accept it. A famous proverb in Hindi translates as, “You have to sacrifice one thing to get another.” Here we sacrifice our privacy to get security. With self-driving cars grabbing all the attention, still more data flows out in order to stay connected and, apparently, “secure.”

Similarly, some companies check what their employees are up to on their computers while at work. From the company’s perspective this is to avoid a possible breach of sensitive data, but is such constant monitoring even ethical? So, does it really have to be a tradeoff? Security for privacy, and vice versa?

Marvi Islam is from Islamabad, Pakistan and studies at Capital University of Science and Technology, Islamabad. https://www.facebook.com/marvi.islam

Predictive Analytics – Rhinos, Elephants, Donkeys and Minority Report

The IEEE Computer Society published “Saving Rhinos with Predictive Analytics” in both IEEE Intelligent Systems and the more widely distributed ‘Computing Edge’ (a compendium of interesting papers taken from 13 of the CS publications and provided to members and technologists at no cost). The article describes how data-based analysis of both rhino and poacher activity, in concert with AI algorithms, can focus enforcement activities in terms of timing and location, and hopefully save rhinos.

For those outside of the U.S.: the largest populations of elephants (Republicans) and donkeys (Democrats) are in the U.S., these animals being symbols of the respective political parties. Now, on the brink of the 2016 presidential primaries, these critters are being aggressively hunted — ok, actually sought after for their votes. Not surprisingly, the same tools are used to locate, identify, and predict the behavior of these persons. When I was young (1964) I read a book called The 480, which described the capabilities of that timeframe for computer-based political analysis and targeting of the “groups” required to win an election. (480 was the number of groupings of the 68 million voters of 1960 used to identify which groups a candidate needed to attract to win.) 21st-century analytics are a bit more sophisticated — with as many as 235 million groups, or one per potential voter (and over 130 million voters likely to vote). A recent kerfuffle between the Sanders and Clinton campaigns over “ownership/access” to voter records stored on a computer system operated by the Democratic National Committee reflects the importance of this data. By cross-connecting (data mining) registered voter information with external sources such as web searches, credit card purchases, etc., the candidates can mine this data for cash (donations) and later votes. A few percentage points’ change in delivering voters to the polls (both figuratively, and by providing rides where needed) in key states can impact the outcome. So knowing each individual is a significant benefit.
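The jump from The 480’s coarse groups to one “group” per voter is essentially the jump from lookup tables to per-individual scoring models. A toy logistic model shows the shape of it; the features, coefficients, and spending threshold are invented for illustration:

```python
# Toy per-voter scoring model (logistic-regression shape). Features,
# coefficients, and the spending threshold are invented illustrations.

import math

COEFFS = {"age_over_50": 0.6, "gun_owner": 0.4, "urban": -0.3, "donated_before": 1.2}
INTERCEPT = -0.5

def support_probability(voter):
    """Probability this individual turns out and votes our way."""
    z = INTERCEPT + sum(w for feat, w in COEFFS.items() if voter.get(feat))
    return 1 / (1 + math.exp(-z))

def worth_a_ride(voter, threshold=0.6):
    """Campaigns spend (ads, rides to the polls) only above a cutoff."""
    return support_probability(voter) >= threshold

voter = {"age_over_50": True, "donated_before": True}
print(round(support_probability(voter), 2))  # about 0.79
print(worth_a_ride(voter))                   # True: send the car
```

The same scoring machinery, pointed at poaching incidents instead of voters, is what the rhino article describes.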

Predictive analytics is saving rhinos and affecting the leadership of superpowers. But wait, there’s more. Remember the movie “Minority Report” (2002)? On the surface, the movie presents computer technology able to predict future crimes by specific individuals — who are arrested to prevent the crimes. (Spoiler alert) the movie actually proposes that a group of psychics is the real source of insight. This was consistent with the original story (Philip K. Dick, 1956), which predates The 480 and the emergence of the computer as a key predictive device. Here’s the catch: we don’t need the psychics, just the data and the computers. Just as a specific individual voting for a specific candidate, or a specific rhino getting poached in a specific territory, can be assigned a specific probability, we are reaching the point where aspects of the ‘Minority Report’ predictions can be realized.

Oddly, in the U.S., governmental collection and use of this level of big data is difficult due to privacy illusions, and probably to bureaucratic stovepipes and fiefdoms. These problems do not exist in the private sector. Widespread data collection on everybody at every opportunity is the norm, and the only limitation on sharing is determining the price. The result is that your bank or insurance company is more likely than the government to be able to predict your likelihood of being a criminal, terrorist, or even a victim of a crime. Big data superpowers like Google, Amazon, Facebook, and Acxiom have even more at their virtual fingertips.

Let’s assume that sufficient data can be obtained, and robust AI techniques applied, to identify a specific individual with a high probability of a problematic event — initiating, or being the victim of, a crime in the next week. And this data is implicitly or even explicitly in the hands of some corporate entity. Now what? What actions should said corporation take? What probability is needed to trigger such actions? What liability exists (or should exist) for failure to take such actions?

These are issues that the elephants and donkeys will need to consider over the next few years — we can’t expect the rhinos to do the work for us. We technologists may also have a significant part to play.

Car Reporting Accidents, Violations

In addition to cars using network connections to call for assistance, here is a natural consequence: your car may notify police of an accident, in this case a driver leaving a hit-and-run scene. My insurance company offered to add a device to my car that would allow them to raise my rates if I go faster than they think I should. Some insurance companies will raise your rates if you exceed their limit (70 MPH) even in areas where the legal limit is higher (Colorado, Wyoming, etc. have 75+ MPH posted limits). A phone company is promoting a device to add to your car to provide similar capabilities (presented with a safety and comfort rationale).

So what are the possibilities?

  • Detect accident situations and have emergency response arrive even if you are unable to act — and as noted above this may also detect hit-and-run accidents.
  • Provide a channel for you to communicate situations like “need roadside assistance” or “report roadside problem”.
  • Monitor car performance characteristics and notify user (shop?) of out-of-spec conditions
  • Using this same “diagnostic port” to take remote control of the car
    • Police action, to stop a fleeing driver
    • Ill-intended action, to cause the car to lose control

So, in line with the season, your car is making a list, checking it twice, and going to report if you are naughty or nice —

====

One additional article from the WSJ (Dec. 10th) covers the battle between car manufacturers and smartphone companies for control of the car-network environment. The corporate view, from Don Butler, Ford Motor’s Director of Connected Vehicles: “We are competing for mind-share inside the vehicle.” Or as the WSJ says, “Car makers are loath to give up key information and entertainment links… and potentially to earn revenue by selling information and mobile connectivity.” In short, the folks directing the future of connected vehicles are not focusing on the list of possibilities and considerations above.

 

T&S Magazine June 2015 Contents


Volume 34, Number 2, June 2015

3 ISTAS 2015 – Dublin
4 President’s Message
Deterministic and Statistical Worlds
Greg Adamson
5 Editorial
Mental Health, Implantables, and Side Effects
Katina Michael
8 Book Reviews
Reality Check: How Science Deniers Threaten Our Future
Stealing Cars: Technology & Society from the Model T to the Gran Torino
13 Leading Edge
“Ich liebe Dich UBER alles in der Welt” (I love you more than anything else in the world)
Sally Applin
Opinion
16 Tools for the Vision Impaired
Molly Hartman
18 Learning from Delusions
Brian Martin
21 Commentary
Nanoelectronics Research Gaps and Recommendations*
Kosmas Galatsis, Paolo Gargini, Toshiro Hiramoto, Dirk Beernaert, Roger DeKeersmaecker, Joachim Pelka, and Lothar Pfitzner
80 Last Word
Father’s Day Algorithms or Malgorithms?
Christine Perakslis

SPECIAL ISSUE—Ethics 2014/ISTAS 2014

31_ Guest Editorial
Keith Miller and Joe Herkert
32_ App Stores for the Brain: Privacy and Security in Brain-Computer Interfaces*
Tamara Bonaci, Ryan Calo, and Howard Jay Chizeck
40_ The Internet Census 2012 Dataset: An Ethical Examination*
David Dittrich, Katherine Carpenter, and Manish Karir
47_ Technology as Moral Proxy: Autonomy and Paternalism by Design*
Jason Millar
56_ Teaching Engineering Ethics: A Phenomenological Approach*
Valorie Troesch
64_ Informed Consent for Deep Brain Stimulation: Increasing Transparency for Psychiatric Neurosurgery Patients*
Andrew Koivuniemi
71_ Robotic Prosthetics: Moving Beyond Technical Performance*
N. Jarrassé, M. Maestrutti, G. Morel, and A. Roby-Brami

*Refereed Articles

 

Auto(mobile) hacking – is it just a myth?

Scientific American ran a “Technofiles” piece trying to debunk the idea that cars can be hacked. The online version corrects errors made in the November 2015 print issue, where the variation of the article overstated the time required, understated the number of potentially ‘at risk’ cars, and misstated the proximity required to accomplish the feat.

This has been a topic here before, so I won’t repeat that perspective. However, I will copy my reply to the article posted on the Scientific American web site, since I think this effort to dismiss the risk does a disservice to both the public and the industry, which needs to give serious consideration to how it manages software and communications that can affect the health and safety of consumers.

David, et al, are not getting the message.
Yes, some of the details are wrong in David’s article (I guessed they were without being party to the Wired article). Also wrong is the assumption that an “Internet” connection is required — external communications that can receive certain types of data is all that is needed. (OnStar does not use the Internet.) And the “premium savings” device advocated by my insurance company (“oh no, our folks assure us it can’t be hacked”) connects to the diagnostic port of the car (i.e., with the ability to control/test all aspects of operation) and is cell-phone connected to whomever can dial the number.
This is not model specific, since OnStar and after-market components span multiple models and multiple suppliers. This is not Internet specific, but truly remote control would require either cellular or Internet connectivity (WiFi and Bluetooth, also likely “bells and whistles,” are proximity limited).
This does not require purchasing a car… they do rent cars, you know. And to the best of my knowledge no automobile manufacturer has licensed software engineers reviewing and confirming a “can’t be done” — even if they did patch the flaw that the U.S. DoD/DARPA folks exploited for Sixty Minutes. Until 9/11 no one had hijacked a commercial jet to destroy a major landmark, so the lack of examples is not a valid argument. We have multiple proofs of concept at this point, which significantly reduces the cost and time required to duplicate this. There are substantial motives, from blackmail to terrorism (a batch of cars — any cars, terrorists don’t need to select — going off the road shortly after notice from a terrorist organization would get the front-page coverage such folks desire). The issues here, including additional considerations on privacy, etc., are ongoing discussions in the IEEE Society on Social Implications of Technology, the forum for such considerations within the IEEE, the world’s largest technical professional society. See http://ieeessit.org/?p=1364 for related postings.

I’m not sure the editors will “get it,” but hopefully our colleagues involved in developing the cars and after-market devices can start implementing some real protections.

A question for a broader audience: “How do cell phone or internet based services (such as On-Star) affect your potential car buying?”

Employee Cell Phone Tracking

An employee in California was allegedly fired for removing a tracking app from her cell phone — an app used to track her on-the-job and after-hours travel and locations. The app in question was Xora (now part of ClickSoftware).
Here are some relevant, interesting points.

  • Presumably the cell phone was provided by her employer. It may be that she was not required to have it turned on during off hours
    (but it is easy to envision jobs where 24-hour on-call is expected)
  • There are clear business uses for the tracking app, which determined time of arrival/departure from customer sites, route taken, etc.
  • There are more intrusive aspects, which shade into the objectionable when off-hours use is considered: tracking locations, time spent there, routes, breaks, etc. Presumably such logs could be of value in divorce suits, legal actions, etc.

Consider some variations of the scenario —

  1. Employee fired for inappropriate after-hours activities
  2. Detection of employees interviewing for other jobs
    (or a whistleblower reporting their employer to authorities)
  3. Possible “blackmail” using information about an employee’s off-hours activities
  4. What responsibility does the employer have for turning over records in various legal situations?
  5. What record retention policies are required? Do various privacy notifications, policies, and laws apply?
  6. What if the employer required the app to be on a personal phone, not one that was supplied?

When is this type of tracking appropriate, and when is it not?
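One partial answer is structural: filter at collection time rather than trusting retention policies. A minimal sketch (assuming a fixed 9-to-5 weekday schedule, which real jobs rarely have) drops off-hours location points before they are ever stored:

```python
# Sketch of a collection-time policy filter: location points outside
# scheduled work hours are dropped before storage. The fixed 9-to-5
# weekday schedule is an assumed simplification.

from datetime import datetime, time

WORK_START, WORK_END = time(9, 0), time(17, 0)

def should_log(point_time):
    """Keep a tracking point only on weekdays within work hours."""
    is_weekday = point_time.weekday() < 5          # Mon=0 .. Fri=4
    in_hours = WORK_START <= point_time.time() <= WORK_END
    return is_weekday and in_hours

print(should_log(datetime(2016, 5, 3, 10, 30)))  # Tuesday morning: True
print(should_log(datetime(2016, 5, 7, 10, 30)))  # Saturday: False
print(should_log(datetime(2016, 5, 3, 20, 0)))   # Tuesday evening: False
```

The point is that data never collected cannot be subpoenaed, leaked, or used for blackmail.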

I’ve marked this with the “Internet of Things” tag as well — while the example is a cell phone, similar activities occur with in-car (and in-truck) monitoring devices, medical monitoring devices, employer-provided tablets and laptops, and no doubt new devices not yet on the market.

FTC, NoMi and opting out

The U.S. Federal Trade Commission (FTC) settled charges with Nomi Technologies on April 23rd over its opt-out policy. Nomi’s business is putting devices in retail stores that track MAC addresses. A unique MAC address is associated with every device that can use WiFi — it is the key to communicating with your device (cell phone, tablet, laptop, etc.) as opposed to someone else’s device. Nomi apparently performs a hash ‘encryption’ on this address (the result is still unique, just not usable for WiFi communications) and tracks your presence near or in participating retail stores worldwide.

The question the FTC was addressing is whether Nomi adheres to its privacy policy, which indicates you can opt out in store and would know which stores are using the technology. Nomi’s privacy policy (as of April 24) indicates they will never collect any personally identifiable information without a consumer’s explicit opt-in — of course, since you do not know where they are active, nor that they even exist, it would appear that they have no consumers opting in.

Read that again closely — “personally identifiable information”… it is a MAC address, not your name, and at least one dissenting FTC commissioner asserted that “It is important to note that, as a third party contractor collecting no personally identifiable information, Nomi had no obligation to offer consumers an opt out.” In other words, as long as Nomi is not selling something to the public, they should have a no-holds-barred ability to use your private data any way they like. The second dissenting commissioner asserts, “Nomi does not track individual consumers – that is, Nomi’s technology records whether individuals are unique or repeat visitors, but it does not identify them.” Somehow this commissioner assumes that the unique hash code for a MAC address, which can be used to distinguish whether a visitor is a repeat, is less of an individual identifier than the initial MAC address (which he notes is not stored). This is rather like saying your Social Security number backwards (a simplistic hash) is not an identifier, whereas the number in normal order is. Clearly the data is a unique identifier, and it is stored.

Nomi offers the service (according to their web site) to “increase customer engagement by delivering highly relevant mobile campaigns in real time through your mobile app.” So the data the store (at its option) chooses to collect from customers (presumably by their opting in via downloading an app) is the point where your name, address, and credit card information are tied to the hashed MAC address.
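The commissioners’ distinction between a MAC address and its hash is thinner than it sounds, because a cryptographic hash is deterministic. A few lines of Python (with a made-up MAC address) show that the hashed value works exactly like the original as a visit-linking key:

```python
# Sketch: hashing a MAC address yields a stable pseudonym, which is all
# a tracker needs to link visits. The MAC address below is made up.

import hashlib

def hashed_id(mac):
    """Deterministic hash of the kind described: same input, same output."""
    return hashlib.sha256(mac.lower().encode()).hexdigest()

visit1 = hashed_id("a4:5e:60:c2:19:7b")  # phone walks past the store Monday
visit2 = hashed_id("A4:5E:60:C2:19:7B")  # same phone on Friday

print(visit1 == visit2)  # True: flagged as a repeat visitor
```

The hash cannot be used to talk to the device over WiFi, but as an identifier it is the Social Security number read backwards: different digits, same person.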
Both dissenting commissioners somehow feel that consumers are quite nicely covered by the ability to go to the web site of a company they have never heard of, and enter all of their device MAC addresses (which they no doubt have memorized), to opt out of the collection of data they do not know is being collected, for purposes that even that company does not know (since it is the retailer that actually makes use of the data). There may be a need to educate some of the folks at the FTC.

If you want to opt out of this one (of many possible) vendors of individual tracking devices, you can do so at http://www.nomi.com/homepage/privacy/. Good luck.