Your TV might be binge-watching you!

VIZIO is reportedly paying fines for using customers' TVs to track their viewing patterns in significant detail, and for associating that data, via IP address, with demographic information including age, sex, income, marital status, household size, education level, home ownership, and home value.

Presumably this might have been avoided if VIZIO had presented users with a “privacy statement” or “terms of use” when they installed their TVs. But failing to obtain even the appearance of consent put the company in this situation.

It has long been clear that all “free” media (and many paid channels), whether TV, cable, radio, or Internet streaming, want to track this information. On one hand, they can use it to provide “a better user experience” (showing you the ads and suggested programs that match your demographics); on the other, selling your data to third parties (a.k.a. ‘trusted business partners’) so they can interact with you more effectively is also part of the game.

Now let’s step it up a notch. Your TV (or its remote control) may use voice recognition, often relying on “mother ship” resources for the AI analysis of what you have requested. That is, your voice is sent back to servers that interpret it and respond. This leads to another level of monitoring: some of your characteristics might be inferred from your voice, others from background sounds or voices, and even more if the recording device just happens to listen all the time. “Siri, are you listening in again?” Then add a camera, and the fun can really start.

To GO or Not to GO?

Pokemon Go has become a delightful and disturbing experiment in the social impact of technology. This new “free” software for smartphones implements an augmented reality, overlaying the popular game on the real world. Fans wander streets, byways, and public (and in some cases private) spaces, following the elusive characters on their smartphones to capture them “in world” or to collect virtual items. The uptake has been amazing, approaching Twitter in terms of user-hours within days of introduction. It has also added $12 billion to Nintendo’s stock value (almost doubling it).

Let’s start with “free”, and $12 billion. The trick is having a no-holds-barred privacy policy. Not surprisingly, the game knows who you are and where you are. It can also access and use your camera, storage, email/phone contacts, and potentially your full Google account (email contents, Drive contents, etc.). The money comes because all of this is for sale, in real time. (“While you track Pokemon, Pokemon Go tracks you”, USA Today, 12 July 16.) Minimally you can expect to see “Lure Modules” (a game component) used to bring well-vetted (via browser history, email, call history, disk content, etc.) customers into stores that then combine ad promotions with in-store characters. Perhaps offering your favorite flavor of ice cream, or drawing you into a lawyer’s office that specializes in the issues you have been discussing on email, or a medical office that … well, you get the picture, and those are just the legitimate businesses. Your emails from your bank may encourage less honest folks to lure you into a back alley near an ATM … a genre of crime that has only been rumored so far.

The July 13th issue of USA Today outlines an additional set of considerations. Users are being warned by police, property owners, and various web sites, each for their own reasons. The potential for wandering into traffic while pursuing an elusive virtual target is non-trivial, as is hitting a sidewalk obstruction or going over the edge of a cliff (is there a murder plot hiding in here?). Needless to say, playing while driving creates a desperate need for self-driving cars. Since the targets change with time of day, folks are out at all hours, in all places, doing suspicious things. This triggers calls to police. Some memorial sites, such as Auschwitz and the Washington, DC Holocaust Memorial Museum, have asked to be excluded from the play map. There are clearly educational opportunities that could be built into the game; tracing Boston’s Freedom Trail while requiring player engagement with related topics is one possible example. However, lacking explicit consideration of the educational context, there are areas where gaming is inappropriate. Also, some public areas are closed after dark, and the game may result in players trespassing in ways not envisioned by its creators, which may create unhealthy interactions with the owners and residents of the area.

One USA Today article surfaces a concern that very likely was missed by Nintendo, and is exacerbated by the recent deaths of black men in US cities and the shooting of police in Dallas: “For the most part, Pokemon is all fun and games. Yet for many African Americans, especially men, their enjoyment is undercut by fears they may raise suspicion with potentially lethal consequences.” Change the communities involved and similar concerns may emerge in other countries as well. This particular piece ends with an instance of a black youth approaching a policeman who was also playing the game, a positive moment of interaction as they helped each other pursue in-game objectives.

It is said every technology cuts both ways. We can hope that experience and consideration will lead both players and Nintendo to develop the positive potential of augmented reality, perhaps with a bit more respect for user privacy.

It’s 10 PM. Do you know what your model is doing?

“Customers like you have also …” This concept appears explicitly or implicitly at many points in the web-of-our-lives, a.k.a. the Internet. Specific corporations, and aggregate operations, are building increasingly sophisticated models of individuals. Not just “like you”, but “you”! Prof. Pedro Domingos of the University of Washington, in his book “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World”, suggests this model of you may become a key factor in your ‘public’ interactions.

Examples include LinkedIn adding a “find me a job” button that would have your model interview for relevant open positions and provide you a list of the best matches. Or perhaps locating a house, a car, a spouse … well, maybe some things are better done face-to-face.

Apparently an Asian firm, “Deep Knowledge”, has appointed a virtual director to its board. In this case it is a construct designed to detect trends that the human directors might miss. However, one suspects that Apple might want a model of Steve Jobs around for occasional consultation, if not back in control again.

Predictive Analytics – Rhinos, Elephants, Donkeys and Minority Report

The IEEE Computer Society published “Saving Rhinos with Predictive Analytics” in both IEEE Intelligent Systems and the more widely distributed ‘Computing Edge’ (a compendium of interesting papers taken from 13 of the society’s publications, provided to members and technologists at no cost). The article describes how data-based analysis of both rhino and poacher activity, in concert with AI algorithms, can focus enforcement activities in terms of timing and location, and hopefully save rhinos.

For those outside of the U.S.: the largest populations of elephants (Republicans) and donkeys (Democrats) are in the U.S., these animals being symbols of the respective political parties. Now, on the brink of the 2016 presidential primaries, these critters are being aggressively hunted … ok, actually sought after for their votes. Not surprisingly, the same tools are used to locate, identify, and predict the behavior of these persons. When I was young (1964) I read a book called The 480, which described the capabilities of that timeframe for computer-based political analysis and targeting of the “groups” required to win an election. (480 was the number of groupings of the 68 million voters of 1960, used to identify which groups a candidate needed to attract to win.) Twenty-first-century analytics are a bit more sophisticated, with as many as 235 million groups, or one per potential voter (of whom over 130 million are likely to vote). A recent kerfuffle between the Sanders and Clinton campaigns over “ownership/access” to voter records stored on a computer system operated by the Democratic National Committee reflects the importance of this data. By cross-connecting (data mining) registered voter information with external sources such as web searches, credit card purchases, etc., the candidates can mine this data for cash (donations) and later votes. A few percentage points' change in delivering voters to the polls (both figuratively and by providing rides where needed) in key states can affect the outcome, so knowing each individual is a significant benefit.
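To make the one-score-per-person idea concrete, here is a minimal sketch of how a campaign analyst might assign each voter an individual probability. All of the data and feature names below are invented for illustration; real campaigns merge voter files with commercial data, but the mechanics of producing one probability per voter look roughly like this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-voter features (age, income bracket, past turnout,
# donation flag): stand-ins for the web-search/credit-card signals above.
X = rng.normal(size=(10_000, 4))
true_weights = np.array([0.8, 0.3, 1.2, 0.5])
y = (X @ true_weights + rng.normal(size=10_000)) > 0  # synthetic labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# One probability per voter: the "235 million groups" idea in miniature.
scores = model.predict_proba(X)[:, 1]
print(scores[:5])
```

Sorting such scores tells the campaign exactly which doors to knock on, and which voters are worth the cost of a ride to the polls.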

Predictive analytics is saving rhinos and affecting the leadership of superpowers. But wait, there’s more. Remember the movie “Minority Report” (2002)? On the surface, the movie presented computer technology apparently able to predict future crimes by specific individuals, who were then arrested to prevent those crimes. (Spoiler alert) the movie actually proposes that a group of psychics was the real source of insight. This was consistent with the original Philip K. Dick story (1956), written prior to The 480 and the emergence of the computer as a key predictive device. Here’s the catch: we don’t need the psychics, just the data and the computers. Just as a specific individual’s vote for a specific candidate, or a specific rhino’s risk of being poached in a specific territory, can be assigned a specific probability, we are reaching the point where aspects of the ‘Minority Report’ predictions can be realized.

Oddly, in the U.S., governmental collection and use of this level of Big Data is difficult due to privacy illusions, and probably to bureaucratic stovepipes and fiefdoms. These problems do not exist in the private sector, where widespread data collection on everybody at every opportunity is the norm, and the only limitation on sharing is determining the price. The result is that your bank or insurance company is more likely than the government to be able to predict your likelihood of being a criminal, a terrorist, or even a victim of a crime. Big Data superpowers like Google, Amazon, Facebook, and Acxiom have even more at their virtual fingertips.

Let’s assume that sufficient data can be obtained, and robust AI techniques applied, to identify a specific individual with a high probability of a problematic event: initiating, or being the victim of, a crime in the next week. And assume this data is implicitly, or even explicitly, in the hands of some corporate entity. Now what? What actions should said corporation take? What probability is needed to trigger such actions? What liability exists (or should exist) for failure to take such actions?

These are issues that the elephants and donkeys will need to consider over the next few years; we can’t expect the rhinos to do the work for us. We technologists may also have a significant part to play.

FTC, Nomi, and opting out

The U.S. Federal Trade Commission (FTC) settled charges with Nomi Technologies over its opt-out policy on April 23rd. Nomi’s business is putting devices in retail stores that track MAC addresses. A unique MAC address is associated with every device that can use WiFi; it is the key to communicating with your device (cell phone, tablet, laptop, etc.) as opposed to someone else’s device. Nomi apparently applies a hash ‘encryption’ to this address (the result is still unique, just not usable for WiFi communications) and tracks your presence near or in participating retail stores worldwide.
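Nomi’s exact scheme is not public, but a minimal sketch can show why a hashed MAC address is still a unique identifier. The salt value and function name below are invented for illustration; the point is that the digest is stable across visits even though it can no longer be used to talk to the device.

```python
import hashlib

def pseudonymize(mac: str, salt: str = "vendor-secret") -> str:
    """Return a digest that comes out the same every time this MAC is seen."""
    return hashlib.sha256((salt + mac.lower()).encode()).hexdigest()

visit_1 = pseudonymize("AA:BB:CC:DD:EE:FF")  # Monday, store A
visit_2 = pseudonymize("AA:BB:CC:DD:EE:FF")  # Friday, store B
print(visit_1 == visit_2)  # True: the "anonymous" ID still links the visits
```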

The question the FTC was addressing is whether Nomi adheres to its privacy policy, which indicates you can opt out in store, and that you would know which stores are using the technology. Nomi’s privacy policy (as of April 24) indicates they will never collect any personally identifiable information without a consumer’s explicit opt-in. Of course, since you do not know where they are active, nor that they even exist, it would appear that they have no consumers opting in.

Read that again closely: “personally identifiable information”. It is a MAC address, not your name, and at least one dissenting FTC commissioner asserted that “It is important to note that, as a third party contractor collecting no personally identifiable information, Nomi had no obligation to offer consumers an opt out.” In other words, as long as Nomi is not selling something to the public, they should have a no-holds-barred ability to use your private data any way they like. The second dissenting commissioner asserts “Nomi does not track individual consumers – that is, Nomi’s technology records whether individuals are unique or repeat visitors, but it does not identify them.” Somehow this commissioner assumes that the unique hash code of a MAC address, which can be used to tell whether a visitor is a repeat, is less of an individual identifier than the initial MAC address (which he notes is not stored). This is like saying your social security number backwards (a simplistic hash) is not an identifier, whereas the number in normal order is. Clearly the data is a unique identifier, and it is stored.

Nomi offers the service (according to their web site) to “increase customer engagement by delivering highly relevant mobile campaigns in real time through your mobile app”. So the data the store (at its option) chooses to collect from customers (presumably when they opt in by downloading an app) is the point where your name, address, and credit card information get tied to the hashed MAC address. Both dissenting commissioners somehow feel that consumers are quite nicely covered by the ability to go to the web site of a company they have never heard of, and enter all of their device MAC addresses (which they no doubt have memorized), to opt out of the collection of data they do not know is being collected, for purposes that even that company does not know (since it is the retailer that actually makes use of the data). There may be a need to educate some of the folks at the FTC.

If you want to opt out of this one (of many possible) vendors of individual tracking devices, you can do so at http://www.nomi.com/homepage/privacy/. Good luck.

 

Police Cameras

My daughter is attending a citizen police academy. They discussed the challenges that police cameras (body, squad car, interview rooms, traffic monitoring, etc.) present; these relate, in part, to the objectives of having such cameras.

1) When an officer is apprehending a suspect, a video of the sequence covers a topic that is very likely to be raised in court (in the U.S., where fairly specific procedures need to be followed during an arrest). Evidence related to this has to follow very specific rules to be admissible. An example of this concept is in the Fort Collins, Colorado police FAQ, where they provide some specifics. This process requires managed documentation trails, maintained by qualified experts, to assure the evidence can be used. There are real expenses here beyond just having a camera and streaming or transferring the sequences to the web; web storage has been created that is designed to facilitate this management challenge. Note that even if the prosecution does not wish to use this material, the defense may do so, and if it has not been managed correctly, may seek to have the charges dismissed. (For cultures where defendants are not presumed innocent until proven guilty, and/or there is no body of case law or statutory defendants’ rights, this may sound odd, but in the U.S. it is possible for a blatantly guilty perpetrator to have charges against him dropped due to a failure to respect his rights.)

2) There are situations where a police officer is suspected of criminal actions. For real-time situations (like those in the news recently), the same defendants’ rights need to be respected for the officer(s) involved. Again, close management is needed.

Note that in these cases there are clear criminal activities that the police suspect at the time the video is captured, and managing the ‘trail of evidence’ is a well-defined activity with a cost and benefit that is not present without the cameras.

The vast majority of recorded data does not require the chain-of-evidence treatment. If a proper request for specific data not associated with an arrest produces data that is used in court, it is most likely to be used by a defendant, and the prosecutor is unlikely to challenge the validity of the data, since doing so would undercut their own system.
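What might “managing the trail of evidence” look like in software? Here is a minimal sketch, not any agency’s actual system, of tamper-evident logging: each log entry commits to the video file’s digest and to the previous entry, so a later edit to either the footage or the log is detectable. All names here are illustrative.

```python
import hashlib, json, time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

log = []  # in practice this would live in write-once, audited storage

def record_evidence(video_bytes: bytes, officer: str) -> None:
    """Append a log entry chained to the previous one (hash chain)."""
    prev = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "time": time.time(),
        "officer": officer,
        "video_digest": sha256(video_bytes),
        "prev": prev,
    }
    entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
    log.append(entry)

record_evidence(b"...camera footage...", officer="Badge 4711")
# Re-hashing the stored video and walking the chain verifies that neither
# the footage nor the log has been altered since capture.
```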

Of course there are other potential uses of the data. It might contain information relevant to a divorce action (a couple in a car stopped for a ticket: one spouse wants to know why the other person was in the car), or the images of bystanders at a scene might impact the apparent privacy of such persons (although in general no right of privacy is recognized in the U.S. for persons in public).

The Seattle police are putting some video on YouTube, after applying automated redaction software to protect the privacy of individuals captured in the frame. Just the presence of the video cameras can reduce both use of force and citizen complaints.

There are clearly situations where the police, or the citizens involved, or both would find a video recording to be of value even if it did not meet evidentiary rules. Of course, the concern related to such rules is the potential for inappropriate editing of the video, transforming it from an “objective” witness into one biased in one direction or another.

We have the technology; should we use it? An opinion piece by Jay Stanley in SSIT’s Technology and Society journal outlines some of these issues in more detail.

Emoti-Cons

I’m not talking about little smiley faces :^( … but about how automation can evaluate your emotions and, as is the trend of this blog, how that information may be abused.

Your image is rather public. From your Facebook page, to the pictures posted from that wedding you attended, to the myriad cameras capturing data in every store, street corner, ATM, etc. And, as you (should) know, facial recognition is already there to connect your name to that face. Your image can also be used to evaluate your emotions, automatically, with tools described in a recent Wall St. Journal article (“The Technology That Unmasks Your Hidden Emotions”). These tools can be used in real time as well as for the evaluation of static images.

So, wandering through the store, it may be that those cameras are not just picking up shoplifters, but lifting shopper responses to displays, products, and other aspects of the store. Having identified you (via facial recognition, or the RFID constellation you carry), the store can correlate your personal response to specific items. The next email you get may promote something you liked when you were at the store, or present a well-researched, near-real-time evaluation of what ‘persons like you’ seem to like.
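As a rough illustration of the pipeline such a store might run, here is a minimal sketch. The face detection uses OpenCV’s real Haar-cascade API, but the emotion classifier (`emotion_model`) and the shopper identification are hypothetical stand-ins for the commercial tools the article describes.

```python
import cv2

# Real OpenCV face detector shipped with the opencv-python package.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def score_frame(frame, emotion_model, shopper_id):
    """Detect faces in one camera frame and attach an emotion label
    to an already-recognized shopper."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        label = emotion_model.predict(face)  # hypothetical classifier
        yield shopper_id, label
```

Correlating the yielded (shopper, emotion) pairs with whichever display the camera faces is all it takes to build the per-item response log described above.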

The same type of analysis can be used in analyzing and responding to your reactions in a political context: candidate preferences, messages that seem to be effective. Note, this is no longer the ‘applause-meter’ model of deciding how the audience responds, but personalized to you, as a face-recognized person observing the event. With cameras on political posters and billboards capturing images through front windshields, it may be possible to collect this data on a very wide basis, not just for those who choose to attend an event.

Another use of real-time emotional tracking could play out in situations such as interviews, interrogations, sales showrooms, etc. The person conducting the session may get feedback from automated analysis that informs the direction in which they lead the interaction. The result might be a job offer, an arrest warrant, or a focused sales pitch, depending on the case.

The body language of lying is also being decoded. Presumably a next step here is automated analysis of your interactions. For those of us who never, ever lie, that may not be a problem. And of course, being a resident of New Hampshire, where the 2016 presidential season has officially opened, it would be nice to have some of these tools in the hands of the citizens as we seek to narrow down the field of candidates.

 

Eavesdropping Barbie?

Should children have toys that combine speech recognition and a Wi-Fi connection to capture and respond to what they say, potentially recording their conversations as well as feeding them “messages”? Welcome to the world of Hello Barbie.

Perhaps I spend too much time thinking about technology abuse … but let’s see. There are political/legal environments (think 1984 and its current variants) where capturing voice data from a doll/toy/IoT device could be used as a basis for arrest and jail (or worse). Can Barbie be called as a witness in court? And of course there are the “right things to say” to a child, like “I like you” (dolls with pull strings do that), and things you may not want your doll telling your child (“You know, I just love that new outfit”, or “Wouldn’t I look good in that new Barbie car?”), or worse (“Your parents aren’t going to vote for that creep, are they?”).

What does a Hello Barbie doll do when a child is clearly being abused by a parent? Can it contact 9-1-1? Are the recordings available for prosecution? What level of abuse warrants action? And what liability exists for failure to report abuse?

Update: Hello Barbie is covered in the NY Times 29 March 2015 Sunday Business section, wherein it is noted that children under 13 have to get parental permission to enable the conversation system (assuming they understand the implications). Apparently children need to “press a microphone button on the app” to start an interaction. Also, “parents … have access to … recorded conversations and can … delete them,” which confirms that a permanent record is being kept until parental action triggers deletion. Finally, we are assured of “safeguards to ensure that stored data is secure and can’t be accessed by unauthorized users.” Apparently Mattel and ToyTalk (the technology providers) have better software engineers than Home Depot, Target, and Anthem.

Who is Driving My Car (revisited)

Apparently my auto insurance company has not been reading my recent blog entries. They introduced a device, “In-Drive”, that will monitor my driving habits and provide a discount (or increase) in my insurance rates.

There are a few small problems. The device connects to the diagnostic port of the car, allowing it, or a hacker, to take control of the car (brakes, acceleration, etc.; see the prior blog entry). It is connected to the mother ship (ET phones home), and that channel can be used both ways, so the hacker who takes over my car can be anywhere in the world. I can think of three scenarios where this is actually feasible:

  1. Someone wants to kill the driver (very focused, difficult to detect).
  2. Blackmail: bad guys decide to crash a couple of cars, or threaten to, and demand payment to avoid mayhem. (What would the insurance company CEO say to such a demand? Don’t they have insurance for this?)
  3. Terrorism: while many cyber attacks do not yield the requisite “blood on the front page” impact that terrorists seek, this path can. Imagine ten thousand cars all accelerating and losing their brakes at the same time … it would probably get the desired coverage.

As previously mentioned, proper software engineering (now a licensable profession in the U.S.) could minimize this security risk.

Then there is privacy. The insurance company’s privacy policy does not allow them to collect the data that their web page claims this device will collect, so clearly privacy is an afterthought in this case. What data is collected is unclear: they have a statement about the type of data collected and then, a few FAQs later, a contradictory indication that the location data is only accurate within a forty-square-mile area, except maybe when it is more accurate. What is stored, for what period of time, accessible to which interested parties (say, a divorce lawyer), and with what protections is unclear.

A different insurance company, Anthem, recently suffered a major attack that compromised identity information (at least) for a large number of persons. I’m just a bit skeptical that my auto insurance company has analyzed that situation and upgraded its systems to avoid similar breaches and loss of data. For those wondering what types of privacy policies might make sense, I encourage you to view the OECD privacy principles and examples. Organizations that are actually concerned with privacy would cover all of these bases, at least in their privacy statements. (Of course they can do this and still have highly objectionable policies, or change their policies without notice.)
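For what it’s worth, the “forty square miles” claim is easy to picture in code. Here is a minimal sketch (my reading of the claim, not the insurer’s actual method) of snapping GPS fixes to a grid whose cells cover roughly forty square miles: a cell about 6.3 miles on a side is roughly 0.09 degrees of latitude.

```python
# Coarsen a GPS fix to a ~40 square mile grid cell. 0.09 degrees of
# latitude is about 6.2 miles; longitude degrees shrink with latitude,
# which this simple version ignores.
def coarsen(lat: float, lon: float, cell_deg: float = 0.09):
    """Snap a coordinate to its grid cell, discarding finer detail."""
    snap = lambda v: round(v / cell_deg) * cell_deg
    return round(snap(lat), 4), round(snap(lon), 4)

print(coarsen(42.9956, -71.4548))  # -> (43.02, -71.46)
```

Whether the insurer actually discards the fine-grained fixes, or merely reports coarse values while storing precise ones, is exactly the kind of question the policy leaves open.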

Your DNA into Your Picture

A recent Wall St. Journal interview with J. Craig Venter indicates his company is currently working on translating DNA data into a ‘photo of you’, or into the sound of your voice. The logic, of course, is that genetics (including epigenetic elements) supplies the parts list, assembly instructions, and many of the finishing details for building an individual. So it may not come as a surprise that a DNA sample can identify you as an individual (even as distinct from your identical twin, considering mutations and epigenetic variations), or perhaps even be used to create a clone. But having a sample of your DNA translated into a picture of your face (presumably at different ages), or into an imitation of your voice, is not something that had been in my genomic awareness.

The DNA sample from the crime scene may do more than identify the perp; it may be the basis for generating a ‘police sketch’ of her face.

The movie Gattaca projected a society where genetic evaluation was a make-or-break factor in selecting a mate, getting a job, and other social decisions. But it did not venture into the possibility of not just evaluating the genetic desirability of a mate, but projecting his or her picture some years into the future. “Will you still need me … when I’m sixty-four?”

The interview also considers some of the ethical issues surrounding insurance, medical treatment, and extended life spans. What other non-obvious applications can you see arising from analysis of the genomes and data of a few million persons?