Your TV might be binge-watching you!

VIZIO is reportedly paying fines for using its customers' TVs to track their viewing patterns in significant detail, and for associating this viewing data, via IP address, with demographic data including age, sex, income, marital status, household size, education level, home ownership, and home value.

Presumably this could have been avoided had VIZIO presented users with a “privacy statement” or “terms of use” when they installed their TVs. But the failure to obtain even the appearance of consent put the company in this situation.

It has long been clear that essentially all “free” media (and many paid channels), whether TV, cable, radio, or Internet streaming, want to track this information. On one hand, they can use it to provide “a better user experience” (showing you the ads and suggested programs that match your demographics). The flip side is also true: selling your data to third parties (a.k.a. ‘trusted business partners’) so they can interact with you more effectively is part of the game.

Now let's step it up a notch. Your TV (or its remote control) may use voice recognition, often relying on “mother ship” resources for the AI analysis of what you have requested. That is, your voice is sent back to servers that interpret and respond. This leads to another level of monitoring: some of your characteristics might be inferred from your voice, others from background sounds or voices, and even more if the recording device just happens to track you all the time. “Siri, are you listening in again?” Then add a camera, and the fun can really start.
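
To make that pipeline concrete, here is a minimal sketch of the record-and-upload pattern such devices use. The endpoint URL and response format are invented stand-ins for a vendor's servers; real assistants do the same thing over proprietary protocols.

```python
# Minimal sketch of the "voice to the mother ship" pattern.
# Requires: pip install sounddevice requests
import sounddevice as sd
import requests

FS = 16000       # 16 kHz sample rate, typical for speech
SECONDS = 5

# Record raw microphone audio: everything in range, not just the "command"
audio = sd.rec(int(SECONDS * FS), samplerate=FS, channels=1, dtype="int16")
sd.wait()

# Ship the raw samples off-device for interpretation (hypothetical URL)
resp = requests.post(
    "https://voice.example.com/interpret",   # stand-in for a vendor server
    data=audio.tobytes(),
    headers={"Content-Type": "application/octet-stream"},
)
print(resp.json().get("transcript"))         # assumed JSON reply
```

The point of the sketch is that nothing on the device decides what is "relevant" before upload; whatever the microphone heard goes to the server.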

Big Brother/Data 2016

The power of big data, AI/analytics, and subtle data collection is converging toward a future only hinted at in Orwell's 1984. With rapid developments on many fronts, it is not surprising that those of us who are only moderately paranoid have not been tracking it all. So here is an update on some of the recent information on who is watching you and why:

Facebook (no surprise here) has been running personality quizzes that evaluate your OCEAN score: Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism. These “free” evaluations are provided by Cambridge Analytica. The application of this data to influencing elections is documented by the NY Times (subscription required) and quoted in part by others. The short take is that your Facebook profile (name, etc.) is combined with your personality data and with “onboarding” data from other sources such as age, income, debt, purchases, health concerns, car, gun, and home ownership, and more. Cambridge Analytica is reported to have records with 3,000 to 5,000 data points on each of 230 million adult Americans, which is most of us.

How do they use this data? Psychographic micro-targeted advertising is the most recent application, seeking to influence voting in the U.S. election. Cambridge Analytica only supports Republican candidates, so other parties will have to develop their own doomsday books. There is no requirement that the use of the quizzes be disclosed, nor that the “ads” be identified as political or approved by any candidate. The ads might not appear to have any specific political agenda; they might just point to news (or fake news) stories that play to your specific personality and have been test-marketed to validate the influence they will have on the targeted voter(s). This may inspire you to get out and vote, or to stay home and not bother, depending on which candidate(s) you support (based on your social media streams, or on more generalized characteristics if you have not personally declared your preferences). The impact? Quite possibly the U.S. presidency.
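
To see how mechanical this targeting can be, here is a toy sketch. The OCEAN scores and the ad copy are invented for illustration; the select-by-dominant-trait logic is the basic idea, which real systems extend with test-marketed variants and thousands of extra features.

```python
# Toy sketch of psychographic ad selection, assuming per-user OCEAN scores
# (0-1 scale) already exist. All variant copy below is invented.
AD_VARIANTS = {
    "openness":          "See the issue no one else is talking about...",
    "conscientiousness": "The facts, the record, the plan. Read it yourself.",
    "extroversion":      "Join thousands of your neighbors at the rally!",
    "agreeableness":     "Protect the people you care about.",
    "neuroticism":       "What they aren't telling you could cost you.",
}

def pick_ad(ocean: dict) -> str:
    """Return the ad variant matching the user's dominant trait."""
    dominant = max(ocean, key=ocean.get)
    return AD_VARIANTS[dominant]

voter = {"openness": 0.31, "conscientiousness": 0.55,
         "extroversion": 0.22, "agreeableness": 0.48, "neuroticism": 0.81}
print(pick_ad(voter))   # -> the fear-framed variant, for this profile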

But wait, that’s not all.

The U.K. is expanding its surveillance powers, requiring Internet companies to retain interaction/transaction records for a year, including every web site you have accessed. This is apparently partly in response to assertions by France that similar powers had foiled an ISIS attack there. The range of uses (and abuses) the U.K. government and its allies might apply remains to be seen (or, more likely, will remain hidden).

But consider what China is doing to encourage residents to be “sincere.” [Here is a serious limitation of my linguistic and cultural skills: no doubt there is a Mandarin word being used and translated as “sincere,” and it carries cultural implications that may not be evident in translation.] Data collected to determine your “social credibility rating” includes: tax, loan, bill, and other payments (on time?), adherence to traffic rules, family planning limits, academic record, purchasing, online interactions, the nature of information you post online, volunteer activity, and even “filial piety” (respect for elders/ancestors). And the applications of such data? So far 4.9 million airline tickets have been refused. Your promotion, or even your job opportunities, can be limited, with “sensitive” jobs (judges, teachers, accountants, etc.) subject to review. A high score will open doors, possibly including faster access to government services. By letting citizens see their scores, the state can encourage them to ‘behave themselves better.’ By not disclosing all of the data collected, nor all of its implications, the state can bully citizens into far greater sincerity than they might adopt if they were just trying not to break the law.

Your comments, thoughts, and responses are encouraged, but remember: they are being recorded by others for reasons you may never know. … Sincerely yours, Jim

Is RFID Getting Under Your Skin?

Technology & Society has touched on this a few times: RFID implants in people. The WSJ has an update worth noting. My new car uses RFID chips to open the doors and start the ignition. Having these “embedded” could be of value, but what if I buy a different car? The article lists electronic locks as one application, along with embedded medical history, contact information, etc. Your “RFID constellation” (credit cards, ID cards, keys, etc.) can identify you uniquely, for example as you enter a store. So the ‘relationship’ between your RFID and the intended devices goes beyond that one-to-one application.
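
A small sketch of the “constellation” idea: the tag IDs below are invented, but the point is that the set of tags you carry, hashed together, is a stable identifier even when no single tag names you.

```python
# Sketch of "constellation" fingerprinting: no single tag names you, but the
# set of tag IDs you carry is effectively unique. Tag IDs here are invented.
import hashlib

def constellation_fingerprint(tag_ids: set[str]) -> str:
    """Hash the sorted set of observed tag IDs into one stable identifier."""
    blob = "|".join(sorted(tag_ids)).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

# The same shopper read at the door on two different days:
monday = {"E200-3412", "E200-9A01", "04:A3:1F:22"}
friday = {"E200-3412", "E200-9A01", "04:A3:1F:22"}
print(constellation_fingerprint(monday) == constellation_fingerprint(friday))  # True
```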

An ethical issue raised was that of consent associated with embedding RFID in a person who may not be able to provide consent but would benefit from the ID potential, lock access (or denial), etc. An obvious example is tracking a dementia patient who leaves a care facility. Of course, we already use wrist bands that are difficult to remove, and these might contain RFID or other locating devices.

What applications might cause you to embed a device under your skin? What concerns do you have about possible problems/issues?

Predictive Analytics – Rhinos, Elephants, Donkeys and Minority Report

The IEEE Computer Society published “Saving Rhinos with Predictive Analytics” in both IEEE Intelligent Systems and the more widely distributed ‘Computing Edge’ (a compendium of interesting papers taken from 13 of the CS publications and provided to members and technologists at no cost). The article describes how data-based analysis of both rhino and poacher activity, in concert with AI algorithms, can focus enforcement activities in terms of timing and location, and hopefully save rhinos.

For those outside the U.S.: the largest populations of elephants (Republicans) and donkeys (Democrats) are in the U.S., these animals being symbols of the respective political parties. Now, on the brink of the 2016 presidential primaries, these critters are being aggressively hunted (ok, actually sought after for their votes). Not surprisingly, the same tools are used to locate, identify, and predict the behavior of these persons. When I was young (1964) I read a book called The 480, which described the capabilities of that timeframe for computer-based political analysis and targeting of the “groups” required to win an election. (480 was the number of groupings of the 68 million voters in 1960 used to identify which groups a candidate needed to attract to win.) Twenty-first-century analytics are a bit more sophisticated, with as many as 235 million groups, or one per potential voter (and over 130 million voters likely to vote). A recent kerfuffle between the Sanders and Clinton campaigns over “ownership/access” to voter records stored on a computer system operated by the Democratic National Committee reflects the importance of this data. By cross-connecting (data mining) registered voter information with external sources such as web searches, credit card purchases, etc., the candidates can mine this data for cash (donations) and later for votes. A few percentage points of change in delivering voters to the polls (both figuratively and by providing rides where needed) in key states can affect the outcome, so knowing each individual is a significant benefit.
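
For a flavor of how per-individual scoring works, here is a minimal sketch using a logistic model over a handful of invented features. Real campaign models do the same thing with thousands of inputs drawn from the sources named above.

```python
# Minimal sketch of per-voter turnout scoring. Features and data are invented.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, years registered, donated before (0/1), past midterm votes
X_train = np.array([[62, 30, 1, 4], [23, 1, 0, 0], [45, 12, 0, 2],
                    [71, 40, 1, 5], [34, 6, 0, 1], [29, 2, 1, 0]])
y_train = np.array([1, 0, 1, 1, 0, 0])   # 1 = voted in the last election

model = LogisticRegression().fit(X_train, y_train)

# Score a specific individual, not a demographic bucket
new_voter = np.array([[58, 22, 0, 3]])
print(f"P(votes) = {model.predict_proba(new_voter)[0, 1]:.2f}")
```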

Predictive analytics is saving rhinos and affecting the leadership of superpowers. But wait, there's more. Remember the movie “Minority Report” (2002)? On the surface, the movie presented computer technology apparently able to predict future crimes by specific individuals, who were arrested to prevent those crimes. (Spoiler alert) The movie actually proposes that a group of psychics was the real source of insight. This was consistent with the original Philip K. Dick story from 1956, which predates The 480 and the emergence of the computer as a key predictive device. Here's the catch: we don't need the psychics, just the data and the computers. Just as a specific probability can be assigned to a specific individual voting for a specific candidate, or to a specific rhino being poached in a specific territory, we are reaching the point where aspects of the ‘Minority Report’ predictions can be realized.

Oddly, in the U.S., governmental collection and use of this level of big data is difficult due to privacy illusions and, probably, bureaucratic stovepipes and fiefdoms. These problems do not exist in the private sector, where widespread data collection on everybody at every opportunity is the norm and the only limitation on sharing is determining the price. The result is that your bank or insurance company is more likely than the government to be able to predict your likelihood of being a criminal, a terrorist, or even a victim of a crime. Big-data superpowers like Google, Amazon, Facebook, and Acxiom have even more at their virtual fingertips.

Let's assume that sufficient data can be obtained, and robust AI techniques applied, to identify a specific individual with a high probability of a problematic event (initiating, or being the victim of, a crime in the next week), and that this data is implicit or even explicit in the hands of some corporate entity. Now what? What actions should said corporation take? What probability is needed to trigger such actions? What liability exists (or should exist) for failure to take such actions?

These are issues that the elephants and donkeys will need to consider over the next few years; we can't expect the rhinos to do the work for us. We technologists may also have a significant part to play.

FTC, Nomi and opting out

The U.S. Federal Trade Commission (FTC) settled charges with Nomi Technologies over its opt-out policy on April 23rd. Nomi's business is placing devices in retail stores that track MAC addresses. A unique MAC address is associated with every device that can use WiFi; it is the key to communicating with your device (cell phone, tablet, laptop, etc.) as opposed to someone else's device. Nomi apparently performs a hash ‘encryption’ on this address (the result is still unique, just not usable for WiFi communications) and tracks your presence near or in participating retail stores worldwide.

The question the FTC was addressing is whether Nomi adheres to its privacy policy, which indicates you can opt out in store and would know which stores are using the technology. Nomi's privacy policy (as of April 24) indicates they will never collect any personally identifiable information without a consumer's explicit opt-in. Of course, since you do not know where they are active, nor that they even exist, it would appear that they have no consumers opting in. Read that again closely: “personally identifiable information.” It is a MAC address, not your name, and at least one dissenting FTC commissioner asserted that “It is important to note that, as a third party contractor collecting no personally identifiable information, Nomi had no obligation to offer consumers an opt out.” In other words, as long as Nomi is not selling something to the public, it should have a no-holds-barred ability to use your private data any way it likes.

The second dissenting commissioner asserts that “Nomi does not track individual consumers – that is, Nomi’s technology records whether individuals are unique or repeat visitors, but it does not identify them.” Somehow this commissioner assumes that the unique hash code of a MAC address, which can be used to determine whether a visitor is a repeat, is less of an individual identifier than the initial MAC address (which he notes is not stored). This is rather like saying your social security number backwards (a simplistic hash) is not an identifier whereas the number in normal order is. Clearly the data is a unique identifier, and it is stored.

Nomi offers the service (according to their web site) to “increase customer engagement by delivering highly relevant mobile campaigns in real time through your mobile app.” So the data the store (at its option) chooses to collect from customers (presumably opting in by downloading an app) is the point where your name, address, and credit card information are tied to the hashed MAC address. Both dissenting commissioners somehow feel that consumers are quite nicely covered by the ability to go to the web site of a company they have never heard of and enter all of their device MAC addresses (which they no doubt have memorized) to opt out of collection of data they do not know is being collected, for purposes that even that company does not know (since it is the retailer that actually makes use of the data). There may be a need to educate some of the folks at the FTC.
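
For the skeptical, a short sketch of why a hashed MAC address is still a unique, persistent identifier. The MAC below is invented, and the hash function is assumed (Nomi has not published its exact method), but any deterministic hash behaves this way.

```python
# Sketch of why a hashed MAC is still a persistent identifier: the hash is
# deterministic, so the same phone maps to the same token on every visit.
import hashlib

def track_token(mac: str) -> str:
    """One-way hash of a MAC address: not reversible, still unique."""
    return hashlib.sha256(mac.lower().encode()).hexdigest()

visit_monday = track_token("A4:5E:60:C2:11:9B")
visit_friday = track_token("A4:5E:60:C2:11:9B")
print(visit_monday == visit_friday)   # True: a repeat visitor is recognizable
```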

If you want to opt out for this one of many possible vendors of individual tracking devices, you can do so at http://www.nomi.com/homepage/privacy/. Good luck.


Police Cameras

My daughter is attending a citizen police academy. They discussed the challenges that police cameras (body, squad car, interview room, traffic monitoring, etc.) present, and these relate, in part, to the objectives of having such cameras.

1) When an officer is apprehending a suspect, a video of the sequence covers a topic that is very likely to be raised in court (in the U.S., where fairly specific procedures need to be followed during an arrest). Evidence related to this has to follow very specific rules to be admissible. An example of this concept is in the Fort Collins, Colorado police FAQ, where they provide some specifics. This process requires managed documentation trails maintained by qualified experts to assure the evidence can be used. There are real expenses here beyond just having a camera and streaming or transferring the sequences to the web. Web storage has been created that is designed to facilitate this management challenge. Note that even if the prosecution does not wish to use this material, the defense may do so, and if it is not managed correctly, may seek to have the charges dismissed. (For cultures where defendants are not innocent until proven guilty and/or there is no body of case or statutory defendants' rights, this may sound odd, but in the U.S. it is possible for a blatantly guilty perpetrator to have the charges against him dropped due to a failure to respect his rights.)

2) There are situations where a police officer is suspected of criminal actions. For real-time situations (like those in the news recently), the same defendants' rights need to be respected for the officer(s) involved. Again, close management is needed.

Note that in these cases there are clear criminal activities that the police suspect at the time the video is captured, and managing the ‘trail of evidence’ is a well-defined activity with a cost and benefit that is not present without the cameras.
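
The post does not describe any department's actual system, but one common building block for tamper-evident evidence logs is a hash chain, sketched here: each entry incorporates the hash of the previous entry, so altering any step breaks every later one.

```python
# Sketch of a hash-chained audit log (an assumption for illustration, not a
# description of any department's real system): edits break the chain.
import hashlib, json, time

def append_entry(log: list, event: str, file_sha256: str) -> None:
    """Add a log entry that commits to the previous entry's hash."""
    prev = log[-1]["entry_hash"] if log else "0" * 64
    entry = {"time": time.time(), "event": event,
             "file_sha256": file_sha256, "prev_hash": prev}
    body = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(body).hexdigest()
    log.append(entry)

log = []
append_entry(log, "camera upload", "ab12...")          # truncated placeholder
append_entry(log, "copied to prosecutor", "ab12...")   # for the video's hash
# Altering any earlier entry changes its hash and invalidates every later one.
```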

The vast majority of recorded data does not require the chain-of-evidence treatment. If a proper request for specific data not associated with an arrest results in data that is used in court, it is most likely to be used by a defendant, and the prosecutor is unlikely to challenge the validity of the data, since doing so would impugn their own system.

Of course there are other potential uses of the data. It might contain information relevant to a divorce action (the couple in the car stopped for the ticket: one spouse wants to know why the other person was in the car), or the images of bystanders at a scene might impact the apparent privacy of those persons (although, in general, no right of privacy is recognized in the U.S. for persons in public).

The Seattle police are putting some video on YouTube after applying automated redaction software to protect the privacy of individuals captured in the frame. Just the presence of video cameras can reduce both use of force and citizen complaints.

There are clearly situations where the police, the citizens involved, or both would find a video recording to be of value, even if it did not meet evidentiary rules. Of course the concern related to such rules is the potential for inappropriate editing of the video to transform it from an “objective” witness into one biased in one direction or another.

We have the technology; should we use it? An opinion piece by Jay Stanley in SSIT's Technology and Society journal outlines some of these issues in more detail.

Emoti Con’s

I'm not talking about little smiley faces :^( … but about how automation can evaluate your emotions and, as is the trend of this blog, how that information may be abused.

Your image is rather public: from your Facebook page, to the pictures posted from that wedding you attended, to the myriad of cameras capturing data in every store, on every street corner, at every ATM, etc. And, as you (should) know, facial recognition is already there to connect your name to that face. Your image can also be used to evaluate your emotions, automatically, with tools described in a recent Wall St. Journal article (“The Technology That Unmasks Your Hidden Emotions”). These tools can be applied in real time as well as to static images.
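
As a measure of how accessible this has become, here is a minimal face-detection sketch (detection being the first step before recognition or emotion scoring) using OpenCV's bundled classifier. The image path is an assumption for illustration.

```python
# Minimal face detection with OpenCV's bundled Haar cascade.
# Requires: pip install opencv-python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("storefront.jpg")           # any image with faces (assumed path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"{len(faces)} face(s) found")
for (x, y, w, h) in faces:                   # draw a box around each face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("storefront_tagged.jpg", img)
```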

So, as you wander through the store, it may be that those cameras are not just picking up shoplifters, but lifting shopper responses to displays, products, and other aspects of the store. Having identified you (via facial recognition, or the RFID constellation you carry), the store can correlate your personal response with specific items. The next email you get may promote something you liked when you were at the store, or reflect a well-researched, near-real-time evaluation of what ‘persons like you’ seem to like.

The same type of analysis can be used to analyze and respond to your reactions in a political context: candidate preferences, messages that seem to be effective. Note that this is no longer the ‘applause meter’ model for deciding how an audience responds; it is personalized to you, as a face-recognized person observing that event. With cameras on political posters and billboards capturing images through front windshields, it may be possible to collect this data on a very wide basis, not just from those who choose to attend an event.

Another use of real-time emotional tracking could play out in situations such as interviews, interrogations, and sales showrooms. The person conducting the session may get feedback from automated analysis that informs the direction in which they lead the interaction. The result might be a job offer, an arrest warrant, or a focused sales pitch.

The body language of lying is also being decoded. Presumably a next step is automated analysis of your interactions. For those of us who never, ever lie, that may not be a problem. And of course, as a resident of New Hampshire, where the 2016 presidential season has officially opened, I would love to have some of these tools in the hands of the citizens as we seek to narrow down the field of candidates.


Your DNA into Your Picture

A recent Wall St. Journal interview with J. Craig Venter indicates his company is currently working on translating DNA data into a ‘photo of you,’ or the sound of your voice. The logic, of course, is that genetics (including epigenetic elements) comprise the parts list, assembly instructions, and many of the finishing details for building an individual. So it may not come as a surprise that a DNA sample can identify you as an individual (even as distinct from your identical twin, considering mutations and epigenetic variations), or perhaps even be used to create a clone. But having a sample of your DNA translated into a picture of your face (presumably at different ages) or an imitation of your voice is not something that had been in my genomic awareness.

The DNA sample from a crime scene may do more than identify the perp; it may be the basis for generating a ‘police sketch’ of her face.

The movie Gattaca projected a society where genetic evaluation was a make-or-break factor in selecting a mate, getting a job, and other social decisions. But it did not venture into the possibility of not just evaluating the genetic desirability of a mate, but perhaps projecting their picture some years into the future. “Will you still need me … when I'm sixty-four?”

The interview also considers some of the ethical issues surrounding insurance, medical treatment and extended life spans … What other non-obvious applications can you see from analyzing the genomes and data of a few million persons?

Genomics, Big Data and Google

Google is offering cloud storage and genomics-specific services for genome databases. It is unclear (to this blogger) what levels of anonymity can be assured with such data. Presumably a full sequencing (perhaps 100 GB of data) is unique to a given person (or to a set of identical twins, since it does not, yet, include epigenetic data), providing a specific personal identifier even if it lacks a name or social security number. Researchers can share data sets with team members, colleagues, or the public. The National Cancer Institute has moved thousands of patient datasets to both Google and Amazon cloud storage.
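
A back-of-envelope calculation (my own assumption, not from Google or the NCI) shows why even a tiny fraction of a genome is identifying: only a few dozen common genetic markers carry enough information to single out one person among everyone on Earth.

```python
# Back-of-envelope: how few genetic markers single out one person worldwide.
import math

population = 7.4e9
bits_needed = math.log2(population)            # ~32.8 bits of identity

# An ideal common SNP genotype (AA / Aa / aa at 50% allele frequency)
# carries about 1.5 bits of entropy:
snp_bits = -(2 * 0.25 * math.log2(0.25) + 0.5 * math.log2(0.5))

print(f"bits needed: {bits_needed:.1f}")
print(f"ideal SNPs:  {math.ceil(bits_needed / snp_bits)}")
# ~22 in the ideal case; real forensic panels use a few dozen markers.
```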

So here are some difficult questions:

If the police have a DNA sample from a “perp” and, searching the public genome records, find a match, or a parent, or …, how does this relate to U.S. (or other jurisdictions') legal rights? Can Google (or the researcher) be forced to identify the related individual?

Who “owns” your DNA dataset? The lab that analyzes it, the researcher, you? And what can these various interests do with that data? In the U.S. there are laws that prohibit discrimination in health insurance based on this data, but not in long-term care insurance, life insurance, or employment decisions.

Presumably for a cost of $1,000 or so I can have any DNA sample sequenced: one taken off a glass from a restaurant, or from some other source that was “left behind.” Now what rights, limits, etc. are implicit in this collection and the resulting dataset? Did you leave a coffee cup at that last staff meeting?

The technology is running well ahead of our understanding of the implications here — it will be interesting.

Privacy Matters

Alessandro Acquisti's TED talk, Why Privacy Matters, lays out some key privacy issues and revealing research into what is possible with online data. In one project, his team was able to determine student identities via face recognition within the few minutes needed to fill out a survey, and potentially locate their Facebook pages using that search. In a second project, they were able to deduce people's social security numbers (a key U.S. personal identifier) from their Facebook page data. This opens the possibility that any image of you can lead both to identifying you and to locating significant private information about you.

There is a parallel discussion sponsored by the IEEE Standards Association on “The Right to be Forgotten.” This was triggered by a recent European court case in which an individual did not want information about his past to be discoverable via search engines. These two concepts collide when an individual seeking to be “forgotten” has their image captured by any of a range of sources (store cameras, friends posting photos, even just being “in the picture” that someone else is taking). If that image can be translated into diverse personal information, then even the efforts of the search engine providers to block the searches will be futile.

Alessandro identifies some tools that can help: the Electronic Frontier Foundation's anonymous Internet portal, and Pretty Good Privacy (PGP), which can deliver a level of encryption that is very expensive to crack, with variations being adopted by Google, Yahoo, and maybe even Apple to protect the content of their devices and email exchanges. There are issues with the PGP model, and perhaps some better approaches. There is also government push-back against overly strong encryption, which is perhaps one of the best endorsements of the capability of such systems.
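
The sketch below is not PGP itself, but it illustrates the same hybrid pattern PGP uses: a fresh symmetric key encrypts the message, and the recipient's public key wraps that symmetric key. It uses the widely available Python cryptography library.

```python
# Hybrid encryption sketch (the pattern PGP uses, not PGP itself).
# Requires: pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Recipient's keypair (in PGP this would live in the keyring)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1. A fresh symmetric "session" key encrypts the message body
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"meet at the usual place")

# 2. The session key is wrapped with the recipient's public key
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Only the private-key holder can unwrap the session key and read the message
plaintext = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
print(plaintext)
```

The hybrid design is why such systems are cheap to use and expensive to crack: the fast symmetric cipher does the bulk work, while the public-key step solves key distribution.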

Behind all this is the real question of how seriously we choose to protect our privacy. It is a concept given greater consideration in Europe than in the U.S., perhaps because Europe's deeper history has proven that abuse by governments or other entities can be horrific, an experience that has not engaged the “Youth of America,” nor discouraged the advertising- and commerce-driven culture that dominates the Internet.

Alessandro observes that an informed public that understands the potential issues is a critical step toward developing the policy, tools, and discipline needed to climb back up this slippery slope.