Privacy and Security

Guest Post from: Marvi Islam

Let me start with privacy and link it to security. Well, all of us know about the privacy settings on Facebook, and we like them so much because we can hide the things we do, and the people we're with, from our family members. But wait, what about security? How is privacy linked to security?

Let's leave the digital platform and move our focus to our daily lives. We need security in our banks, schools, public places, and even in our homes and parks. But have you ever wondered what price we pay for this supposed blanket of security? Privacy. Let me reiterate: security at the price of privacy. Those cute little things we see on the ceilings of our school corridors, the ones we call "CCTV", are installed for our security. But security from what? No one bothers to ask. Maybe the authorities want to tape everything in case something bad happens, so that they can go through the tapes and catch perps red-handed. But they are taping every single thing, and we don't take this as a breach of our privacy?

A number of times these tapes have been misused, causing unpleasant incidents, and yet we accept it. There's a famous proverb in Hindi that translates to this: "You have to sacrifice one thing to get another." Here we sacrifice our privacy to get security. With self-driving cars grabbing all the attention, still more data goes out to keep us connected and, apparently, "secure".

Similarly, some companies check what their employees are up to and what they are doing on their computers while at work. From the company's perspective, this is to avoid a possible breach of sensitive data, but is such constant monitoring even ethical? So, does it really have to be a trade-off? Security for privacy, and vice versa?

Marvi Islam is from Islamabad, Pakistan and studies at Capital University of Science and Technology, Islamabad. https://www.facebook.com/marvi.islam

Police Cameras

My daughter is attending a citizen police academy. They discussed the challenges that police cameras (body, squad car, interview rooms, traffic monitoring, etc.) present, challenges that relate, in part, to the objectives of having such cameras.

1) When an officer is apprehending a suspect, video of the sequence covers a topic that is very likely to be raised in court (in the U.S., where fairly specific procedures need to be followed during an arrest). Evidence related to this has to follow very specific rules to be admissible. An example of this concept is in the Fort Collins, Colorado police FAQ, where they provide some specifics. This process requires managed documentation trails maintained by qualified experts to assure the evidence can be used. There are real expenses here beyond just having a camera and streaming or transferring the sequences to the web. Web storage has been created that is designed to facilitate this management challenge. Note that even if the prosecution does not wish to use this material, the defense may do so, and if it is not managed correctly, may seek to have the charges dismissed. (For cultures where defendants are not innocent until proven guilty, and/or where there is no body of case law or statutory defendants' rights, this may sound odd, but in the U.S. it is possible for a blatantly guilty perpetrator to have the charges against him dropped due to a failure to respect his rights.)

2) There are situations where a police officer is suspected of criminal actions. In real-time situations (like those in the news recently), the same defendants' rights need to be respected for the officer(s) involved. Again, close management is needed.

Note that in these cases, there are clear criminal activities that the police suspect at the time the video is captured, and managing the 'trail of evidence' is a well-defined activity with a cost and benefit that is not present without the cameras.
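One small technical piece of such evidence management is demonstrating that a recording has not been altered between capture and courtroom. Here is a minimal sketch of that idea in Python, assuming nothing about any vendor's actual system: hash each file on ingest into an append-only custody log that can be re-verified later. The file names are hypothetical.

```python
import hashlib
import json
import time

def sha256_of_file(path: str) -> str:
    """Stream the file through SHA-256 so large videos need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def log_ingest(path: str, log_path: str = "custody_log.jsonl") -> None:
    # Append one custody record per file; any later change to the file
    # will no longer match the recorded digest.
    record = {"file": path, "sha256": sha256_of_file(path), "ingested": time.time()}
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")

log_ingest("squadcar_2014-08-09.mp4")  # hypothetical recording
```

A real system would need far more (tamper-evident log storage, access control, audit trails of who viewed what), which is exactly the management cost the paragraph above describes.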

The vast majority of recorded data does not require the chain-of-evidence treatment. If a proper request for specific data not associated with an arrest produces data that is used in court, it is most likely to be used by a defendant, and the prosecutor is unlikely to challenge the validity of that data, since doing so would discredit their own system.

Of course there are other potential uses of the data. It might contain information relevant to a divorce action (the couple in the car stopped for the ticket: one spouse wants to know why the other person was in the car), or the images of bystanders at a site might impact the apparent privacy of those persons. (Although, in general, no right of privacy is recognized in the U.S. for persons in public.)

The Seattle police are putting some video on YouTube, after applying automated redaction software to protect the privacy of individuals captured in the frame. Just the presence of the video cameras can reduce both use of force and citizen complaints.
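As a concrete illustration of what such redaction software might do, here is a minimal sketch in Python using OpenCV's stock face detector to blur any detected faces in a frame. This is one plausible approach, not the Seattle Police Department's actual pipeline; the file names are hypothetical.

```python
# pip install opencv-python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("bodycam_frame.jpg")  # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect candidate face regions; parameters are typical defaults, not tuned.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Replace each detected face with a heavy Gaussian blur.
    frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("bodycam_frame_redacted.jpg", frame)
```

Note that a production redaction tool must also handle missed detections (a single unblurred frame defeats the purpose), which is part of why automated redaction is harder than this sketch suggests.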

There are clearly situations where the police, or the citizens involved, or both would find a video recording to be of value, even if it did not meet evidentiary rules. Of course, the concern related to such rules is the potential for inappropriate editing of the video, transforming it from an "objective" witness into one biased in one direction or another.

We have the technology; should we use it? An opinion piece by Jay Stanley in SSIT's Technology and Society journal outlines some of these issues in more detail.

Emoti Con’s

I'm not talking about little smiley faces :^( … but about how automation can evaluate your emotions and, as is the trend of this blog, how that information may be abused.

Your image is rather public: from your Facebook page, to the pictures posted from that wedding you attended, to the myriad of cameras capturing data in every store, street corner, ATM, etc. And, as you (should) know, facial recognition is already there to connect your name to that face. Your image can also be used to evaluate your emotions, automatically, with tools described in a recent Wall St. Journal article ("The Technology That Unmasks Your Hidden Emotions"). These tools can be used in real time as well as for the evaluation of static images.
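To make this concrete, here is a minimal sketch of automated emotion estimation using the open-source DeepFace library in Python. This illustrates the general technique, not the specific commercial tools the WSJ article describes; the image file name is hypothetical, and the exact return format varies between DeepFace versions.

```python
# pip install deepface
from deepface import DeepFace

# Analyze a single image for its facial emotions.
# (Recent DeepFace versions return a list with one dict per detected face.)
results = DeepFace.analyze(img_path="shopper.jpg", actions=["emotion"])

for face in results:
    # e.g. "happy", plus a per-emotion score breakdown
    print(face["dominant_emotion"], face["emotion"])
```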

So, wandering through the store, it may be that those cameras are not just picking up shoplifters, but lifting shopper responses to displays, products, and other aspects of the store. Having identified you (via facial recognition, or the RFID constellation you carry), the store can correlate your personal response to specific items. The next email you get may be promoting something you liked when you were at the store, or a well-researched, near-real-time evaluation of what 'persons like you' seem to like.

The same type of analysis can be used to analyze and respond to your reactions in a political context: candidate preferences, messages that seem to be effective. Note that this is no longer the 'applause-meter' model of judging how the audience responds, but personalized to you, as a face-recognized person observing that event. With cameras on political posters and billboards capturing images through front windshields, it may become possible to collect this data on a very wide basis, not just from those who choose to attend an event.

Another use of real-time emotional tracking could play out in situations such as interviews, interrogations, sales showrooms, etc. The person conducting the interaction may be getting feedback from automated analysis that informs the direction in which they lead it. The result might be a job offer, an arrest warrant, or a focused sales pitch.

The body language of lying is also being decoded. Presumably a next step here is automated analysis of your interactions. For those of us who never, ever lie, that may not be a problem. And of course, being a resident of New Hampshire, where the 2016 presidential season has officially opened, it would be nice to have some of these tools in the hands of the citizens as we seek to narrow down the field of candidates.


Privacy Matters

Alessandro Acquisti's TED talk, "Why Privacy Matters," lays out some key privacy issues and some revealing research into what is possible with online data. In one project, his team was able to identify students via face recognition in the few minutes needed to fill out a survey, and potentially locate their Facebook pages using that search. In a second project, they were able to deduce people's Social Security numbers (a key U.S. personal identifier) from their Facebook page data. This opens the possibility that any image of you can lead both to identifying you and to locating significant private information about you.
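The identification step Acquisti demonstrated can be approximated with off-the-shelf tools. Below is a minimal sketch using the open-source face_recognition Python library to check whether a face captured by a camera matches a public profile photo. The file names are hypothetical, and this is not Acquisti's actual experimental setup.

```python
# pip install face_recognition
import face_recognition

# Encode the face in a photo scraped from a public profile (hypothetical file).
known_image = face_recognition.load_image_file("profile_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode any faces captured by a camera during the survey.
unknown_image = face_recognition.load_image_file("webcam_capture.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

# Compare: True means the two faces likely belong to the same person.
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Match:", match)
```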

There is a parallel discussion sponsored by the IEEE Standards Association on "The Right to be Forgotten". This was triggered by a recent European court case in which an individual did not want information about his past to be discoverable via search engines. These two concepts collide when an individual seeking to be "forgotten" has their image captured by any of a range of sources (store cameras, friends posting photos, even just being "in the picture" that someone else is taking). If that image can be translated into diverse personal information, then even the efforts of the search engine providers to block the searches will be futile.

Alessandro identifies some tools that can help: the Electronic Frontier Foundation's anonymous internet portal, and Pretty Good Privacy (PGP), which can deliver a level of encryption that is very expensive to crack, with variations being adopted by Google, Yahoo, and maybe even Apple to protect the content of their devices and email exchanges. There are issues with the PGP model, and perhaps some better approaches. There is also government push-back against overly strong encryption, which is perhaps one of the best endorsements of the capability of such systems.
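The core of the PGP model is hybrid encryption: the message body is encrypted with a fast symmetric session key, and only that small session key is encrypted with the recipient's public key. Here is a minimal sketch of that pattern using Python's cryptography package; it illustrates the idea, not PGP's actual wire format or the OpenPGP standard.

```python
# pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's keypair (in real PGP this would be a long-lived published key).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the message with a fresh symmetric session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"The message body")

# ...then seal the session key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
sealed_key = public_key.encrypt(session_key, oaep)

# Recipient: recover the session key, then the message.
recovered_key = private_key.decrypt(sealed_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'The message body'
```

The design reason for the two-layer approach is cost: public-key operations are slow and limited in message size, so they protect only the session key, while the bulk data gets fast symmetric encryption.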

Behind all this is the real question of how seriously we choose to protect our privacy. It is a concept given greater consideration in Europe than in the U.S., perhaps because Europe's deeper history has proven that abuse by governments or other entities can be horrific, an experience that has not engaged the "Youth of America," nor discouraged the advertising- and commerce-driven culture that dominates the Internet.

Alessandro observes that an informed public that understands the potential issues is a critical step towards developing policy, tools and the discipline needed to climb back up this slippery slope.


Too Close for Comfort? Detecting your presence.

A group of authors in the August 2014 issue of IEEE Computer outline some pros, cons, and examples of proximity-sensing technology that initiates advertising or other actions, and may report your presence to some data-collection process. The article is called "The Dark Patterns of Proxemic Sensing."

There are simple examples most folks have encountered: the faucet that turns on when you put your hands near it, followed by the automated hand dryer or paper-towel dispenser. The paper identifies some current examples that many of us may not have encountered: the mirror that presents advertising, a wall of virtual "paparazzi" that flash cameras at you accompanied by cheering sounds, and urinals that incorporate video gaming. Some of these systems are networked, even connected to the internet. Some interact anonymously; others are at least capable of face or other forms of recognition.
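The control logic behind such systems can be very simple. The sketch below shows a proximity-triggered display loop in Python; read_distance_cm() and the display functions are hypothetical stand-ins for whatever sensor and screen a real installation would use.

```python
import random
import time

TRIGGER_CM = 120  # hypothetical engagement distance

def read_distance_cm() -> float:
    # Stand-in for a real proximity-sensor driver; simulated with random values here.
    return random.uniform(30, 300)

def show_advertisement() -> None:
    print("Playing targeted ad...")  # stand-in for driving a real display

def show_idle_screen() -> None:
    print("Idle.")

# Poll the sensor and switch the display when someone comes within range.
for _ in range(10):
    if read_distance_cm() < TRIGGER_CM:
        show_advertisement()
        # A networked installation might also log or report the encounter here,
        # which is where several of the "dark" aspects below come in.
    else:
        show_idle_screen()
    time.sleep(0.5)
```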

The article identifies eight “dark” aspects of this proximity interaction:

  1. Captive Audience – a concern about unexpected or undesired interactions in places the individual must go for other reasons.
  2. Attention Grabbing – detection and interaction allow these systems to distract the target individual, which may be problematic or just annoying.
  3. Bait and Switch – initiating the interaction with an attractive first impression, then switching to a quite different agenda.
  4. Making personal information public – for example, displaying or announcing your name upon recognition.
  5. We never forget – tracking an individual from one encounter to the next, even spanning locations for networked systems.
  6. Disguised data collection – providing (personalized) data back to some central aggregation.
  7. Unintended relationships – is that person next to you related in some way? Oh, there she is again, next to you at a different venue…
  8. Milk factor – forcing a person to go through a specific interaction (move to a location, provide information …) to obtain the promised service.

Most of these are traditional marketing/advertising concepts, now made more powerful by automation and ubiquitous networked systems. The specific emerging technologies are one potentially disturbing area of social impact. A second is the more general observation that activities we historically considered innocuous, or even desirable, may become more problematic with automation and de-personalization. The store clerk might know you by name, but do you feel the same way when the cash register or the automatic door knows you?

Issues in this area are also discussed in the Summer 2014 issue of Technology and Society, with "Omnipresent Cameras" and "Personal Safety Devices" being relevant articles in that issue.

Soft Biometrics

Karl Ricanek has an article in the September issue of Computer magazine, "Beyond Recognition: The Promise of Biometric Analytics". He points out a range of possible applications for biometric analysis beyond identifying specific individuals. Many of these are 'grist' for the social-impact mill. Karl defines biometric analytics as the discovery of potentially interesting information about a person, other than identity, using biometric signal patterns. He includes among these: emotional state, longevity, aliveness (if you are reading this, you are alive), continuous authentication, ethnicity, gender, age (demographics in general), honesty, concentration, mood, attitude, and even frustration with automated phone systems ('dial 1 if you like talking to robots, dial 2 if you would like to toss your phone out the window, …'). A few specific examples include:

  1. Audience reaction research – detecting smiles, confusion, boredom, or distress. This could help in editing movies, or in developing higher-impact advertising.
  2. Karl’s own research is on the detection of age and longevity. He has a web site, FaceMyAge, that uses facial photos for this. Apparently, if you look older than you are, you are likely to die younger, an insight life insurance companies might value; also cosmetic companies, in terms of helping you look younger (and maybe reduce your life insurance premiums?). A minimal code sketch of this kind of age estimation follows this list.
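As referenced in item 2 above, here is a rough illustration of the age-estimation idea using the open-source DeepFace library. This is not FaceMyAge's actual model; the photo file name is hypothetical, and the return format varies between DeepFace versions.

```python
# pip install deepface
from deepface import DeepFace

# Estimate apparent age from a single facial photo.
results = DeepFace.analyze(img_path="portrait.jpg", actions=["age"])
for face in results:
    print("Estimated age:", face["age"])
```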

Karl anticipates answers to everyday questions such as: "Is the speaker on TV being honest?" (not needed for QVC, politicians, or even many news programs nowadays); "How much money will I need for retirement?" (a discreet way of asking "how much time do I have left?"); "Will using this cosmetic really make me look younger?"; and the most dangerous question of all, "Does this outfit make me look fat?" (OK, Karl does not include this one.) Engineers and autistic persons are reputedly poor at reading the emotional state of others; perhaps a Google Glass app could provide some clues. Some devices for the improved transmission of biometric signals have been developed as well. My granddaughter just obtained a set of Brainwave Cat Ears, which are supposed to indicate your state (focused, in-the-zone, relaxed) …. and, 'er, ah, no, you look just great in those cat ears, not fat at all' (or at least that is what my Glass app suggested I say.)

What biometric analytics would you encourage? What unanticipated consequences do you envision?

Private Cameras vs State Cameras

A recent opinion piece in Technology and Society by Jay Stanley (ACLU) questions the impact of omnipresent cameras: every cell phone is a video device, potentially streaming to the net live. Drones, private and government, have their eyes open. Streets are monitored at traffic lights (and elsewhere) by government cameras, as are many buildings via private cameras. The next article, by Steve Mann, talks about the "black bubbles" that are used to obscure cameras, and includes delightful images of Steve and friends with similar bubbles on their heads. Steve points to lighting devices that incorporate cameras that can recognize faces and read license plates. Jay points out that today we expect significant events in the public space to be recorded. The aftermath of the Ferguson shooting was captured by a cell phone camera, but the police car recordings (if any) have not been released.

All of this leads to cultural questions about the appropriate expectations of privacy, and about possible restrictions on public recording of government activities (such as police at a traffic stop, or the evolution of a demonstration in the streets of your favorite city). It does not take much to demonstrate that eyewitnesses are poor recorders of events (see Daniel Simons' research on selective attention), which makes the availability of "recorded" evidence quite useful. With more cameras on cars (backup cameras), on persons (Glass), on buildings, on planes and drones, in light bulbs, and yes, with the increasing image quality of the cameras that turn bathroom devices on and off (Steve points out these are up to 1024 pixels), the expectations of privacy "in public" are diminishing, and the potential for photographic evidence is increasing. Jay suggests that both police and the folks they interact with act differently when officers are equipped with body cameras.

So is this good? What ethical issues, or even rules of evidence, apply? How does this vary from culture to culture?

Culture vs Technology

Freedom of speech vs. the right to be forgotten …. technology and society, but whose society? A recent European court ruled that Google (also Bing, Yahoo, etc.) might be required to remove links that are "accurate" but objectionable to the affected individual(s). It is easy, in a world with a dominating culture (the U.S.A.), and particularly for technologists working in that culture (Google, et al.), to adopt and apply the values of that culture (free speech) without being aware of alternative cultural norms.

Apparently Europe, and particularly Germany and France, has some precedents suggesting that prior offences, actions, and public knowledge should become inaccessible in the present and future. This is being considered as part of new E.U. privacy legislation, not just as a court finding.

It is easy (particularly for those of us in the U.S.A.) to hold up the sacred right of free speech (as written in the book of Constitution, verse 1:1) and ignore the concerns and abuses associated with it. Some folks online are surprised that Facebook (or other) postings of their pictures and activities may result in their being expelled from college, fired, or failing to get a job. This "long tail" left by all of us in the exponentially growing web may contain many issues of concern. For example, if I mention diabetes in this posting, might I lose health insurance? Or if someone with a very similar name is leading a quite different lifestyle, might I suffer some of the consequences? And of course, if I advocate an issue or candidate or religious affiliation, could I find myself persecuted in the media, or worse, by police showing up at my door? (Consider the recent transitions in Egypt… oops, there I go.)

Consider one example: the widespread "sex offender" registration required by many U.S. states. This has been a topic of non-academic discussion (Dear Abby) recently, but presents an interesting reference point. Note that, in this context, an individual found guilty of molesting children many times and an eighteen-year-old's indiscretions with a seventeen-year-old can be indistinguishable. The public's "right to know" would seem to apply in one case, while the chance of recurrence seems unlikely in the other; yet both may lead to loss of job opportunities, shunning by neighbors, etc.

Facilitating the oppression of religious groups or political dissidents, or even the uninformed misuse of the failings of youth, seems a good rationale for a "right to be forgotten". At the same time, and almost in the same breath, we can hear the need to know a political candidate's racist remarks, the series of lawsuits brought against a used-car dealer (the U.S. stereotype for a shady business), or perhaps that my fiancée has had three divorces in the last five years. (This is hypothetical!) The "right to be forgotten" may also be countered with the "right to never be forgotten". The Internet has created a global village, with all of the gossip and "everyone knows" implications of the spotlight of a small town.

This challenge is just beginning. With face recognition, public webcams, and many other sources of personal data being captured explicitly or implicitly, how we deal with the diversity of cultural norms is non-trivial.

What are the issues you see?

Candid Camera “Relationship Test”

The Jan. 22 Wall St. Journal has an article by Geoffrey A. Fowler evaluating two new, very small cameras designed to do "life tracking": taking a picture every 30 seconds, or as your environment changes, something like 2,000 pictures a day. This echoes some of the discussion from the ISTAS '13 conference, where the implications of this technology were a major consideration.

Geoffrey presents one benefit of having a fairly complete picture of your day: he scans back over the day to find where he left his glasses (keys, whatever). And of course there is the question of civility raised if you are taking everyone's photo just by being there. (He added a camera icon to the outside of one of his devices to make this more obvious to folks.)

As this technology shrinks and gets integrated into things, the idea of "photo free" zones may become impractical. The implications will be widespread. Can your device be used as a witness against you, in, say, that car accident, or "who was that woman I saw you with last night?" Witnesses to events will have more than just their memory to draw upon in trying to recall the details. (Some of these devices have built-in GPS units, so they capture location and time as well as the 'view'.)
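That location-and-time trail is not hypothetical; most cameras already embed it in each file's EXIF metadata. Here is a minimal sketch using the Pillow library in Python to pull the timestamp and GPS tags out of a photo. The file name is hypothetical, EXIF handling varies between Pillow releases, and life-logging devices may store such metadata differently.

```python
# pip install Pillow
from PIL import Image
from PIL.ExifTags import GPSTAGS

img = Image.open("lifelog_0042.jpg")  # hypothetical life-log frame
exif = img.getexif()

# When was the frame captured? (0x0132 is the standard DateTime tag.)
print("DateTime:", exif.get(0x0132))

# Where? (0x8825 points at the GPS sub-IFD in recent Pillow releases.)
gps = exif.get_ifd(0x8825)
for tag_id, value in gps.items():
    print(GPSTAGS.get(tag_id, tag_id), value)
```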

The article also points out that during a hiking trip with the family, he obtained a dozen good candid pictures, along with a thousand for the trash bin. So either most of the content will be "write only", never actually used, viewed, or curated, or it will take significant time to sort the wheat from the chaff.
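Automated triage could shoulder some of that sorting. One common heuristic is to discard blurry frames by thresholding the variance of the image Laplacian; below is a minimal sketch of that idea with OpenCV in Python. The directory name and threshold are hypothetical, and a real curation tool would combine several such signals (duplicates, faces, exposure).

```python
# pip install opencv-python
import glob
import cv2

BLUR_THRESHOLD = 100.0  # hypothetical cutoff; tune per camera

keep, trash = [], []
for path in glob.glob("lifelog/*.jpg"):  # hypothetical photo directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Variance of the Laplacian: low values mean little edge detail, i.e. blur.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    (keep if sharpness >= BLUR_THRESHOLD else trash).append(path)

print(f"{len(keep)} keepers, {len(trash)} candidates for the trash bin")
```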

Geoffrey provides a useful way to evaluate technologies that may require an extra measure of consideration, his "relationship test": "How does this piece of technology change not just my life, but how I interact with you?" A useful question to add to the SSIT lexicon.

ICDP 2013 — Imaging for Crime Prevention and Detection

The 5th International Conference on Imaging for Crime Prevention and Detection was held at Kingston University's Penrhyn Road Campus, London, UK, on 16-17 December 2013.

The conference aimed to create an important networking forum in which participants could discuss the present and future of image-based technologies for crime detection and prevention.

For details, please visit: http://dipersec.king.ac.uk/icdp2013