IoT and Healthcare

The July/August issue of IEEE Internet Computing focuses on health care applications of the Internet of Things (IoT). This morning, when I hit the Google.com home page, it had a birthday cake, and on hover it wished me a "Happy Birthday Jim." So, in case you were wondering whether your Google entry page might be customized for you, the answer is "yes." How do these two statements intersect? In some (near-term?) future, that page may suggest I need to visit a doctor, either because I was searching a combination of symptoms, or because the sensors surrounding me (my watch, cell phone, etc.) indicated problematic changes in my health, or some combination of data from such diverse sources.

Of course, this might be followed by a message that my health insurance, or my life insurance, was being canceled.

As this Internet Computing issue points out, there are many benefits to be gained from a network of sensors that can continuously monitor and provide feedback on health data. The first paper addresses barriers: legal, policy, interoperability, user perspectives, and technological. The second paper focuses on "encouraging physical activity," and the third considers "quality of life (QoL)": physical health, psychological well-being, social relationships, and environment (financial, safety, freedom, and so on). It is evident that IoT and health care have many points of overlap, some intended (monitoring devices) and some unintended (search analysis), and all with significant personal and social impact considerations.

Besides my ingrained paranoia (will Google automatically apply for my retirement benefits and direct the checks to their accounts?) and delusional optimism ("Your financial QoL is below acceptable norms; we have transferred $1 million into your accounts to normalize this situation. Have a good day."), there are pros and cons that will emerge.

What issues and opportunities do you see?

Employee Cell Phone Tracking

An employee in California was allegedly fired for removing a tracking app from her cell phone; the app was used to track her on-the-job and after-hours travel and locations. The app used was Xora (now part of ClickSoftware).
Here are some relevant, interesting points.

  • Presumably the cell phone was provided by her employer. It may be that she was not required to have it turned on during off hours (though it is easy to envision jobs where 24-hour on-call is expected).
  • There are clear business uses for the tracking app, which determined time of arrival at and departure from customer sites, routes taken, etc.
  • There are more intrusive aspects, which shade into the objectionable when off-hours uses are considered: tracking locations, time spent there, routes, breaks, etc. Presumably such logs could be of value in divorce suits, legal actions, and the like.
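
As a sketch of how such an app might derive arrival and departure times from raw location data, consider a simple geofence over GPS pings. The customer coordinates, geofence radius, and ping data below are entirely hypothetical; a commercial product like Xora presumably does something more sophisticated:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

def visits(pings, site, radius_m=150):
    """Collapse timestamped GPS pings into (arrival, departure) pairs
    for intervals when the device was within radius_m of the site."""
    intervals, arrived, prev_ts = [], None, None
    for ts, lat, lon in pings:
        inside = haversine_m(lat, lon, *site) <= radius_m
        if inside and arrived is None:
            arrived = ts                          # just entered the geofence
        elif not inside and arrived is not None:
            intervals.append((arrived, prev_ts))  # just left it
            arrived = None
        prev_ts = ts
    if arrived is not None:                       # still on site at last ping
        intervals.append((arrived, prev_ts))
    return intervals

# Hypothetical data: a customer site and four (timestamp, lat, lon) pings.
site = (42.9900, -71.4600)
pings = [
    (900, 42.9500, -71.4600),  # en route
    (905, 42.9901, -71.4601),  # on site
    (910, 42.9902, -71.4600),  # still on site
    (915, 42.9500, -71.4000),  # departed
]
print(visits(pings, site))  # [(905, 910)]
```

From intervals like these it is a short step to "time of arrival/departure from customer sites"; the same ping stream, of course, also reconstructs routes and breaks, which is exactly where the off-hours concerns arise.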

Consider some variations of the scenario —

  1. Employee fired for inappropriate after-hours activities.
  2. Detection of employees interviewing for other jobs (or of a whistleblower reporting their employer to authorities).
  3. Possible "blackmail" using information about an employee's off-hours activities.
  4. What responsibility does employer have for turning over records in various legal situations?
  5. What record-retention policies are required? Do various privacy notifications, policies, or laws apply?
  6. What if the employer required the app to be on a personal phone, not one that was supplied?

When is this type of tracking appropriate, and when is it not?

I've marked this with the "Internet of Things" tag as well: while the example is a cell phone, similar activities occur with in-car (and in-truck) monitoring devices, medical monitoring devices, employer-provided tablets/laptops, and no doubt new devices not yet on the market.

T&S Magazine Winter 2014 Contents


VOL. 33, NO. 4, WINTER 2014

DEPARTMENTS
4 PRESIDENT’S MESSAGE
Dear SSIT Members…
Laura Jacob

5 EDITORIAL
Enslaved
Katina Michael

9 LETTERS TO THE EDITOR
Enslavement by Technology? Reflections on the IQ2 Debate on Big Ideas

11 OPINION
Are we Enslaved by Technology?
Michael Eldred

12 LETTER TO THE EDITOR
Excessive Conference Fees

13 BOOK REVIEWS
Lonely Ideas: Can Russia Compete?
Hedy's Folly: The Life and Breakthrough Inventions of Hedy Lamarr, the Most Beautiful Woman in the World
User Unfriendly

21 OPINION
Remotely Piloted Airborne Vehicles
Philip Hall

22 COMMENTARY
Recommendations for Future Development of Artificial Agents
Deborah G. Johnson and Merel Noorman

29 COMMENTARY
Channeling Digital Convergence in Education for Societal Benefit
Arturo Serrano-Santoyo and Mayer R. Cabrera-Flores

32 TRENDS
Influential Engineers: Where Do They Come From and Where Do They Go?
J. Panaretos and C.C. Malesios

35 LEADING EDGE
Videoconferencing for Civil Commitment: Preserving Dignity
Muaid Ithman, Ganesh Gopalakrishna, Bruce Harry, and Deepti Bahl

37 COMMENTARY
Snowden’s Lessons for Whistleblowers
Brian Martin

39 OPINION
How and Why to Keep the NSA Out of Your Private Stuff – Even If You’ve “Got Nothing to Hide”
Katherine Albrecht and Liz McIntyre

42 LEADING EDGE
Using Data to Combat Human Rights Abuses
Felicity Gerry

FEATURES

44 Leaning on the Ethical Crutch: A Critique of Codes of Ethics*
Jathan Sadowski

48 User Understanding of Privacy in Emerging Mobile Markets*
Cormac Callanan and Borka Jerman-Blazic

57 Questioning Professional Autonomy in Qualitative Inquiry*
R. Varma

65 Cell Phone Use While Driving: Risk Implications for Organizations*
S. Yang and R. Parry

73 Building Trust in the Human—Internet of Things Relationship*
Ioannis Kounelis, Gianmarco Baldini, Ricardo Neisse, Gary Steri, Mariachiara Tallacchini, and Ângela Guimarães Pereira

*Refereed articles.

Cover Image: ISTOCK.

Emoti Con’s

I'm not talking about little smiley faces :^( … but about how automation can evaluate your emotions and, as is the trend of this blog, how that information may be abused.

Your image is rather public: from your Facebook page, to the pictures posted from that wedding you attended, to the myriad cameras capturing data in every store, street corner, ATM, etc. And, as you (should) know, facial recognition is already there to connect your name to that face. Your image can also be used to evaluate your emotions, automatically, with tools described in a recent Wall St. Journal article (The Technology That Unmasks Your Hidden Emotions). These tools can be used in real time as well as on static images.

So, wandering through the store, it may be that those cameras are not just picking up shoplifters, but lifting shopper responses to displays, products, and other aspects of the store. Having identified you (via facial recognition, or the RFID constellation you carry), the store can correlate your personal response to specific items. The next email you get may promote something you liked when you were at the store, or offer a well-researched, near-real-time evaluation of what "persons like you" seem to like.
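
To make the correlation concrete, here is a minimal sketch; the shopper IDs, display names, and single-number "valence" scores are entirely hypothetical stand-ins for what face-recognition and emotion-analysis tools might emit:

```python
from collections import defaultdict

# Hypothetical camera events: (shopper_id, display, valence in [-1, 1]),
# where shopper_id comes from face recognition and valence from
# automated emotion analysis of the shopper's expression.
events = [
    ("shopper_17", "spring jackets", 0.8),
    ("shopper_17", "spring jackets", 0.6),
    ("shopper_17", "garden tools", -0.3),
    ("shopper_42", "garden tools", 0.9),
]

def apparent_likes(events, threshold=0.5):
    """Average the valence per (shopper, display) and return, per shopper,
    the displays whose mean valence exceeds the threshold."""
    scores = defaultdict(list)
    for shopper, display, valence in events:
        scores[(shopper, display)].append(valence)
    liked = defaultdict(list)
    for (shopper, display), vs in scores.items():
        if sum(vs) / len(vs) > threshold:
            liked[shopper].append(display)
    return dict(liked)

print(apparent_likes(events))
# {'shopper_17': ['spring jackets'], 'shopper_42': ['garden tools']}
```

Feeding the result into a per-shopper email campaign is a trivial join away, which is rather the point.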

The same type of analysis can be used in analyzing and responding to your reactions in a political context: candidate preferences, messages that seem to be effective. Note that this is no longer the "applause-meter" model of deciding how an audience responds; it is personalized to you, as a face-recognized person observing the event. With cameras capturing images through front windshields or mounted on political posters/billboards, it may be possible to collect this data on a very wide basis, not just for those who choose to attend an event.

Another use of real-time emotional tracking could play out in situations such as interviews, interrogations, sales showrooms, etc. The person conducting the session may get feedback from automated analysis that informs the direction they take the interaction. The result might be a job offer, an arrest warrant, or a focused sales pitch.

The body language of lying is also being translated. Presumably a next step is automated analysis of your interactions. For those of us who never, ever lie, that may not be a problem. And of course, as a resident of New Hampshire, where the 2016 presidential season has officially opened, it would be nice to have some of these tools in the hands of the citizens as we seek to narrow down the field of candidates.


Eavesdropping Barbie?

So, should children have toys that combine speech recognition and a Wi-Fi connection to capture and respond to what they say, potentially recording their conversations as well as feeding them "messages"? Welcome to the world of Hello Barbie.

Perhaps I spend too much time thinking about technology abuse … but let's see. There are political/legal environments (think 1984 and its current variants) where capturing voice data from a doll/toy/IoT device could be used as a basis for arrest and jail (or worse). Can Barbie be called as a witness in court? And of course there are the "right things to say" to a child, like "I like you" (dolls with pull strings do that), and things you may not want your doll telling your child ("You know, I just love that new outfit," or "Wouldn't I look good in that new Barbie car?"), or worse ("Your parents aren't going to vote for that creep, are they?").

What does a Hello Barbie doll do when a child is clearly being abused by a parent? Can it contact 9-1-1? Are the recordings available for prosecution? What level of abuse warrants action? And what liability exists for failure to report abuse?

Update: Hello Barbie is covered in the NY Times 29 March 2015 Sunday Business section, wherein it is noted that children under 13 have to get parental permission to enable the conversation system (assuming they understand the implications). Apparently children need to "press a microphone button on the app" to start an interaction. Also, "parents.. have access to.. recorded conversations and can .. delete them." This confirms that a permanent record is kept until parental action triggers deletion. Finally, we are assured there are "safeguards to ensure that stored data is secure and can't be accessed by unauthorized users." Apparently Mattel and ToyTalk (the technology providers) have better software engineers than Home Depot, Target, and Anthem.

Who is Driving My Car (revisited)

Apparently my auto insurance company has not been reading my recent blog entries. It has introduced a device, "In-Drive," that will monitor my driving habits and provide a discount (or increase) in my insurance rates.

There are a few small problems. The device connects to the car's diagnostic port, allowing it, or a hacker, to take control of the car (brakes, acceleration, etc.; see the prior blog entry). It is connected to the mothership (E.T. phones home), and that channel can be used both ways, so the hacker who takes over my car can be anywhere in the world. I can think of three scenarios where such an attack is actually plausible.

  1. Someone wants to kill the driver (very focused, difficult to detect).
  2. Blackmail: bad guys decide to crash a couple of cars, or threaten to, and demand payment to avoid mayhem. (What would the insurance company CEO say to such a demand? Don't they have insurance for this?)
  3. Terrorism: while many cyber attacks do not yield the "blood on the front page" impact that terrorists seek, this path can. Imagine ten thousand cars all accelerating and losing their brakes at the same time … it would probably get the desired coverage.

As previously mentioned, proper software engineering (now a licensable profession in the U.S.) could minimize this security risk.

Then there is privacy. The insurance company's privacy policy does not allow it to collect the data that its web page claims this device will collect, so clearly privacy was an afterthought in this case. The data collected is unclear: there is a statement about the type of data collected and, a few FAQs later, a contradictory indication that the location data is only accurate within a forty-square-mile area, except maybe when it is more accurate. What is stored, for what period of time, accessible to which interested parties (say, a divorce lawyer), and with what protections is unclear. A different insurance company, Anthem, encountered a major attack that compromised identity information (at least) for a large number of persons. I'm just a bit skeptical that my auto insurance company has analyzed that situation and upgraded its systems to avoid similar breaches and loss of data. For those wondering what types of privacy policies might make sense, I encourage you to view the OECD privacy principles and examples. Organizations that actually are concerned with privacy would cover all of these bases, at least in their privacy statements. (Of course, they can do this and still have highly objectionable policies, or change their policies without notice.)

Who is Driving Your Car?

A recent CBS Sixty Minutes program interviewed folks at DARPA, including a demonstration of how a computer-laden modern car could be hacked and controlled.

Computers in cars are not a new thing (even the dozens we see in new models), and they have been interconnected for some time as well. Connecting your car to the network is a more recent advance; OnStar is one variation that has been on board for a while. The ads for this service suggest the range of capabilities: unlock your car for you, turn on your ignition, detect that you may have been in an accident (air bag deployed, and perhaps other monitoring capabilities), and of course they know where your car is; if it is stolen, they can disable it. Presumably a hacker can do all of these as well, and the DARPA demonstration shows some of the implications: stopping the car, acceleration, etc. Criminals have already acquired armies of zombie computers to use in attacks, blackmail, etc. Imagine having a few hundred zombie cars in a major city like LA, enabling either terror or blackmail.

An additional sequence on Sixty Minutes shows the hacking of a drone, and, perhaps equally important, a re-programmed drone that is not (as easily) accessed/hacked. Behind this is an issue of software engineering and awareness. The folks making drones, cars, and other Internet of Things (IoT) objects are not "building security in." What is needed, for each IoT-enabled device, is an awareness of the security risks involved: not just abuse of that particular device, but also how that abuse might impact other devices on the network, or the health and safety of the user and the public.

A recent dialog with some IEEE-USA colleagues surfaced the question of where software engineering licensing (professional engineers) might be required. We used video games as an example of a point where it did not seem appropriate; of course, that all breaks down if your video game can take over your car or your pacemaker.


Your DNA into Your Picture

A recent Wall St. Journal interview with J. Craig Venter indicates his company is currently working on translating DNA data into a "photo of you," or the sound of your voice. The logic, of course, is that genetics (including epigenetic elements) includes the parts list, assembly instructions, and many of the finishing details for building an individual. So it may not come as a surprise that a DNA sample can identify you as an individual (even as distinct from your identical twin, considering mutations and epigenetic variations), or perhaps even be used to create a clone. But having a sample of your DNA translated into a picture of your face (presumably at different ages) or an imitation of your voice is not something that had been in my genomic awareness.

The DNA sample from the crime scene may do more than identify the perp; it may be the basis for generating a "police sketch" of her face.

The movie Gattaca projected a society where genetic evaluation was a make-or-break factor in selecting a mate, getting a job, and other social decisions. But it did not venture into the possibility of not just evaluating the genetic desirability of a mate, but projecting their picture some years into the future. "Will you still need me … when I'm sixty-four?"

The interview also considers some of the ethical issues surrounding insurance, medical treatment and extended life spans … What other non-obvious applications can you see from analyzing the genomes and data of a few million persons?

Amazon vs Hachette – Tech Consolidation Impact on Emerging Authors

The dispute between Amazon and book publisher Hachette reached a settlement in November. The Authors United group, formed by a number of top-selling authors including Stephen King, sent a letter to the Amazon Board of Directors expressing concern with "sanctions" directed at Hachette authors, including "refusing pre-orders, delaying shipping, reducing discounting, and using pop-up windows to cover authors' pages and redirect buyers to non-Hachette books." The group has not yet resolved its concerns about the impact of this applied technology. There are financial and career implications from the loss of Amazon as a sales channel, even for just the months of the dispute: reduced sales for proven best-selling authors, and for first-time authors, reduced sales can be the end of a career.

The Bangor Daily News indicates this group is pressuring the Federal government and exploring a law suit to address some of these damages.

A key question is the monopolistic potential of a single major channel for selling a class of products. Amazon is reported in this article to be the source of 41% of new-book sales in the U.S., and some best-selling authors report having been "disappeared," with searches for their names on Amazon yielding no results.

Data mining makes it possible to associate authors with publishers and manipulate their visibility via online sales channels. There are legal and ethical issues here that span beyond the immediate Hachette case. Apple is continuing its e-book antitrust battle, claiming a "David vs. Goliath" position in which Amazon holds 90%+ of e-book sales.

Both Apple and Amazon hold significant control over critical channels that creators (of books, software, etc.) need, both to sell their products and even to become visible to potential readers/users/consumers. Both are for-profit companies that apply their market power and technology to maximize their profits (which is what capitalism and stockholders expect). The creative individuals producing indie (or even traditional-channel) works, who might be expected to benefit from the global reach of the Internet, can get trampled when these mammoths charge toward their goals.

Is the Internet creating new opportunities, or consolidating to create concentrated bastions of power? (Or both?) Oddly, this comes around to parallel issues with "net neutrality" and how the entertainment industry relates to Internet channels; perhaps there is a broader set of principles involved.


US States use Big Data to Catch Big Thieves

Various states are using big-data tools, such as the LexisNexis database, to identify folks who are filing false tax returns. A recent posting at the Pew Trusts indicates that "Indiana spotted 74,782 returns filed with stolen or manufactured identities as of the end of last month with its new identity-matching effort. Without it, the Department of Revenue caught just 1,500 cases of identity theft out of more than 3 million returns filed in all of 2013."

The article goes on to outline other ways states are using big data. This can include focused use of third-party data sets (e.g., tax-refund validation), or ways to span state data sets to surface "exceptions." A state can cross-check driver's license records with car registrations, property tax records, court records, etc., to ultimately identify wrongdoers.

This harkens back to my own family's experience when my daughter was working for a catalog sales company. She was assigned the task of following up on "invalid credit cards" to obtain valid entries so the items could ship. She discovered, via her own memory of contact data, that a number of invalid credit cards, used with a variety of names, were tied to a single address. She contacted the credit card companies to point out this likely source of fraud, only to find that they incorporated the cost of credit fraud into their cost of doing business and were not interested in pursuing an apparent abuser. Big data, appropriate queries, and a willingness to pursue abuse could yield much greater results than the coincidental awareness of an alert employee.
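
The pattern my daughter spotted by memory is trivial to automate. A minimal sketch, with hypothetical order records:

```python
from collections import defaultdict

# Hypothetical order records: (customer_name, card_status, ship_to_address).
orders = [
    ("A. Smith", "declined", "12 Elm St"),
    ("B. Jones", "declined", "12 Elm St"),
    ("C. Brown", "declined", "12 Elm St"),
    ("D. White", "ok", "7 Oak Ave"),
]

def suspicious_addresses(orders, min_names=2):
    """Flag shipping addresses where multiple distinct names appear
    on orders with declined/invalid cards."""
    names_by_addr = defaultdict(set)
    for name, status, addr in orders:
        if status == "declined":
            names_by_addr[addr].add(name)
    return [addr for addr, names in names_by_addr.items()
            if len(names) >= min_names]

print(suspicious_addresses(orders))  # ['12 Elm St']
```

A one-pass query like this, run routinely over the order database, surfaces exactly the kind of anomaly that otherwise depends on an alert employee's memory.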

So … here are the questions that come to my mind:

  1. What are the significant opportunities for pursuing ne'er-do-wells with big data, whether by governments or by industry?
  2. What potential abuses may emerge from similar approaches being applied in less desirable ways (or with more controversial definitions of ne'er-do-well)?