Alexa called as witness?

“Alexa, tell me, in your own words, what happened on the night in question.” … actually the request is more like “Alexa, please replay the dialog that was recorded at 9:05 PM for the jury.” The case is in Bentonville, Arkansas, and the charge is murder. Since an Echo unit was present, Amazon has been asked to disclose whatever information might have been captured at the time of the crime.

Amazon indicates that the Echo keeps less than sixty seconds of recorded sound, so it may not have that level of detail, but presumably a larger database of requests and responses exists for the night in question as well. Amazon has provided some data about purchase history, but is waiting for a formal court document before releasing any additional information.

Which raises the question of how such devices might respond to the apparent sounds of a crime in progress. “Alexa, call 911!” is pretty clear, but what about “Don’t shoot!” (or other phrases that might be ‘real’ or ‘overheard’ from a movie in the background)? An interesting future awaits us.

Who’s Monitoring the Baby Monitors?

Guest Blog entry by Cassie Phillips

With the recent, record-breaking distributed denial of service (DDoS) attacks carried out with hijacked internet-of-things (IoT) devices, the woeful state of IoT security and privacy is finally achieving some public recognition. Just recently, distinguished security experts testified to US House of Representatives subcommittees on the dangers of connected devices and the rationale for government regulation to address the security risks.

But regulation is at best a long way off, if coming at all. It is vital that owners of these devices understand that although they may see no direct consequences of hijacked IoT devices being drafted into zombie attack networks, there are many other security and privacy issues inherent in these devices. Simply put, when we introduce connected devices into our homes and lives, we are risking our privacy and safety. Just one of the horrific risks can be seen in the use of baby monitors, nanny cams, security cameras and similar devices.

There has been a sharp increase in incidents of hijacked baby monitors. Some of these hacked devices were abused to prank families by playing strange music. But too many have been used to spy on sleeping children—so much so that websites dedicated to streaming hijacked nanny cam views have sprung up, clearly serving the frightening hunger of some deeply disturbed predators. And in one particularly twisted case, a toddler kept telling his parents that he was frightened of the bad man in his baby monitor. To their horror, his parents discovered that it was no childish nightmare; a man was tormenting their son night after night after night through the baby monitor.

These cases demonstrate that the risks are not simply cases of anonymous breaches of privacy. The safety of children and families can be entirely violated. It is certain that eventually a predator will see enough through the eyes of a baby monitor to identify, target and hunt a child in the real world, with tragic consequences. What is perhaps more tragic is that only then will lawmakers wise up to the risks and demand action. And only then will the manufacturers of these products promise to fix the problems (though certainly not without protesting that because everyone else made insecure products, they were in line with industry standards and not really to blame).

In short, though we may demand action from lawmakers or responsibility from manufacturers, at this point only parents can reasonably take any action to protect their families. The knee-jerk solution may be to throw all of these devices out, but that would entirely ignore the benefits of these products and the ways in which they can still save lives. The best solution today is for parents to take charge of the situation themselves. They can do this by purchasing more reputable products, changing default passwords, and using network security tools. Secure Thoughts (where Cassie is a writer) has evaluated VPN technology that can be used to minimize this abuse in the home. Parents should also remain informed and vigilant.

With the rapid development of the IoT, we’re likely to encounter new risks on a regular basis. And until there is a global (or at least national) policy regarding the security specifications of these devices, we are going to have to secure them ourselves.

About the author: Cassie Phillips is a technology blogger at Secure Thoughts who’s passionate about security. She’s very concerned about the effect the rapidly-expanding IoT will have on our privacy and safety.

Big Brother/Data 2016

The power of big data, AI/analytics, and subtle data collection is converging toward a future only hinted at in Orwell’s 1984. With rapid developments on many fronts, it is not surprising that those of us who are only moderately paranoid have not been tracking it all. So here’s an update on some of the recent information about who is watching you, and why:

Facebook (no surprise here) has been running personality quizzes that evaluate your OCEAN score: Openness, Conscientiousness, Extroversion, Agreeableness and Neuroticism. These “free” evaluations are provided by Cambridge Analytica. The application of this data to influencing political elections is documented by the NY Times (subscription required) and quoted in part by others. The short take is that your Facebook profile (name, etc.) is combined with your personality data and with “onboarding” data from other sources such as age, income, debt, purchases, health concerns, car, gun and home ownership, and more. Cambridge Analytica is reported to have records with 3 to 5 thousand data points on each of 230 million adult Americans — which is most of us.

How do they use this data? Psychographic micro-targeted advertising is the recent application, seeking to influence voting in the U.S. election. They only support Republican candidates, so other parties will have to develop their own doomsday books. There is no requirement that the use of the quizzes be disclosed, nor that the “ads” be identified as political or approved by any candidate. The ads might not appear to have any specific political agenda; they might just point out news (or fake news) stories that play to your specific personality and have been test-marketed to validate the influence they will have on the targeted voter(s). This may inspire you to get out and vote, or to stay home and not bother — depending on which candidate(s) you support (based on your social media streams, or on more generalized characteristics if you personally have not declared your preferences). The impact: quite possibly the U.S. presidency.
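
To make the mechanism concrete, here is a minimal sketch in Python of how a quiz-derived OCEAN profile might be matched against test-marketed ad variants. The field names, weights, and ad categories are all invented for illustration; this is emphatically not Cambridge Analytica's actual system.

```python
# Hypothetical sketch of psychographic micro-targeting.
# All field names, weights, and ad variants are invented for illustration.

voter = {
    "name": "Jane Doe",
    # OCEAN traits inferred from a "free" personality quiz, scaled 0..1
    "ocean": {"O": 0.8, "C": 0.3, "E": 0.2, "A": 0.6, "N": 0.9},
    # "Onboarded" data purchased from other sources
    # (a real system would fold these into the score as well)
    "onboarded": {"age": 47, "gun_owner": True, "homeowner": True},
}

# Each ad variant has been test-marketed against certain personality
# traits; its score for a voter is a weighted sum over those traits.
ad_variants = {
    "fear_of_crime":    {"N": 1.0, "C": 0.4},  # plays to neuroticism
    "tradition_family": {"A": 0.8, "C": 0.8},  # plays to agreeableness
    "novel_policy":     {"O": 1.0},            # plays to openness
}

def score(weights, ocean):
    return sum(w * ocean.get(trait, 0.0) for trait, w in weights.items())

best_ad = max(ad_variants, key=lambda a: score(ad_variants[a], voter["ocean"]))
print(best_ad)  # -> fear_of_crime, for this high-neuroticism profile
```

Run over hundreds of millions of such records, even a crude matcher like this could shift turnout by the few percentage points that decide close races.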

But wait, that’s not all.

The U.K. is expanding its surveillance powers, requiring internet companies to retain interaction/transaction records for a year, including every web site you have accessed. This is apparently partially a response to assertions by France that similar powers had foiled an ISIS attack there. The range of use (or abuse) the UK government and its allies might apply remains to be seen (or, more likely, will remain hidden).

But consider what China is doing to encourage residents to be “sincere”. [Here is a serious limitation of my linguistic and cultural skills — no doubt there is a Mandarin word that is being translated as “sincere”, and it carries cultural implications that may not be evident in translation.] Data collected to determine your “social credibility rating” includes: tax, loan, bill, and other payments (on time?); adherence to traffic rules; family planning limits; academic record; purchasing; online interactions; the nature of information you post online; volunteer activity; and even “filial piety” (respect for elders/ancestors). And the applications of such data? So far 4.9 million airline tickets have been refused. Your promotion, or even your job opportunities, can be limited, with “sensitive” jobs — judges, teachers, accountants, etc. — being subject to review. A high score will open doors — possibly faster access to government services. By letting citizens see their score, the state can encourage them to ‘behave themselves better’. By not disclosing all of the data collected, nor all of its implications, the state can bully citizens into far greater sincerity than they might adopt if they were just trying not to break the law.
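
As a thought experiment only: the scoring mechanism described in press reports could be as simple as a weighted sum over behavioral categories. The categories, weights, and cutoff below are my invention, not the actual Chinese system.

```python
# Illustrative only: a weighted-sum "social credibility" score.
# Categories, weights, and the cutoff are invented, based loosely on
# press descriptions; each behavior is rated 0.0 (bad) to 1.0 (good).

weights = {
    "on_time_payments": 0.30,   # tax, loan, bill payments
    "traffic_rules":    0.10,
    "online_posts":     0.25,   # nature of information posted online
    "volunteer_work":   0.15,
    "filial_piety":     0.20,   # respect for elders/ancestors
}

def social_score(citizen):
    # Unreported behaviors default to a neutral 0.5
    return sum(w * citizen.get(k, 0.5) for k, w in weights.items())

citizen = {"on_time_payments": 0.6, "traffic_rules": 0.9,
           "online_posts": 0.2, "volunteer_work": 0.1, "filial_piety": 0.8}

s = social_score(citizen)
print(f"score = {s:.2f}")                                  # -> score = 0.49
print("doors open" if s >= 0.6 else "airline tickets refused")
```

The bullying power the post describes comes precisely from citizens not knowing the real weights or inputs, so they over-comply on everything.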

Your comments, thoughts and responses are encouraged, but remember — they are being recorded by others for reasons you may never know.  … Sincerely yours, Jim

Who do you want listening in at your home?

The Wall St. Journal has a note today comparing Amazon’s Echo and Google Home as voice-activated, in-home assistants. This space is fraught with impacts on technology and society — from services that can benefit house-bound individuals, to serious opportunities for abuse by hacking, for commercial purposes, or governmental ones. To put it in a simple form: you are being asked to “bug your house” with a device that listens to every noise in the house. Of course you may have already bugged your pocket with a device that is listening for the magic words “hey, Siri” (or the person next to you in the office, train, or restaurant may be carrying that “wire”). Robots that respond to “OK Google” or “Alexa” are expanding into our monitored domains. (What do folks named Alexa or Siri have to look forward to in this world? Would you name your child “OK Google”?)

The immediate use cases seem to be a cross between control of the “Internet of Things” and the specific business models of the suppliers: online sales for Amazon Alexa, and more invasive advertising for Google. Not only can these devices turn on and off your lights, they can order new bulbs … ones that blink subliminal advertising messages (uh oh, now I’ve given someone a bad idea).

From our technology and society perspective we need to look forward to the pros and cons of these devices. What high benefit services might be offered?  What risks do we run?  Are there policy or other guidelines that should be established? …. Please add your thoughts to the list …

Meanwhile I’m trying to find out why my new car’s navigation system keeps trying to take me to Scotland when I ask “Find McDonald’s”.

Robot Friends

The Wall St. Journal has a piece “Your Next Friend Could Be a Robot”, which is talking about a device in your home, not a disembodied associate on Facebook. The initial example is “embodied” as a speaker/microphone in Amazon’s Echo Dot, but the piece also covers similar devices from Google, cell phones, and even Toyota. So what?

The article focuses on a 69-year-old woman living alone who has a relationship with these devices. They are connected to the Internet 24/7, with a back-end AI voice recognition/response system. (The article asserts it’s not AI because it’s not conscious, which is a different consideration.) … Apparently “double digit” percentages of interactions with Alexa (Amazon’s non-AI personality) are “non-utilitarian”, presumably not triggering orders for Amazon products.

The good news: folks feel less lonely, more connected, and have “someone” there 24/7 responding to queries (with pre-programmed answers) such as “what are the laws of robotics” — see Reddit’s list of fun questions. … but

The bad news — it’s not clear what happens when you tell Alexa to call 911, or that you have fallen down and can’t get up. While there are “wakeup” and “sleep” words you can use, the very fact that a wakeup word can be recognized indicates that a level of 24/7 monitoring is in place. No doubt this can be hacked, tapped, and otherwise abused.
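
A minimal sketch of why wake-word recognition implies continuous listening: the device keeps a rolling audio buffer and scores every incoming frame against the wake word, even if nothing leaves the device until a match. The function names, threshold, and the sixty-second figure (echoing Amazon's stated retention) are illustrative assumptions.

```python
from collections import deque

SAMPLE_RATE = 16000      # samples per second, typical for speech audio
BUFFER_SECONDS = 60      # roughly the retention Amazon describes

# Rolling buffer: old audio falls off the back as new audio arrives, so
# the device only ever holds the most recent ~60 seconds of sound.
ring = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

def wake_word_score(frame) -> float:
    """Placeholder for an on-device keyword-spotting model."""
    return 0.0  # stub: a real device scores every frame, all day long

def send_to_cloud(samples):
    """Hypothetical upload to the voice service's back end."""
    pass  # stub

def on_audio_frame(frame):
    ring.extend(frame)                 # every frame is captured locally...
    if wake_word_score(frame) > 0.9:   # ...but only a match triggers upload
        send_to_cloud(list(ring))
```

The privacy question is whether the "only a match triggers upload" promise holds, since nothing in the hardware prevents the same loop from shipping everything.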

What is Amazon’s liability if you tell Alexa you need help and no effective response occurs?  No doubt time and lawsuits will tell.

It’s 10PM, do you know what your model is doing?

“Customers like you have also …” This concept appears, explicitly or implicitly, at many points in the web-of-our-lives, aka the Internet. Specific corporations and aggregate operations are building increasingly sophisticated models of individuals. Not just “like you”, but “you”! Prof. Pedro Domingos at UW, in his book “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World”, suggests this model of you may become a key factor in your ‘public’ interactions.
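
At its simplest, “customers like you” is nearest-neighbor matching over behavior vectors. A toy sketch with made-up purchase histories, not any vendor's actual algorithm:

```python
import math

# Toy purchase-history vectors (1 = bought item i). "Customers like you"
# reduces to finding whose vector points the same way as yours.
histories = {
    "you":   [1, 1, 0, 0, 1],
    "alice": [1, 1, 1, 0, 1],
    "bob":   [0, 0, 1, 1, 0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

you = histories["you"]
_, nearest = max((cosine(you, v), name)
                 for name, v in histories.items() if name != "you")
print("customer most like you:", nearest)   # -> alice
# Recommend what alice bought that you haven't: item index 2.
```

The “model of you” Domingos describes is this idea extended from five purchases to thousands of data points per person.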

Examples include having LinkedIn add a “find me a job” button that would conduct interviews with relevant open positions and provide you a list of the best. Or perhaps locating a house, a car, a spouse … well, maybe some things are better done face-to-face.

Apparently an Asian firm, “Deep Knowledge”, has appointed a virtual director to its board. In this case it is a construct designed to detect trends that the human directors might miss. However, one suspects that Apple might want a model of Steve Jobs around for occasional consultation, if not back in control again.

Privacy and Security

Guest Post from: Marvi Islam

Let me start with privacy and link it to security. All of us know about the privacy settings on Facebook, and we like them because we can hide from our family members the things we do and the people we’re with. But wait, what about security? How is privacy linked to security?

Let’s leave the digital platform and move our focus to our daily lives. We need security in our banks, schools, public places, and even in our homes and parks. But have you ever wondered what price we pay for this non-existent blanket of security? Privacy. Let me reiterate: security at the price of privacy. Those cute little things we see on the ceilings of our school corridors, the ones we call “CCTV”, are installed for our security. But security from what? No one bothers to ask. Maybe the authorities want to tape everything in case something bad happens, so that they can go through the tapes and catch perps red-handed. But they are taping every single thing, and we don’t take this as a breach of our privacy?

A number of times these tapes have been misused, causing niggling unpleasantness, and yet we accept it. There’s a famous proverb in Hindi that translates roughly as, “You have to sacrifice one thing to get another.” Here we sacrifice our privacy to get security. And with self-driving cars grabbing all the attention, still more data flows out in the name of staying connected and, apparently, “secure”.

Similarly, some companies monitor what their employees are doing on their computers while at work. From the company’s perspective this is meant to avoid a breach of sensitive data, but is such constant monitoring even ethical? Does it really have to be a tradeoff: security for privacy, and vice versa?

Marvi Islam is from Islamabad, Pakistan and studies at Capital University of Science and Technology, Islamabad. https://www.facebook.com/marvi.islam

Health App Standards Needed

Guest Blog from: John Torous MD, Harvard

Last year, the British National Health Service (NHS) thought it was showing the world how healthcare systems can best utilize smartphone apps — but instead provided a catastrophic example of a failure to consider the social implications of technology. The demise of the NHS ‘App Library’ now serves as a warning of the perils of neglecting the technical aspects of mobile healthcare solutions — and as a call for the greater involvement of IEEE members at this evolving intersection of healthcare and technology.

The NHS App Library offered a tool where patients could look online to find safe, secure, and effective smartphone apps to assist with their medical conditions. From major depressive disorder to diabetes, app developers submitted apps that were screened, reviewed, and evaluated by the NHS before being either approved or rejected for inclusion in the App Library. Millions of patients came to trust the App Library as a source of high-quality, secure apps. Then, one day in October 2015, the App Library was gone. Researchers had uncovered serious privacy and security vulnerabilities, with approved apps actually leaving patient data unprotected and exposed. Further findings that many approved apps also lacked any clinical evidence added to the damage. Overnight the NHS quietly removed the website (http://www.nhs.uk/pages/healthappslibrary.aspx), although the national press caught on and there was a public outcry.

As an IEEE member and an MD, I see both the potential and the peril of mobile technologies like apps for healthcare. Smartphone apps offer the promise of connecting millions of patients to immediate care, revolutionizing how we collect real-time symptom data, and in many cases offering on-the-go, live health monitoring and support. But mobile technologies also present serious security vulnerabilities, potentially leaving sensitive patient medical information in the public sphere. And without standards to guide development, the world of medical apps has become a chaotic and treacherous space. Simply go to the Apple or Android app stores, type in ‘depression’, and observe what that search returns: a sea of snake oil, apps with no security or data standards and no clinical evidence, being marketed directly to those who are ill.

The situation is especially concerning for mental illnesses. Many mental illnesses may be thought of in part as behavioral disorders, and mobile technologies like smartphones have the potential to objectively record these behavioral symptoms. Smartphones also have the potential to offer real-time interventions via various forms of e-therapy. Thus mobile technology holds the potential to transform how we diagnose, monitor, and even treat mental illnesses. But mental health data is also some of the most sensitive healthcare data, and it can quickly ruin lives if improperly disclosed or released. And the clinical evidence for the efficacy of smartphone apps for mental illness is still nascent. Yet this has not held back a sea of commercial apps that are available for download today and marketed directly to those whose illness may at times impair clear thinking and optimal decision making.

If there is one area where the societal and social implications of technology are actively in motion and in need of guidance, mobile technology for mental healthcare is it. There is an immediate need for education and standards regarding consumer-facing mobile collection, transmission, and storage of healthcare data. There is also a broader need for tools to standardize healthcare apps so that data is more unified and there is greater interoperability. Apple and Android each have their own healthcare app/device standards, via Apple’s ResearchKit and Android’s ResearchStack, but there is a need for more fundamental standards. For mobile mental health to reach its promised potential of transforming healthcare, it first needs an internal transformation. A transformation led in part by the IEEE Society on Social Implications of Technology, global mental health campaigns (changedirections.org), forward-thinking engineers, dedicated clinicians, and of course diverse patients.

If you are interested in tracking standards and developments in this area, please join the LinkedIn Mobile Mental Health App Standards group at: http://is.gd/MHealthAppGroup

John Torous, MD, is an IEEE member and currently a clinical fellow in psychiatry at Harvard Medical School. He has a BS in electrical engineering and computer sciences from UC Berkeley and a medical degree from UC San Diego. He serves as editor-in-chief of the leading academic journal on technology and mental health, JMIR Mental Health (http://mental.jmir.org/), currently leads the American Psychiatric Association’s task force on the evaluation of commercial smartphone apps, and co-chairs the Massachusetts Psychiatric Society’s Health Information Technology Committee.

T&S Magazine December 2015


IEEE Technology and Society Magazine

Volume 34, Number 4, December 2015

Departments

President’s Message
3 Improving Our “Engineering-Crazed” Image
Greg Adamson

Book Reviews
4 Hackers, Geniuses, and Geeks
6 Faxed: The Rise and Fall of the Fax Machine

Editorial
9 Reflecting on the Contribution of T&S Magazine to the IEEE
Katina Michael

Open Letter
15 Technology and Change

Interview
16 On the Road with Rick Sare… and Google Glass

Viewpoint
17 Shakespeare, Social Media and Social Networks
Fernando A. Crespo, Sigifredo Laengle, Paula Baldwin Lind and Víctor Hugo Masías

Leading Edge
20 Corporate Individualism – Changing the Face of Capitalism
László G. Lovászy

23 Multimedia and Gaming Technologies for Telerehabilitation of Motor Disabilities
Andrea Proietti, Marco Paoloni, Massimo Panella, Luca Liparulo and Rosa Altilio

31 MoodTrek – A New App to Improve Mental HealthCare
Ganesh Gopalakrishna and Sriram Chellappan

33 Alternative Planning and Land Administration for Future Smart Cities
Soheil Sabri, Abbas Rajabifard, Serene Ho, Mohammad-Reza Namazi-Rad, and Christopher Pettit

Commentary
36 Pharmaco-Electronics Emerge
Joseph R. Carvalko

41 Blockchain Thinking*
Melanie Swan

63 Information Paradox*
Levent V. Orman

Fiction
54 Held Captive in the Cyberworld
Michael Eldred

Last Word
104 Digitus Secundus: The Swipe
Christine Perakslis

Features
74 The Value of Accountability in the Cloud*
Wouter M.P. Steijn and Maartje G.H. Niezen

83 Shaping Our Technological Futures*
Reihana Mohideen and Rob Evans

88 Driver Distraction from Dashboard and Wearable Interfaces*
Robert Rosenberger

100 Are Technologies Innocent?*
Michael Arnold and Christopher Pearce

*Refereed articles.

On the cover: Blockchain Thinking. English Wikipedia/The Opte Project/Creative Commons Attribution 2.5 Generic license.

Predictive Analytics – Rhinos, Elephants, Donkeys and Minority Report

The IEEE Computer Society published “Saving Rhinos with Predictive Analytics” both in IEEE Intelligent Systems and in the more widely distributed ‘Computing Edge’ (a compendium of interesting papers taken from 13 of the CS publications and provided to members and technologists at no cost). The article describes how data-based analysis of both rhino and poacher activity, in concert with AI algorithms, can focus enforcement activities in terms of timing and location, and hopefully save rhinos.
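
Reduced to its essence, the approach scores locations by historical poaching incidents and recent rhino activity, then sends patrols where predicted risk is highest. A toy sketch with invented data and weights, not the paper's actual model:

```python
# Toy version of predictive anti-poaching analytics: rank grid cells
# by risk and patrol the riskiest first. Data and weights are invented.

past_incidents  = {"cell_A": 7, "cell_B": 1, "cell_C": 4}  # poaching events
rhino_sightings = {"cell_A": 2, "cell_B": 9, "cell_C": 5}  # recent activity

def risk(cell):
    # Risk rises with both poacher history and rhino presence.
    return 0.7 * past_incidents[cell] + 0.3 * rhino_sightings[cell]

patrol_order = sorted(past_incidents, key=risk, reverse=True)
print(patrol_order)  # -> ['cell_A', 'cell_C', 'cell_B']
```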

For those outside of the U.S.: the largest populations of elephants (Republicans) and donkeys (Democrats) are in the U.S. — these animals being symbols of the respective political parties. Now, on the brink of the 2016 presidential primaries, these critters are being aggressively hunted — ok, actually sought after for their votes. Not surprisingly, the same tools are used to locate, identify and predict the behaviour of these persons. When I was young (1964) I read a book called The 480, which described the capabilities of that timeframe for computer-based political analysis and targeting of the “groups” required to win an election. (480 was the number of groupings of the 68 million voters in 1960, used to identify which groups you needed to attract to win.) 21st-century analytics are a bit more sophisticated — with as many as 235 million groups, or one per potential voter (and over 130 million of them likely to vote). A recent kerfuffle between the Sanders and Clinton campaigns over “ownership/access” to voter records stored on a computer system operated by the Democratic National Committee reflects the importance of this data. By cross-connecting (data mining) registered-voter information with external sources such as web searches, credit card purchases, etc., the candidates can mine this data for cash (donations) and later votes. A few percentage points’ change in delivering voters to the polls (both figuratively, and by providing rides where needed) in key states can impact the outcome. So knowing each individual is a significant benefit.
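
Where The 480 grouped 68 million voters into 480 buckets, today's one-group-per-voter targeting amounts to a per-individual probability model. Here is a toy turnout sketch; the features and coefficients are invented, though real campaigns fit the equivalent from voter files and consumer data:

```python
import math

# Invented coefficients for a toy logistic turnout model.
coef = {"voted_last_time": 2.0, "age_over_65": 0.8, "recent_donation": 1.5}
intercept = -1.0

def turnout_probability(voter):
    z = intercept + sum(coef[k] for k, present in voter.items() if present)
    return 1 / (1 + math.exp(-z))   # logistic function: score -> probability

voter = {"voted_last_time": True, "age_over_65": False, "recent_donation": True}
p = turnout_probability(voter)
print(f"p(vote) = {p:.2f}")  # ~0.92: worth offering this voter a ride
```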

Predictive analytics is saving rhinos and affecting the leadership of superpowers. But wait, there’s more. Remember the movie “Minority Report” (2002)? On the surface, the movie featured computer technology able to predict future crimes by specific individuals — who were arrested to prevent the crimes. (Spoiler alert) The movie actually proposes that a group of psychics was the real source of insight. This was consistent with the original story (Philip K. Dick, 1956), written prior to The 480 and the emergence of the computer as a key predictive device. Here’s the catch: we don’t need the psychics, just the data and the computers. Just as a specific individual’s vote for a specific candidate, or a specific rhino’s risk of being poached in a specific territory, can be assigned a specific probability, we are reaching the point where aspects of the ‘Minority Report’ predictions can be realized.

Oddly, in the U.S., governmental collection and use of this level of big data is difficult due to privacy illusions, and probably to bureaucratic stovepipes and fiefdoms. These problems do not exist in the private sector, where widespread data collection on everybody at every opportunity is the norm and the only limitation on sharing is determining the price. The result is that your bank or insurance company is more likely than the government to be able to predict your likelihood of being a criminal, a terrorist, or even a victim of a crime. Big-data superpowers like Google, Amazon, Facebook and Acxiom have even more at their virtual fingertips.

Let’s assume that sufficient data can be obtained, and robust AI techniques applied, to identify a specific individual with a high probability of a problematic event — initiating, or being the victim of, a crime in the next week. And assume this data is implicit or even explicit in the hands of some corporate entity. Now what? What actions should said corporation take? What probability is needed to trigger such actions? What liability exists (or should exist) for failure to take such actions?

These are issues that the elephants and donkeys will need to consider over the next few years — we can’t expect the rhinos to do the work for us. We technologists may also have a significant part to play.