Who’s Monitoring the Baby Monitors?

Guest Blog entry by Cassie Phillips

With the recent, record-breaking distributed denial of service (DDoS) attacks carried out with hijacked internet-of-things (IoT) devices, the woeful state of IoT security and privacy is finally achieving some public recognition. Just recently, distinguished security experts testified to US House of Representatives subcommittees on the dangers of connected devices and the rationale for government regulation to address the security risks.

But regulation is at best a long way off, if coming at all. It is vital that owners of these devices understand that although they may see no direct consequences of hijacked IoT devices being drafted into zombie attack networks, there are many other security and privacy issues inherent in these devices. Simply put, when we introduce connected devices into our homes and lives, we are risking our privacy and safety. Just one of the horrific risks can be seen in the use of baby monitors, nanny cams, security cameras and similar devices.

There has been a sharp increase in incidents of hijacked baby monitors. Some of these hacked devices were abused to prank families by playing strange music. But too many have been used to spy on sleeping children—so much so that websites dedicated to streaming hijacked nanny cam views have sprung up, clearly serving the frightening hunger of some deeply disturbed predators. And in one particularly twisted case, a toddler kept telling his parents that he was frightened of the bad man in his baby monitor. To their horror, his parents discovered that it was no childish nightmare; a man was tormenting their son night after night after night through the baby monitor.

These cases demonstrate that the risks are not simply anonymous breaches of privacy. The safety of children and families can be entirely violated. It is all but certain that eventually a predator will see enough through the eyes of a baby monitor to identify, target and hunt a child in the real world, with tragic consequences. Perhaps more tragic still, only then will lawmakers wise up to the risks and demand action. And only then will the manufacturers of these products promise to fix the problems (though certainly not without arguing that, because everyone else made insecure products too, they are in line with industry standards and not really to blame).

In short, though we may demand action from lawmakers or responsibility from manufacturers, at this point only parents reasonably can take any actions at all to protect their families. The knee-jerk solution may be to throw all of these devices out, but that would entirely ignore the benefits of these products and the ways in which they can still save lives. The best solutions today are for parents to take charge of the situation themselves. They can do this by purchasing more reputable products, changing their default passwords and using network security tools. Secure Thoughts (where Cassie is a writer) has evaluated VPN technology that can be used to minimize this abuse in the home. Parents should also remain informed and vigilant.
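The "take charge" steps above can be made concrete. Here is a minimal sketch, in Python, of one network-hygiene check a parent could run: testing whether a device on the home LAN still answers on services commonly shipped with default credentials. The port list and the sample address are assumptions for illustration, not taken from any vendor's documentation, and this is a toy check rather than a full audit tool.

```python
import socket

# Services commonly left open on cheap cameras and monitors (illustrative list).
RISKY_PORTS = {23: "telnet", 80: "http admin page", 554: "rtsp video stream"}

def open_risky_ports(host, ports=RISKY_PORTS, timeout=0.5):
    """Return the names of risky services that accept a TCP connection."""
    found = []
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means "connected"
                found.append(name)
    return found

# e.g. open_risky_ports("192.168.1.50")  # hypothetical camera address on the LAN
```

If a baby monitor shows up with telnet or an unencrypted admin page open, that is a strong hint its default password has never been changed.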

With the rapid development of the IoT, we’re likely to encounter new risks on a regular basis. And until there is a global (or at least national) policy regarding the security specifications of these devices, we are going to have to secure them ourselves.

About the author: Cassie Phillips is a technology blogger at Secure Thoughts who’s passionate about security. She’s very concerned about the effect the rapidly-expanding IoT will have on our privacy and safety.


Big Brother/Data 2016

Big data, AI/analytics, and subtle data collection are converging toward a future only hinted at in Orwell's 1984. With rapid developments on many fronts, it is not surprising that those of us who are only moderately paranoid have not been tracking it all. So here is an update on some of the recent information on who is watching you, and why:

Facebook (no surprise here) has been running personality quizzes that evaluate your OCEAN score: Openness, Conscientiousness, Extroversion, Agreeableness and Neuroticism. These "free" evaluations are provided by Cambridge Analytica. The application of this data to influencing political elections is documented by the NY Times (subscription required) and quoted in part by others. The short take is that your Facebook profile (name, etc.) is combined with your personality data and with "onboarding" data from other sources such as age, income, debt, purchases, health concerns, car, gun and home ownership, and more. Cambridge Analytica is reported to have records with 3 to 5 thousand data points on each of 230 million adult Americans, which is most of us.

How do they use this data? Psychographic micro-targeted advertising is the immediate application, seeking to influence voting in the U.S. election. They only support Republican candidates, so other parties will have to develop their own doomsday books. There is no requirement that the use of the quizzes be disclosed, nor that the "ads" be identified as political or approved by any candidate. The ads might not appear to have any specific political agenda; they might just point out news (or fake news) stories that play to your specific personality and have been test-marketed to validate the influence they will have on the targeted voter(s). This may inspire you to get out and vote, or to stay home and not bother, depending on which candidate(s) you support (based on your social media streams, or on more generalized characteristics if you have not personally declared your preferences). The impact: quite possibly the U.S. presidency.
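The mechanics described above can be sketched in a few lines. This is a toy illustration of psychographic micro-targeting: the trait names follow the OCEAN model, but the ad copy and the "dominant trait" selection rule are invented for illustration; Cambridge Analytica's actual model is not public.

```python
from dataclasses import dataclass

@dataclass
class VoterProfile:
    # OCEAN trait scores in [0, 1], as a quiz might estimate them.
    openness: float
    conscientiousness: float
    extroversion: float
    agreeableness: float
    neuroticism: float

# Invented ad variants, one per trait (illustrative, not real campaign copy).
AD_VARIANTS = {
    "openness": "change-framed story about a new opportunity",
    "conscientiousness": "duty-framed reminder that voting is a civic obligation",
    "extroversion": "social-proof story: 'your neighbors are all voting'",
    "agreeableness": "community-framed appeal to help others",
    "neuroticism": "fear-framed story about rising crime",
}

def pick_ad(profile):
    """Serve the ad variant matched to the voter's dominant OCEAN trait."""
    traits = vars(profile)               # trait name -> score
    dominant = max(traits, key=traits.get)
    return AD_VARIANTS[dominant]
```

Even this crude version shows why disclosure matters: the anxious voter and the civic-minded voter see entirely different "news", and neither knows why.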

But wait, that’s not all.

The U.K. is expanding its surveillance powers, requiring internet companies to retain interaction/transaction records for a year, including every web site you have accessed. This is apparently in partial response to assertions by France that similar powers had foiled an ISIS attack there. The range of use (and abuse) the UK government and its allies might apply remains to be seen (or, more likely, will remain hidden).

But consider what China is doing to encourage residents to be "sincere". [Here is a serious limitation of my linguistic and cultural skills: no doubt there is a Mandarin word being used and translated as "sincere", and it carries cultural implications that may not be evident in translation.] Data collected to determine your "social credibility rating" includes: tax, loan, bill and other payments (on time?), adherence to traffic rules, family planning limits, academic record, purchasing, online interactions, the nature of information you post online, volunteer activity, and even "filial piety" (respect for elders/ancestors). And the applications of such data? So far 4.9 million airline tickets have been refused. Your promotion, or even job opportunities, can be limited, with "sensitive" jobs (judges, teachers, accountants, etc.) subject to review. A high score will open doors, such as faster access to government services. By letting citizens see their score, they can be encouraged to 'behave themselves better'. By not disclosing all of the data collected, nor all of the implications, the state can bully citizens into far greater sincerity than they might adopt if they were just trying not to break the law.
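To make the incentive structure concrete, here is a toy sketch of such a rating. The factor weights and the privilege thresholds below are invented for illustration; the real system's formula is not public, which is exactly the point of the paragraph above.

```python
# Invented weights for life events the state is reported to track.
FACTORS = {
    "on_time_payment": +10,
    "volunteer_activity": +2,
    "traffic_violation": -5,
    "sensitive_post": -20,
}

def social_score(events, base=100):
    """Sum weighted life events into a single score, as the state might."""
    return base + sum(FACTORS[kind] * count for kind, count in events.items())

def privileges(score):
    """Doors a given score opens or closes (illustrative thresholds)."""
    return {
        "buy_airline_ticket": score >= 80,
        "hold_sensitive_job": score >= 120,   # judges, teachers, accountants
        "fast_track_services": score >= 150,
    }
```

Note that a citizen who can see the score but not the weights has no choice but to over-comply, which is the bullying effect described above.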

Your comments, thoughts and responses are encouraged, but remember — they are being recorded by others for reasons you may never know.  … Sincerely yours, Jim

Who do you want listening in at your home?

The Wall St. Journal has a note today comparing Amazon's Echo and Google Home as voice-activated, in-home assistants. This space is fraught with impacts on technology and society, from services that can benefit house-bound individuals to serious opportunities for abuse by hacking, for commercial purposes or governmental ones. To put it simply: you are being asked to "bug your house" with a device that listens to every noise in the house. Of course, you may have already bugged your pocket with a device that is listening for the magic words "hey, Siri" (or the person next to you in the office, train or restaurant may be carrying that "wire"). Robots that respond to "OK Google" or "Alexa" are expanding into our monitored domains. (What do folks named Alexa or Siri have to look forward to in this world? Would you name your child "OK Google"?)

The immediate use cases seem to be a cross between control of the "Internet of Things" and the specific business models of the suppliers: online sales for Amazon's Alexa, and more invasive advertising for Google. Not only can these devices turn your lights on and off, they can order new bulbs… ones that blink subliminal advertising messages (uh oh, now I've given someone a bad idea).

From our technology and society perspective we need to look ahead to the pros and cons of these devices. What high-benefit services might be offered? What risks do we run? Are there policy or other guidelines that should be established? Please add your thoughts to the list…

Meanwhile I’m trying to find out why my new car’s navigation system keeps trying to take me to Scotland when I ask “Find McDonald’s”.


Bond Doesn’t make the Ethics Cut

For those of us who have been enjoying the antics of 007, aka James Bond, and those of us in the real world who have been providing technology that helps our covert entities accomplish their missions, it is worthwhile to note that Alex Younger, head of the UK's MI6 agency (which of course does not exist), indicates that Bond's personality and activities do not meet its ethical standards.

It’s safe to say that James Bond wouldn’t get through our recruitment process and, whilst we share his qualities of patriotism, energy and tenacity, an intelligence officer in the real MI6 has a high degree of emotional intelligence, values teamwork and always has respect for the law… unlike Mr Bond.

27 Oct 2016 UK Telegraph article

A number of technologists are called upon to support covert, military or police organizations in their countries. There is some comfort in thinking that such entities, including MI6 (yes, it is real), apply some level of ethical standards. That does not exempt individuals from also applying their own professional and other standards in their work.

AI Ethics

A growing area reflecting the impact of technology on society is ethics and AI. This has a few variations: one is what is ethical in developing or applying AI; the second is what is ethical for AIs. (Presumably, for an AI to select an ethical versus unethical course of action, either it must be programmed that way, or it must learn what is ethical as part of its education/awareness.)

Folks playing in the AI ethics domain include a recent consortium of industry players (IBM, Google, Facebook, Amazon and Microsoft), the IEEE Standards folks, and the White House (with a recent white paper).

This is a great opportunity for classroom learning about the issues, and for developing deep background for policy and press folks; concerns will emerge here (consider self-driving cars, robots in warfare or police work, etc.), and of course for the general public, where misconceptions and misinformation are likely. We see many movies where evil technology is a key plot device, and get many marketing messages on the advantages of progress. The long-term challenge of informed evolution in this area will require less simplistic perspectives on the opportunities and risks.

There is a one-day event in Brussels on Nov. 15, 2016 that will provide a current view of some of the issues and discussions.


Robot Friends

The Wall St. Journal has a piece, "Your Next Friend Could Be a Robot", which is about a device in your home, not a disembodied associate on Facebook. The initial example is "embodied" as a speaker/microphone in Amazon's Echo Dot, but the piece also covers similar devices from Google, cell phones and even Toyota. So what?

The article focuses on a 69-year-old woman living alone who has a relationship with these devices. They are connected to the Internet 24/7, with a back-end AI voice recognition/response system. (The article asserts it is not AI because it is not conscious, which is a different consideration.) Apparently "double digit" percentages of interactions with Alexa (Amazon's non-AI personality) are "non-utilitarian", presumably not triggering orders for Amazon products.

The good news: folks feel less lonely, more connected, and have "someone" there 24/7 responding to queries (with pre-programmed answers) such as "what are the laws of robotics"; see Reddit's list of fun questions. But…

The bad news: it is not clear what happens when you tell Alexa to call 911, or that you have fallen down and can't get up. While there are "wakeup" and "sleep" words you can use, the very fact that a wakeup word can be recognized indicates that a level of 24/7 monitoring is in place. No doubt this can be hacked, tapped and otherwise abused.
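The point about wakeup words deserves unpacking. Here is a minimal sketch of why wake-word devices imply continuous monitoring: the device must inspect every audio frame just to notice whether the wake word was spoken. The frame format and the string-match "detector" below are invented stand-ins, not Amazon's actual pipeline.

```python
def contains_wake_word(frame: bytes) -> bool:
    # Stand-in for a real on-device acoustic model.
    return b"alexa" in frame.lower()

def listen(frames):
    """Inspect every frame; yield only the ones that would trigger the cloud."""
    for frame in frames:
        # Every single frame is examined locally, around the clock...
        if contains_wake_word(frame):
            # ...but only wake-word frames (and what follows) leave the device.
            yield frame
```

Even in this toy version, a "sleep" word could only change what is done with the audio; the microphone stream is still being read the whole time, and anything reading that stream is a target for tampering.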

What is Amazon’s liability if you tell Alexa you need help and no effective response occurs?  No doubt time and lawsuits will tell.

Internet Resilience

The Internet is a widespread tool reflecting, to some degree, free speech and freedom of the 'press'. As such, it is a threat to entities that wish to suppress these, or make them subservient to other priorities. A recent report on DefenseOne.com outlines the ways in which some countries have been able to put an "on-off" switch in place, and to use it. The trick is having all or most traffic go through a small number of (authorized) intermediate nodes where the plug can be pulled.

Countries like Egypt and China have such bottlenecks. Countries with large numbers of intermediate nodes connected outside the country include Canada, Germany and the Netherlands. Surprisingly, Russia has a very large number of such connections, explained by the article as a complexity designed to make tracking cyber-crime nearly impossible.
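The bottleneck idea can be stated as a simple graph property: model the country's networks as nodes, and the "on-off switch" is the set of links crossing the national border. The node names and links below are invented for illustration, not real topology data.

```python
def border_cut(edges, domestic):
    """Return the links that cross the national border: the chokepoints."""
    return [(a, b) for a, b in edges if (a in domestic) != (b in domestic)]

# A country with two domestic ISPs and only two outside links is easy to
# cut off; a well-connected country would have hundreds of such links.
EDGES = [
    ("ISP-A", "ISP-B"),        # domestic link: survives a border cut
    ("ISP-A", "Foreign-X"),    # border link
    ("ISP-B", "Foreign-Y"),    # border link
]
DOMESTIC = {"ISP-A", "ISP-B"}
```

The size of that cut is the measure of resilience: pulling two plugs silences the toy country above, while a Canada or a Netherlands would require coordinated action at hundreds of points.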

Hacking Medical Devices

Johnson & Johnson recently disclosed that one of its insulin pumps might be subject to hacking. This follows assertions that pacemakers and implanted defibrillators might also be subject to attack. No doubt some wireless medical devices will have security vulnerabilities, with at least software if not hardware attack vectors.

The motives for attack are perhaps equally important. Hacking a fleet of cars has widespread visibility and will be associated with a different set of motives than a personal attack via a medical device. However, murder or assassination are potential uses for these types of flaws.

"No instances of medical-device hacking have been disclosed," according to the related WSJ article. Of course, when a diabetic dies of an insulin excess or deficit, murder by hacking might not be on the post-mortem evaluation list. The abuses here are (hopefully) rare, but the lack of disclosure does not imply the lack of a successful attack.

Killing Mosquitoes

The elimination of malaria (438,000 deaths per year) and a number of other deadly or debilitating diseases (Zika, dengue fever, yellow fever, etc.) is often a war against the mosquitoes that carry them. Bill Gates has designated the mosquito "the deadliest animal in the world", and fighting these diseases is a top priority for the Gates Foundation. Another wealthy ex-Microsoft wizard, Nathan Myhrvold, has developed a prototype laser to zap the bugs selectively. And a recent Wall St. Journal article describes a variety of genetic engineering attacks in development. With the spread of these diseases beyond their traditional "range", their impact will increase, as will the needs of a broader range of countries.

There are a number of technology/society impacts of interest here. First, any objective for which there are multiple, diverse approaches each likely to succeed will probably be accomplished, so don't bet on the bugs here. (I know, "Jurassic Park" seeks to make the point that "Nature will find a way", and that is often true, but humans have been so effective at driving species extinct that geologists have declared this a new age, the Anthropocene.)

Second, if anyone wonders how to change the world, the answer clearly is technology. From DDT-impregnated sleeping nets to lasers and genetic engineering, we are talking tech and "engineering thinking". (My granddaughter has a T-shirt: front side "A-Stounding", back side "Stoundings: persons who like to solve problems rather than cause them.") I call those folks Technologists. Bugs beware: you have a whole generation of robotics-competition veterans and Minecraft modders headed your way.

Third: is this a good idea? Note that there are significant variations. Some approaches target just one species (Aedes aegypti, at least outside its original forest habitat), others target a wider range of species, and others focus on particular areas. One recurrent human failure is anticipating the consequences of our actions. What animals depend on these critters for dinner, and so on up the food chain? What plants depend on them for pollination? We abound in ignorance on such matters, and we find it easier to fund research for eradication than for understanding.

So, should we eliminate the deadliest animal on earth? (Let me qualify: other than Homo sapiens.)


To GO or Not to GO?

Pokemon Go has become a delightful and disturbing experiment in the social impact of technology. This new "free" software for smart phones implements an augmented reality, overlaying the popular game on the real world. Fans wander the streets, byways, and public (and in some cases private) spaces, following the elusive characters on their smart phones to capture them "in world" or to collect virtual items. The uptake has been amazing, approaching Twitter in terms of user-hours within days of introduction. It has also added $12 billion to Nintendo's stock value (almost doubling it).

Let's start with "free", and $12 billion. The trick is a no-holds-barred privacy policy. Not surprisingly, the game knows who you are and where you are. It can also access/use your camera, storage, email/phone contacts, and potentially your full Google account (email contents, Drive contents, etc.). The money comes because all of this is for sale, in real time. ("While you track Pokemon, Pokemon Go tracks you", USA Today, 12 July 16) Minimally, you can expect to see "Lure Modules" (a game component) used to bring well-vetted (via browser history, email, call history, disk content, etc.) customers into stores that combine ad promotions with in-store characters. Perhaps one offers your favorite flavor of ice cream, or draws you into a lawyer's office that specializes in the issues you have been discussing on email, or a medical office that… well, you get the picture, and those are just the legitimate businesses. Your emails from your bank may encourage less honest folks to lure you into a back alley near an ATM, a genre of crime that has only been rumored so far.

The July 13th issue of USA Today outlines an additional set of considerations. Users are being warned by police, property owners, and various web sites for various reasons. The potential for wandering into traffic, into a sidewalk obstruction, or over the edge of a cliff while pursuing an elusive virtual target is non-trivial (is there a murder plot hiding in here?). Needless to say, playing while driving creates a desperate need for self-driving cars. Since the targets change with time of day, folks are out at all hours, in all places, doing suspicious things, which triggers calls to police. Some memorial sites, such as Auschwitz and the Washington, DC Holocaust Memorial Museum, have asked to be excluded from the play-map. There are clearly educational opportunities that could be built into the game; tracing Boston's Freedom Trail while requiring player engagement with related topics is one possible example. However, lacking explicit consideration of the educational context, there are areas where gaming is inappropriate. Also, some public areas are closed after dark, and the game may result in players trespassing in ways not envisioned by its creators, which may create unhealthy interactions with the owners and residents of those areas.

One USA Today article surfaces a concern that very likely was missed by Nintendo, and is exacerbated by the recent deaths of black men in US cities and the shooting of police in Dallas: "For the most part, Pokemon is all fun and games. Yet for many African Americans, especially men, their enjoyment is undercut by fears they may raise suspicion with potentially lethal consequences." Change the countries and communities involved, and similar concerns may emerge elsewhere as well. This particular piece ends with an instance of a black youth approaching a policeman who was also playing the game, a positive moment of interaction as they helped each other pursue in-game objectives.

It is said that every technology cuts both ways. We can hope that experience and consideration will lead both players and Nintendo to evolve the positive potential of augmented reality, perhaps with a bit greater respect for user privacy.