Predictive Fiction

A recent anthology of “climate fiction”, Loosed Upon the World, projects climate change forward some years into dystopian scenarios.  The editor, John Joseph Adams, asserts: “Fiction is a powerful tool … perhaps [we can] humanize and illuminate the issue in ways that aren’t as easy to do with only science and cold equations.”

I have been an advocate of near-term science fiction, which I refer to as predictive fiction, as a tool to explore the “what if” scenarios that may result from technology, hopefully allowing us to avoid the negative impacts. Unfortunately, this particular anthology is dealing with a current trajectory that is more an exploration of “when, what then?”

But some of the basic issues that we technologists face enter the spotlight, albeit one we may not like.  In the foreword, Paolo Bacigalupi has a painful message for us techies (many of whom fall into his category of “techno-optimists”): “Engineers don’t grow up thinking about building a healthy soil eco-system, or trying to restore some estuary, … to turn people into better long-term planners, or better educated and informed citizens, or creating better civic societies.”   I don’t fully agree with Paolo; it is more accurate to state that “engineers don’t get paid to …” and perhaps “the project requirements do not address …” And occasionally we have technologists who resist the corporate momentum and try to get their employer to “do the right thing”.  SSIT seeks to honor such courage with the “Carl Barus Award for Outstanding Service in the Public Interest” (nominations always welcome).

But back to the future, I mean the fiction. Paolo also observes: “…imaginative literature is mythic. The kinds of stories we build, the way we encourage people to live into those myths and dream the future — those stories have power. Once we build this myth that the rocket-ship and the techno-fix is the solve for all our plights and problems, that’s when we get ourselves in danger. It’s the one fantasy that almost certainly guarantees our eventual self-destruction.”

I suspect we need a good dose of reality, perhaps in the guise of predictive fiction.

Your TV might be Binge-watching you!

VIZIO is reportedly paying fines for using users’ TVs to track their viewing patterns in significant detail, as well as associating this, via IP address, with demographic data including age, sex, income, marital status, household size, education level, home ownership, and home values.

Presumably this might have been avoided if VIZIO had presented users with a “privacy statement” or “terms of use” when they installed their TVs.  But their failure to obtain even the appearance of consent put them in this situation.

It has been clear that all “free” media (and many paid channels), whether TV, cable, radio, or Internet streaming, want to track this information.  On one hand they can use it to provide “a better user experience” (showing you the ads and suggested programs that match your demographics) … and of course the flip side is also true: selling your data to third parties (a.k.a. ‘trusted business partners’) so they can be more effective at interacting with you is part of the game.

Now let’s step it up a notch.  Your TV (or remote controller) may use voice recognition, often using the “mother ship” resources for the AI analysis of what you have requested. That is, your voice is sent back to servers that interpret and respond.  This leads to another level of monitoring: some of your characteristics might be inferred from your voice, and others from background sounds or voices, and even more if the recording device just happens to track you all the time.  “Siri, are you listening in again?” And then add a camera … now the fun can really start.
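To make that concrete, consider a minimal sketch of the kind of payload a voice device might ship back to its servers with every utterance. All of the field names here are hypothetical, not any vendor’s actual API; the point is that far more than your words can ride along:

```python
import json
import time

def build_voice_payload(audio_bytes, device_id):
    """Illustrative only: metadata a hypothetical voice assistant
    might bundle with each recorded utterance it sends to the cloud."""
    return {
        "device_id": device_id,          # ties the audio to a household
        "timestamp": time.time(),        # when you spoke
        "audio_len": len(audio_bytes),   # the recording itself would ride along
        "wifi_ssid": "HomeNetwork",      # hypothetical local-context hint
        "volume_db": -32.1,              # background sound level
    }

payload = build_voice_payload(b"\x00" * 16000, "tv-living-room")
print(json.dumps(payload, indent=2))
```

Each request is an opportunity to collect context the user never explicitly offered.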

Tele-Kiss … hmmm

London haptic researchers have developed a device that attaches to a cell phone and allows remote persons to kiss, as described in an IEEE Spectrum article. And since “a picture is worth a thousand words”:

[Image: A woman kisses a plastic pad attached to her smartphone to send a virtual kiss to the person she's video chatting with.]

No doubt a wider range of haptic appliances will follow. A major US phone company used to have the slogan “reach out and touch someone”; perhaps our mobile devices are headed that way.

Online physical attack

It should be noted that an early, if not the first, instance of a physical attack on a person carried out by online means has occurred: social media was used to trigger an epileptic seizure.  This concept has surfaced in science fiction, notably in Neal Stephenson’s Snow Crash (which also inspired the creation of Google Earth).  In that case, persons are exposed to an attack while in virtual reality that causes them to become comatose.

With the Internet of Things, and the potential for projecting “force” (or at least damage-causing light/sound) over the network, a new level of abuse, and of need for protection, is emerging.  One key in this particular case, and into the future, might be to require true identity disclosure as a criterion for accepting content over the net.

Alexa called as witness?

“Alexa, tell me, in your own words, what happened on the night in question.” … actually the request is more like “Alexa, please replay the dialog that was recorded at 9:05 PM for the jury.”  The case is in Bentonville, Arkansas, and the charge is murder. Since an Echo unit was present, Amazon has been asked to disclose whatever information might have been captured at the time of the crime.

Amazon indicates that the Echo keeps less than sixty seconds of recorded sound, so it may not have that level of detail, but presumably a larger database of requests and responses exists for the night in question as well.  Amazon has provided some data about purchase history, but is waiting for a formal court document to release any additional information.
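A sixty-second retention window is easy to picture as a rolling buffer that continuously discards the oldest audio. Here is a minimal sketch; the sample rate and the structure are assumptions for illustration, not Amazon’s implementation:

```python
from collections import deque

SAMPLE_RATE = 16000        # samples per second (assumed)
WINDOW_SECONDS = 60        # the retention window

class RollingAudioBuffer:
    """Keeps at most WINDOW_SECONDS of samples; older audio is discarded."""
    def __init__(self):
        self.buf = deque(maxlen=SAMPLE_RATE * WINDOW_SECONDS)

    def feed(self, samples):
        self.buf.extend(samples)   # deque silently drops the oldest samples

    def seconds_held(self):
        return len(self.buf) / SAMPLE_RATE

rb = RollingAudioBuffer()
rb.feed([0] * SAMPLE_RATE * 90)    # feed 90 seconds of audio
print(rb.seconds_held())           # only the most recent 60 remain
```

Anything that scrolled out of such a buffer before a trigger word would be gone, which is why the request history, rather than the raw audio, may be the richer evidence.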

This raises the issue of how they might respond to apparent sounds of a crime in progress. “Alexa, call 911!” is pretty clear, but what about “Don’t shoot!” (or other phrases that might be ‘real’ or ‘overheard’ from a movie in the background)?  An interesting future awaits us.

Who’s Monitoring the Baby Monitors?

Guest Blog entry by Cassie Phillips

With the recent, record-breaking distributed denial of service (DDoS) attacks carried out with hijacked internet-of-things (IoT) devices, the woeful state of IoT security and privacy finally is achieving some public recognition. Just recently, distinguished security experts testified to US House of Representatives subcommittees on the dangers of connected devices, and the rationale for government regulation to address the security risks.

But regulation is at best a long way off, if coming at all. It is vital that owners of these devices understand that although they may see no direct consequences of hijacked IoT devices being drafted into zombie attack networks, there are many other security and privacy issues inherent in these devices. Simply put, when we introduce connected devices into our homes and lives, we are risking our privacy and safety. Just one of the horrific risks can be seen in the use of baby monitors, nanny cams, security cameras and similar devices.

There has been a sharp increase in incidents of hijacked baby monitors. Some of these hacked devices were abused to prank families by playing strange music. But too many have been used to spy on sleeping children—so much so that websites dedicated to streaming hijacked nanny cam views have sprung up, clearly serving the frightening hunger of some deeply disturbed predators. And in one particularly twisted case, a toddler kept telling his parents that he was frightened of the bad man in his baby monitor. To their horror, his parents discovered that it was no childish nightmare; a man was tormenting their son night after night after night through the baby monitor.

These cases demonstrate that the risks are not simply anonymous breaches of privacy. The safety of children and families can be entirely violated. It is certain that eventually a predator will see enough through the eyes of a baby monitor to identify, target and hunt a child in the real world, with tragic consequences. And what is perhaps more tragic is that only then will lawmakers wise up to the risks and demand action. And only then will the manufacturers of these products promise to fix the problems (though certainly not without protesting that because everyone else made insecure products, they are in line with industry standards and not really to blame).

In short, though we may demand action from lawmakers or responsibility from manufacturers, at this point only parents reasonably can take any actions at all to protect their families. The knee-jerk solution may be to throw all of these devices out, but that would entirely ignore the benefits of these products and the ways in which they can still save lives. The best solutions today are for parents to take charge of the situation themselves. They can do this by purchasing more reputable products, changing their default passwords and using network security tools. Secure Thoughts (where Cassie is a writer) has evaluated VPN technology that can be used to minimize this abuse in the home. Parents should also remain informed and vigilant.
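Changing default passwords is the single cheapest of those steps, and it is easy to audit. A minimal sketch of such an audit follows; the device list and the set of “well-known defaults” are invented for illustration, not drawn from any real product database:

```python
# Illustrative audit: flag home devices still using well-known default passwords.
COMMON_DEFAULTS = {"admin", "password", "1234", "12345", "default", ""}

def find_default_credentials(devices):
    """devices: list of (name, password) pairs the owner controls.
    Returns the names of devices still using a factory default."""
    return [name for name, pw in devices if pw.lower() in COMMON_DEFAULTS]

home = [
    ("baby-monitor", "admin"),      # never changed: flagged
    ("nanny-cam", "Xk9!vR2q"),      # unique password: passes
    ("router", "password"),         # flagged
]
print(find_default_credentials(home))
```

The same idea, run at Internet scale against open camera ports, is exactly how the hijacking websites find their victims; running it against your own gear first closes that door.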

With the rapid development of the IoT, we’re likely to encounter new risks on a regular basis. And until there is a global (or at least national) policy regarding the security specifications of these devices, we are going to have to secure them ourselves.

About the author: Cassie Phillips is a technology blogger at Secure Thoughts who’s passionate about security. She’s very concerned about the effect the rapidly-expanding IoT will have on our privacy and safety.

Big Brother/Data 2016

The power of big data, AI/analytics, and subtle data collection is converging on a future only hinted at in Orwell’s 1984.  With the rapid developments on many fronts, it is not surprising that those of us who are only moderately paranoid have not been tracking it all. So here’s an update on some of the recent information on who is watching you and why:

Facebook (no surprise here) has been running personality quizzes that evaluate how your OCEAN score lines up.  That is: Openness, Conscientiousness, Extroversion, Agreeableness and Neuroticism.  These “free” evaluations are provided by Cambridge Analytica. The application of this data to political election influence is documented by the NY Times (subscription required) and quoted in part by others.  The short take is that your Facebook profile (name, etc.) is combined with your personality data, and with “onboarding” data from other sources such as age, income, debt, purchases, health concerns, car, gun and home ownership, and more.  Cambridge Analytica is reported to have records with 3 to 5 thousand data points on each of 230 million adult Americans, which is most of us.

How do they use this data?  Psychographic micro-targeted advertising is the recent application, seeking to influence voting in the U.S. election.  They only support Republican candidates, so other parties will have to develop their own doomsday books.  There is no requirement that the use of the quizzes be disclosed, nor that the “ads” be identified as political or approved by any candidate.  The ads might not appear to have any specific political agenda; they might just point out news (or fake news) stories that play to your specific personality and have been test-marketed to validate the influence they will have on the targeted voter(s).  This may inspire you to get out and vote, or to stay home and not bother, depending on which candidate(s) you support (based on your social media streams, or on more generalized characteristics if you personally have not declared your preferences).  The impact? Quite possibly the U.S. presidency.
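The mechanism is simpler than it sounds. As an illustration (the scores, the trait-to-ad mapping, and the ad copy here are all invented, not Cambridge Analytica’s actual model), micro-targeting boils down to routing a different tested message to each personality profile:

```python
# Illustrative psychographic targeting: choose the ad variant whose tested
# appeal matches the voter's dominant OCEAN trait. All data is made up.
def pick_ad(ocean):
    """ocean: dict mapping trait name -> score in [0, 1]."""
    dominant = max(ocean, key=ocean.get)
    variants = {
        "openness": "ad stressing change and new ideas",
        "conscientiousness": "ad stressing order and tradition",
        "extroversion": "ad with a social call to action",
        "agreeableness": "ad stressing community and family",
        "neuroticism": "fear-based ad about security threats",
    }
    return variants[dominant]

voter = {"openness": 0.2, "conscientiousness": 0.4, "extroversion": 0.3,
         "agreeableness": 0.5, "neuroticism": 0.9}
print(pick_ad(voter))   # the anxious profile is routed the fear-based ad
```

Multiply this by thousands of data points per voter and test-marketed variants per message, and the scale of the influence machine becomes clear.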

But wait, that’s not all.

The U.K. is expanding its surveillance powers, requiring Internet companies to retain interaction/transaction records for a year, including every web site you have accessed. This apparently is partially in response to assertions that similar powers had foiled an ISIS attack in France. The range of use (or abuse) the UK government and its allies might apply remains to be seen (or, more likely, will remain hidden).

But consider what China is doing to encourage residents to be “sincere”. [Here is a serious limitation of my linguistic and cultural skills: no doubt there is a Mandarin word being used and translated as “sincere”, and it carries cultural implications that may not be evident in translation.]  Data collected to determine your “social credibility rating” includes: tax, loan, bill, and other payments (on time?); adherence to traffic rules; family planning limits; academic record; purchasing; online interactions; the nature of information you post online; volunteer activity; and even “filial piety” (respect for elders/ancestors). And the applications of such data?  So far 4.9 million airline tickets have been refused. Your promotion, or even your job opportunities, can be limited, with “sensitive” jobs (judges, teachers, accountants, etc.) subject to review. A high score will open doors, possibly including faster access to government services.  By letting citizens see their score, the state can encourage them to ‘behave themselves better’.  By not disclosing all of the data collected, nor all of the implications, the state can bully citizens into far greater sincerity than they might adopt if they were just trying not to break the law.

Your comments, thoughts and responses are encouraged, but remember — they are being recorded by others for reasons you may never know.  … Sincerely yours, Jim

Who do you want listening in at your home?

The Wall St. Journal has a note today comparing Amazon’s Echo and Google Home as voice-activated, in-home assistants.   This space is fraught with impacts on technology and society, from services that can benefit house-bound individuals to serious opportunities for abuse by hacking, for commercial purposes or governmental ones. To put it in a simple form: you are being asked to “bug your house” with a device that listens to every noise in the house.  Of course you may have already bugged your pocket with a device that is listening for the magic words “hey, Siri” (or the person next to you in the office, train, or restaurant may be carrying that “wire”). Robots that respond to “OK Google” or “Alexa” are expanding into our monitored domains. (What do folks named Alexa or Siri have to look forward to in this world? Would you name your child “OK Google”?)

The immediate use cases seem to be a cross between control of the “Internet of Things”, and the specific business models of the suppliers; online sales for Amazon Alexa, and more invasive advertising for Google. Not only can these devices turn on and off your lights, they can order new bulbs …ones that blink subliminal advertising messages (uh oh, now I’ve given someone a bad idea.)

From our technology and society perspective we need to look forward to the pros and cons of these devices. What high benefit services might be offered?  What risks do we run?  Are there policy or other guidelines that should be established? …. Please add your thoughts to the list …

Meanwhile I’m trying to find out why my new car’s navigation system keeps trying to take me to Scotland when I ask “Find McDonald’s”.


Bond Doesn’t Make the Ethics Cut

For those of us who have been enjoying the antics of 007, aka James Bond, and those of us in the real world who have been providing technology that helps our covert entities accomplish their missions, it is worthwhile to note that Alex Younger, head of the UK’s MI6 agency (which of course does not exist), indicates that Bond’s personality and activities would not meet their ethical standards.

It’s safe to say that James Bond wouldn’t get through our recruitment process and, whilst we share his qualities of patriotism, energy and tenacity, an intelligence officer in the real MI6 has a high degree of emotional intelligence, values teamwork and always has respect for the law… unlike Mr Bond.

27 Oct 2016 UK Telegraph article

A number of technologists are called upon to support covert, military or police organizations in their countries.  There is some comfort in thinking that such entities, including MI6 (yes, it is real), have some level of ethical standards they apply.  That does not exempt individuals from applying their own professional and other standards in their work as well.

AI Ethics

A growing area reflecting the impact of technology on society is ethics and AI.  This has a few variations: one is what is ethical in terms of developing or applying AI; the second is what is ethical for AIs.  (Presumably, for an AI to select an ethical versus unethical course of action, either it must be programmed that way, or it must learn what is ethical as part of its education/awareness.)

Folks playing in the AI ethics domain include a recent consortium of industry players (IBM, Google, Facebook, Amazon and Microsoft), the IEEE Standards folks, and the White House (with a recent white paper).

This is a great opportunity for learning about the issues: in the classroom; in developing deep background for policy and press folks, where concerns will emerge (consider self-driving cars, robots in warfare or police work, etc.); and with the general public, where misconceptions and misinformation are likely.  We see many movies where evil technology is a key plot device, and get many marketing messages on the advantages of progress.  Long-term, informed evolution in this area will require less simplistic perspectives on the opportunities and risks.

There is a one-day event in Brussels on Nov. 15, 2016 that will provide a current view of some of the issues and discussions.