Alexa called as witness?

“Alexa, tell me, in your own words, what happened on the night in question.” … actually the request is more like “Alexa, please replay the dialog that was recorded at 9:05 PM for the jury.” The case is in Bentonville, Arkansas, and the charge is murder. Since an Echo unit was present, Amazon has been asked to disclose whatever information might have been captured at the time of the crime.

Amazon indicates that the Echo keeps less than sixty seconds of recorded sound, so it may not have that level of detail; but presumably a larger database of requests and responses also exists for the night in question. Amazon has provided some data about purchase history, but is waiting for a formal court document before releasing any additional information.

This raises the question of how such devices might respond to the apparent sounds of a crime in progress. “Alexa, call 911!” is pretty clear, but what about “Don’t shoot!” (or other phrases that might be real, or merely overheard from a movie in the background)? An interesting future awaits us.

Who’s Monitoring the Baby Monitors?

Guest Blog entry by Cassie Phillips

With the recent record-breaking distributed denial of service (DDoS) attacks carried out with hijacked Internet of Things (IoT) devices, the woeful state of IoT security and privacy is finally achieving some public recognition. Just recently, distinguished security experts testified before US House of Representatives subcommittees on the dangers of connected devices and the rationale for government regulation to address the security risks.

But regulation is at best a long way off, if it is coming at all. It is vital that owners of these devices understand that although they may see no direct consequences when their hijacked IoT devices are drafted into zombie attack networks, many other security and privacy issues are inherent in these devices. Simply put, when we introduce connected devices into our homes and lives, we are risking our privacy and safety. Just one of the horrific risks can be seen in the use of baby monitors, nanny cams, security cameras, and similar devices.

There has been a sharp increase in incidents of hijacked baby monitors. Some of these hacked devices were abused to prank families by playing strange music. But too many have been used to spy on sleeping children—so much so that websites dedicated to streaming hijacked nanny cam views have sprung up, clearly serving the frightening hunger of some deeply disturbed predators. And in one particularly twisted case, a toddler kept telling his parents that he was frightened of the bad man in his baby monitor. To their horror, his parents discovered that it was no childish nightmare; a man was tormenting their son night after night after night through the baby monitor.

These cases demonstrate that the risks are not simply anonymous breaches of privacy. The safety of children and families can be entirely violated. It is all but certain that eventually a predator will see enough through the eyes of a baby monitor to identify, target, and hunt a child in the real world, with tragic consequences. Perhaps more tragic still, only then will lawmakers wise up to the risks and demand action. And only then will the manufacturers of these products promise to fix the problems (though certainly not without protesting that, because everyone else made insecure products, they were in line with industry standards and not really to blame).

In short, though we may demand action from lawmakers or responsibility from manufacturers, at this point only parents can reasonably take any action at all to protect their families. The knee-jerk solution may be to throw all of these devices out, but that would entirely ignore the benefits of these products and the ways in which they can still save lives. The best solution today is for parents to take charge of the situation themselves: purchase more reputable products, change default passwords, and use network security tools. Secure Thoughts (where Cassie is a writer) has evaluated VPN technology that can be used to minimize this abuse in the home. Parents should also remain informed and vigilant.

With the rapid development of the IoT, we’re likely to encounter new risks on a regular basis. And until there is a global (or at least national) policy regarding the security specifications of these devices, we are going to have to secure them ourselves.

About the author: Cassie Phillips is a technology blogger at Secure Thoughts who’s passionate about security. She’s very concerned about the effect the rapidly-expanding IoT will have on our privacy and safety.

Privacy and Security

Guest Post from: Marvi Islam

Let me start with privacy and link it to security. All of us know about the privacy settings on Facebook, and we like them because we can hide the things we do and the people we’re with from our family members. But wait, what about security? How is privacy linked to security?

Let’s leave the digital platform and move our focus to our daily lives. We need security in our banks, schools, public places, and even in our homes and parks. But have you ever wondered what price we pay for this supposed blanket of security? Privacy. Let me reiterate: security at the price of privacy. Those cute little things we see on the ceilings of our school corridors, the ones we call “CCTV,” are installed for our security. But security from whom? No one bothers to ask. Maybe the authorities want to tape everything in case something bad happens, so that they can go through the tapes and catch perpetrators red-handed. But they are taping every single thing, and we don’t take this as a breach of our privacy?

These tapes have been misused a number of times, causing unpleasant incidents, and yet we accept it. There’s a famous proverb in Hindi that translates to “you have to sacrifice one thing to get another.” Here we sacrifice our privacy to get security. And with self-driving cars grabbing all the attention, still more data will be collected so we can stay connected and, apparently, “secure.”

Similarly, some companies check what their employees are up to and what they are doing on their computers while at work. From the company’s perspective this is to avoid a possible breach of sensitive data, but is such constant monitoring even ethical? So, does it really have to be a trade-off, security for privacy and vice versa?

Marvi Islam is from Islamabad, Pakistan, and studies at Capital University of Science and Technology, Islamabad. https://www.facebook.com/marvi.islam

T&S Magazine September 2015 Contents

Volume 34, Number 3, September 2015

4 President’s Message
Coping with Machines
Greg Adamson
Book Reviews
5 Marketing the Moon: The Selling of the Apollo Lunar Mission
7 Alan Turing: The Enigma
10 Editorial
Resistance is Not Futile, nil desperandum
MG Michael and Katina Michael
13 Letter to the Editor
Technology and Change
Kevin Hu
14 Opinion
Privacy Nightmare: When Baby Monitors Go Bad
Katherine Albrecht and Liz McIntyre
15 From the Editor’s Desk
Robots Don’t Pray
Eugenio Guglielmelli
17 Leading Edge
Unmanned Aircraft: The Rising Risk of Hostile Takeover
Donna A. Dulo
20 Opinion
Automatic Tyranny, Re-Theism, and the Rise of the Reals
Sand Sheff
23 Creating “The Norbert Wiener Media Project”
J. Mitchell Johnson
25 Interview
A Conversation with Lazar Puhalo
88 Last Word
Technological Expeditions and Cognitive Indolence
Christine Perakslis

SPECIAL ISSUE: Norbert Wiener in the 21st Century

33 Guest Editorial
Philip Hall, Heather A. Love and Shiro Uesugi
35 Norbert Wiener: Odd Man Ahead
Mary Catherine Bateson
37 The Next Macy Conference: A New Interdisciplinary Synthesis
Andrew Pickering
39 Ubiquitous Surveillance and Security
Bruce Schneier
41 Reintroducing Wiener: Channeling Norbert in the 21st Century
Flo Conway and Jim Siegelman
44 Securing the Exocortex*
Tamara Bonaci, Jeffrey Herron, Charles Matlack, and Howard Jay Chizeck
52 Wiener’s Prefiguring of a Cybernetic Design Theory*
Thomas Fischer
60 Norbert Wiener and the Counter-Tradition to the Dream of Mastery
D. Hill
64 Down the Rabbit Hole*
Laura Moorhead

Features

74_ Opening Pandora’s 3D Printed Box
Phillip Olla
81_ Application Areas of Additive Manufacturing
N.J.R. Venekamp and H.Th. Le Fever

*Refereed article.

T&S Magazine June 2015 Contents

Volume 34, Number 2, June 2015

3 ISTAS 2015 – Dublin
4 President’s Message
Deterministic and Statistical Worlds
Greg Adamson
5 Editorial
Mental Health, Implantables, and Side Effects
Katina Michael
8 Book Reviews
Reality Check: How Science Deniers Threaten Our Future
Stealing Cars: Technology & Society from the Model T to the Gran Torino
13 Leading Edge
“Ich liebe Dich UBER alles in der Welt” (I love you more than anything else in the world)
Sally Applin
Opinion
16 Tools for the Vision Impaired
Molly Hartman
18 Learning from Delusions
Brian Martin
21 Commentary
Nanoelectronics Research Gaps and Recommendations*
Kosmas Galatsis, Paolo Gargini, Toshiro Hiramoto, Dirk Beernaert, Roger DeKeersmaecker, Joachim Pelka, and Lothar Pfitzner
80 Last Word
Father’s Day Algorithms or Malgorithms?
Christine Perakslis

SPECIAL ISSUE—Ethics 2014/ISTAS 2014

31 Guest Editorial
Keith Miller and Joe Herkert
32 App Stores for the Brain: Privacy and Security in Brain-Computer Interfaces*
Tamara Bonaci, Ryan Calo, and Howard Jay Chizeck
40 The Internet Census 2012 Dataset: An Ethical Examination*
David Dittrich, Katherine Carpenter, and Manish Karir
47 Technology as Moral Proxy: Autonomy and Paternalism by Design*
Jason Millar
56 Teaching Engineering Ethics: A Phenomenological Approach*
Valorie Troesch
64 Informed Consent for Deep Brain Stimulation: Increasing Transparency for Psychiatric Neurosurgery Patients*
Andrew Koivuniemi
71 Robotic Prosthetics: Moving Beyond Technical Performance*
N. Jarrassé, M. Maestrutti, G. Morel, and A. Roby-Brami

*Refereed Articles

Toys, Terrorism and Technology

Recent attacks on citizens in all too many countries have raised the question of creating back-doors in encrypted communications technology. A November 22 NY Times article by Zeynep Tufekci, “The WhatsApp Theory of Terrorism,” does a good job of explaining some of the flaws in simplistic, government-mandated back-doors. The short take: bad guys have access to tools that do not need to follow any government regulations, and bad guys who want to hack your systems can use any backdoor that governments do mandate. There is no win for protection here, only a big loss of it.

Toys? The Dec. 1 Wall Street Journal covered “Toy Maker Says Hack Accessed Customer Information.” While apparently no social security or credit card data was obtained, there is value in having names, birth dates, and the like for creating false credentials. How does this relate to the terrorist threat? In two ways, actually:

  1. There are few, if any, systems that hackers won’t target, so a good working assumption is that someone will try to ‘crack’ it.
  2. Technologists, and in particular software developers, need to be aware of, consider, and incorporate appropriate security requirements into EVERY online system design.

We are entering the era of the Internet of Things (IoT), with many objects now participating in a globally connected environment. There are no doubt some advantages (at least for marketing spin) claimed for each such object, and there will be real advantages for some. New insight may be discovered through the massive amount of data available; for example, can we track global warming via the use of IoT-connected heating and cooking devices? However, there will be potential abuses of both individual objects (the toys above) and aggregations of data. Software developers and their management need to apply worst-case threat analysis to determine the risks and requirements for EVERY connected object, as the sketch below illustrates.
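What might such a worst-case analysis look like in practice? Here is a minimal STRIDE-style threat-model sketch in Python; the connected-toy assets and questions are hypothetical, invented for illustration rather than drawn from any real product audit.

    # Minimal STRIDE threat-model sketch for a hypothetical connected toy.
    # Every asset/category cell left unanswered is a risk nobody has analyzed.

    STRIDE = [
        "Spoofing", "Tampering", "Repudiation",
        "Information disclosure", "Denial of service", "Elevation of privilege",
    ]

    # Hypothetical assets and the worst-case questions asked about them so far.
    assets = {
        "child voice recordings": {
            "Information disclosure": "Could a stranger stream or replay audio?",
            "Tampering": "Could an attacker inject audio the child hears?",
        },
        "account credentials": {
            "Spoofing": "Are default or hard-coded passwords in use?",
            "Elevation of privilege": "Does one leaked key expose every device?",
        },
        "cloud telemetry channel": {
            "Denial of service": "Can the device be conscripted into a botnet?",
            "Repudiation": "Is there enough logging to investigate abuse?",
        },
    }

    def report(assets):
        """Print the checklist, flagging STRIDE categories not yet considered."""
        for asset, threats in assets.items():
            print(f"Asset: {asset}")
            for category in STRIDE:
                print(f"  {category}: {threats.get(category, 'NOT YET ANALYZED')}")

    report(assets)

The value of the exercise is in the “NOT YET ANALYZED” lines: they make visible the questions no one has asked about each asset.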

Can terrorists, or other bad guys, use toys? Of course! There are indications that Xbox and/or PlayStation consoles were among the networked devices used to coordinate some of the recent attacks. Any online environment that allows users to share data or objects can be used as a covert communications channel. Combining steganography with Shutterfly, Instagram, Minecraft, or any other site where you can upload or manipulate a shareable image yields such a channel; a minimal sketch of the technique follows. Pretending we can protect them all is a dangerous delusion.
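To see how little it takes, here is a minimal sketch of least-significant-bit (LSB) image steganography in Python. It assumes the Pillow imaging library; the function names, file names, and null-byte terminator are illustrative choices, and real covert channels are far more sophisticated and better hidden.

    # Minimal LSB steganography sketch (assumes Pillow: pip install Pillow).
    # Hides a short null-terminated message in the low bits of an RGB image.
    from PIL import Image

    def embed(cover_path, out_path, message):
        """Write each bit of the message into a pixel channel's low bit."""
        img = Image.open(cover_path).convert("RGB")
        bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8") + b"\x00")
        flat = [channel for pixel in img.getdata() for channel in pixel]
        if len(bits) > len(flat):
            raise ValueError("message too long for this cover image")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | int(bit)  # overwrite the least-significant bit
        img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
        img.save(out_path, "PNG")  # lossless format, so the low bits survive

    def extract(stego_path):
        """Read low bits eight at a time until the null terminator appears."""
        flat = [channel for pixel in Image.open(stego_path).convert("RGB").getdata()
                for channel in pixel]
        out = bytearray()
        for i in range(0, len(flat) - 7, 8):
            byte = 0
            for channel in flat[i:i + 8]:
                byte = (byte << 1) | (channel & 1)
            if byte == 0:
                break
            out.append(byte)
        return out.decode("utf-8")

Calling embed("vacation.png", "hidden.png", "meet at dawn") produces an image indistinguishable to the eye from the original, and extract("hidden.png") recovers the message. A lossless format like PNG is essential here; lossy JPEG compression would scramble the low-order bits.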

Is your employer considering IoT security?  Is your school teaching about these issues?

Killer Robots (again?)

The International Joint Conference on Artificial Intelligence in July announced an open letter on autonomous weapons, an Open Letter from AI & Robotics Researchers, which has probably passed the 20,000-signature mark by now. (Wouldn’t you like your name on a letter signed by Stephen Hawking and Elon Musk, among other impressive figures?) This touches on the cover topic of the Spring 2009 issue of SSIT’s Technology and Society Magazine, whose cover image just about says it all.

The topic of that issue was lethal robots. The letter suggests that letting AI software decide when to initiate fatal actions is not a good idea. Specifically: “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

Unfortunately, I cannot think of any way to actually prevent the development of such systems by organizations that would like to pursue the purposes listed above, for which killer robots are ideally suited. Perhaps you have some thoughts? How can we make these not just “not beneficial,” but actually discourage their development? Or is that even possible?

SSIT is a sponsor of a new IEEE Collabratec community on CyberEthics and CyberPeace. I encourage you to join this community (which is not limited to IEEE members) and contribute to the discussion there.

Technologists who Give a Damn?

I’ve been using Zen and the Art of Motorcycle Maintenance in classes for a while now. One key message of the book is that professionals (well, everybody) need to care about their work; perhaps, in Zen terms, to be mindful while they work. The author asserts that one reason technology is so alienating nowadays is that the lack of care is evident in its workmanship, robustness, etc.

I’ve also been working on an update of the SSIT Strategic Plan, and one element of that discussion has been what catchphrase we should use, say, on business cards. IEEE’s is “Advancing technology for humanity,” which is a good one. Currently we are using “Where Technology and Society Talk,” but it is tempting to use “Technologists who Give a Damn.” It is a bit demeaning to imply that some (many?) don’t, but unfortunately this is at least occasionally true.

There are at least two levels of caring. The obvious one for SSIT is paying attention to the social impact of inventions and products (the “should we make it” as opposed to the “how we make it”). There is a lower level that is also critical; in software we might ask, “is this code elegant?” Oddly, there seems to be a relationship between underlying elegance and quality. A clean, simple design often works better than a ‘hack’, and it takes both a level of mastery and a level of mindfulness to accomplish. Some number of cybersecurity holes are the result of code where folks didn’t care enough to do it right. No doubt many “blue screen of death” displays and other failures and frustrations emerge from this same source. Often management is under pressure, or lacks awareness, and is satisfied with shipping the product rather than making sure it is done well. I’m not aware of any equivalent in most development facilities of the Japanese “line stop buttons” that make quality a ubiquitous responsibility. The reality is that we need technologists who invent and produce products that are right socially and done right technically; technologists who embrace “care” at all levels.

A retired career counselor from the engineering school at one of our Ivy League universities, who is in my Zen class, observed that our education is more focused on ‘career skills’ than ‘quality’, and may be suppressing students’ sense of care. We then observed that this apparent lack of care, evidenced in so many consumer products, might be a factor in why girls are choosing not to enter STEM education and careers.

I suppose the question that remains is: do we care?

Who is Driving My Car (revisited)

Apparently my auto insurance company has not been reading my recent blog entries. They have introduced a device, “In-Drive,” that will monitor my driving habits and provide a discount (or an increase) in my insurance rates.

There are a few small problems. The device plugs into the car’s diagnostic port, which would allow it, or a hacker, to take control of the car (brakes, acceleration, etc.; see the prior blog entry). It is connected to the mothership (ET phones home), and that channel can be used both ways, so the hacker who takes over my car can be anywhere in the world. I can think of three scenarios where this is actually feasible:

  1. Someone wants to kill the driver (very focused, difficult to detect).
  2. Blackmail – bad guys crash a couple of cars, or threaten to, and demand payment to avoid mayhem. (What would the insurance company CEO say to such a demand? Don’t they have insurance for this?)
  3. Terrorism – while many cyber attacks do not yield the “blood on the front page” impact that terrorists seek, this path can. Imagine ten thousand cars all accelerating and losing their brakes at the same time; that would probably get the desired coverage.

As previously mentioned, proper software engineering (now a licensable profession in the U.S.) could minimize this security risk.

Then there is privacy. The insurance company’s privacy policy does not allow them to collect the data that their web page claims this device will collect, so clearly privacy was an afterthought in this case. Exactly what data is collected is unclear: there is a statement about the type of data collected and then, a few FAQs later, a contradictory indication that location data is only accurate to within a forty-square-mile area, except maybe when it is more accurate. What is stored, for what period of time, accessible to which interested parties (say, a divorce lawyer), and with what protections, is unclear.

A different insurance company, Anthem, suffered a major attack that compromised identity information (at least) for a large number of persons. I’m just a bit skeptical that my auto insurance company has analyzed that situation and upgraded its systems to avoid similar breaches and loss of data. For those wondering what types of privacy policies might make sense, I encourage you to review the OECD privacy principles (collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability) and the accompanying examples. Organizations that are actually concerned with privacy would cover all of these bases, at least in their privacy statements. (Of course they can do this and still have highly objectionable policies, or change their policies without notice.)

Philip Hall

Homebase location: Ann Arbor, Michigan, USA
Email: philip@faerberhall.com

SSIT Roles (and years):
2013-15 DL
2013-17 BoG Member
2015 Chair, Conferences & Events
2014-15 DL Program Chair
2014 Chapters Chair
2013-14 Chair, Australia Chapter

Relevant IEEE Roles:
2014-15 SSIT Rep, IEEE-USA Committee on Transportation & Aerospace Policy (CTAP)
2015 Member, AESS & Vice Chair, AESS UAV Technical Panel

Other Related Activities/Interests:
Interested in (1) national security and societal implications of emerging technologies; (2) the impact of climate variability on water, energy, and food security; and (3) technologies for sustainability.

Category/Topic tags: education, policy, security, privacy, aerospace, emerging technologies, autonomous vehicles