Who is Driving Your Car?

A recent CBS Sixty Minutes program interviewed folks at DARPA, including a demonstration of how a modern computer-laden car could be hacked and controlled.

Computers in cars are not new, even the dozens that we see in current models, and they have been interconnected for some time as well.  Connecting your car to the network is a more recent advance; “OnStar” is one variation that has been on board for a while.  The ads for this have suggested the range of capabilities: unlock your car for you, turn on your ignition, detect that you may have been in an accident (air bag deployed, plus other monitoring capabilities), and of course they know where your car is; if it is stolen, they can disable it. Presumably a hacker can do all of these as well, and the DARPA demonstration shows some of the implications: stopping the car, accelerating it, and so on.  Criminals have already acquired armies of zombie computers to use in attacking their targets, for blackmail, and more.  Imagine having a few hundred zombie cars in a major city like LA, enabling either terror or blackmail.

An additional sequence on Sixty Minutes shows the hacking of a drone, and perhaps equally important, a re-programmed drone that is not (as easily) accessed or hacked.  Behind this is an issue of software engineering and awareness.  The folks making drones, cars, and other Internet of Things (IoT) objects are not ‘building security in’.  What is needed, for each IoT-enabled device, is an awareness of the security risks involved: not just abuse of that particular device, but also how that abuse might impact other devices in the network, or the health and safety of the user and the public.

A recent dialog with some IEEE-USA colleagues surfaced the question of where software engineering licensing (professional engineers) might be required … and we used video games as an example of a point where it did not seem appropriate … of course, that all breaks down if your video game can take over your car or your pacemaker.

Too Close for Comfort? Detecting your presence.

A group of authors in the August 2014 issue of IEEE Computer outlines some pros, cons, and examples of proximity-sensing technology that initiates advertising or other actions, and may report your presence to some data collection process. The article is called “The Dark Patterns of Proxemic Sensing.”

There are simple examples that most folks have encountered: the faucet that turns on when you put your hands near it, followed by the automated hand dryer or paper towel dispenser.  The paper identifies some current examples that many of us may not have encountered: the mirror that presents advertising, a wall of virtual “paparazzi” that flash cameras at you accompanied by cheering sounds, and urinals that incorporate video gaming. Some of these systems are networked, even connected to the Internet.  Some interact anonymously; others are at least capable of face or other forms of recognition.

The article identifies eight “dark” aspects of this proximity interaction:

  1. Captive audience – unexpected or undesired interactions in situations where the individual must go for other reasons.
  2. Attention grabbing – detection and interaction allow these systems to distract the target individual, which may be problematic or just annoying.
  3. Bait and switch – initiating interaction with an attractive first impression, then switching to a quite different agenda.
  4. Making personal information public – for example, displaying or announcing your name upon recognition.
  5. We never forget – tracking an individual from one encounter to the next, even spanning locations for networked systems.
  6. Disguised data collection – providing (personalized) data back to some central aggregation.
  7. Unintended relationships – is that person next to you related in some way — oh, there she is again next to you at a different venue…
  8. Milk factor – forcing a person to go through a specific interaction (move to a location, provide information …) to obtain the promised service.

Most of these are traditional marketing/advertising concepts, now made more powerful by automation and ubiquitous networked systems.  The specific emerging technologies are one potentially disturbing area of social impact.  A second is the more general observation that activities we have historically considered innocuous, or even desirable, may become more problematic with automation and de-personalization.  The store clerk might know you by name, but do you feel the same way when the cash register or the automatic door knows you?

Issues in this area are also discussed in the Summer 2014 issue of Technology and Society, with “Omnipresent Cameras” and “Personal Safety Devices” being relevant articles in that issue.

Enslaved by Technology?

A recent “formal” debate in Australia, “We are Becoming Enslaved by our Technology,” addresses this question (90 min): a look at the upside and downside of technological advances, with three experts addressing each side of the question.

One key point made by some of the speakers is the lopsided advantage that technology may give to government abuse.  One example is captured in the quote “a cell phone is a surveillance device that also provides communications” (quoted by Bernard Keene).  In this case it is the watcher who benefits from the phone’s continuous location, connectivity, app, and search presence.

Much of the discussion focuses on the term “enslave,” as opposed to “control,” and also on the question of choice: to what degree do we have “choice,” or are we perhaps trying to absolve ourselves of responsibility by putting the blame on technology?

Perhaps the key issue is the catchall “technology.”  There are examples of technology, vaccines for instance, where the objectives and ‘obvious’ uses are beneficial (though one can envision abuse by corporations or countries creating vaccines).  And then there are the variations in weapons, eavesdropping, big-data analysis vs. privacy, etc.  Much of technology is double-edged, with impacts both “pro and con” (and of course individuals have different views of what constitutes a good impact).

A few things are not debatable (IMHO):
1. The technology is advancing rapidly on all fronts.
2. The driving interests tend to be corporate profit, government agendas, and in some cases inventor curiosity, and perhaps at times altruistic benefits for humanity.
3. There exists no coherent way to anticipate the unintended consequences, much less predict the abuses or discuss them in advance.

So, are we enslaved? …. YOU WILL RESPOND TO THIS QUESTION! (Oh, excuse me…)

SIPC ’14 — Social Implications of Pervasive Computing for Sustainable Living – March 2014

The Third IEEE International Workshop on the Social Implications of Pervasive Computing for Sustainable Living (SIPC ’14) is being organized in conjunction with the Twelfth IEEE International Conference on Pervasive Computing and Communications – PerCom 2014 (http://www.percom.org/).

The conference will be held in Budapest, Hungary, on 24-28 March 2014.

The workshop aims to discuss the social implications of pervasive technology used to support or facilitate a number of multi-disciplinary areas for sustainable living.

Potential workshop attendees are invited to submit papers of up to 6 pages that address at least one relevant social implication of pervasive computing and discuss how researchers can influence the direction of development. The papers will be peer-reviewed by at least two members of the program committee, and chosen according to their relevance to the scope of the workshop, the quality and originality of the submission, and their ability to stimulate and balance discussions.

The organizers will try to consider as many submissions as possible to help assemble a large community of researchers interested in the social challenges of pervasive computing. Papers will be included and indexed in the IEEE digital library (Xplore), showing their affiliation with IEEE PerCom.

For details, please visit: http://www.sipc2014.blogspot.com