Predictive Fiction

A recent anthology of “climate fiction”, Loosed Upon the World, projects climate change forward some years into dystopian scenarios. The editor, John Joseph Adams, asserts: “Fiction is a powerful tool … perhaps [we can] humanize and illuminate the issue in ways that aren’t as easy to do with only science and cold equations.”

I have been an advocate of near-term science fiction, which I refer to as predictive fiction, as a tool to explore the “what if” scenarios that may result from technology, hopefully allowing us to avoid the negative impacts. Unfortunately, this particular anthology deals with a trajectory already underway, making it less an exploration of “what if” than of “when, and what then?”

But some of the basic issues that we technologists face enter the spotlight, albeit one we may not like. In the foreword, Paolo Bacigalupi has a painful message for us techies (many of whom fall into his category of “techno-optimists”): “Engineers don’t grow up thinking about building a healthy soil eco-system, or trying to restore some estuary, … to turn people into better long-term planners, or better educated and informed citizens, or creating better civic societies.” I don’t fully agree with Paolo; it would be more accurate to say that “engineers don’t get paid to …” or perhaps “the project requirements do not address …” And occasionally we have technologists who resist the corporate momentum and try to get their employer to “do the right thing.” SSIT seeks to honor such courage with the “Carl Barus Award for Outstanding Service in the Public Interest” (nominations are always welcome).

But back to the future, I mean the fiction. Paolo also observes: “…imaginative literature is mythic. The kinds of stories we build, the way we encourage people to live into those myths and dream the future — those stories have power. Once we build this myth that the rocket-ship and the techno-fix is the solve for all our plights and problems, that’s when we get ourselves in danger. It’s the one fantasy that almost certainly guarantees our eventual self-destruction.”

I suspect we need a good dose of reality, perhaps in the guise of predictive fiction.

Tele-Kiss … hmmm

London haptic researchers have developed a device that attaches to a cell phone and allows remote persons to kiss, as described in an IEEE Spectrum article. And since “a picture is worth a thousand words”:

A woman kisses a plastic pad attached to her smartphone to send a virtual kiss to the person she's video chatting with.

No doubt a wider range of haptic appliances will follow. A major U.S. phone company used to have the slogan “reach out and touch someone”; perhaps our mobile devices are headed that way.

Who do you want listening in at your home?

The Wall St. Journal has a note today comparing Amazon’s Echo and Google Home as voice-activated, in-home assistants. This space is fraught with impacts on technology and society, ranging from services that can benefit house-bound individuals to serious opportunities for abuse, whether by hackers, for commercial purposes, or for governmental ones. To put it simply: you are being asked to “bug your house” with a device that listens to every noise in the house. Of course, you may have already bugged your pocket with a device that is listening for the magic words “hey, Siri” (or the person next to you in the office, train, or restaurant may be carrying that “wire”). Robots that respond to “OK Google” or “Alexa” are expanding into our monitored domains. (What do folks named Alexa or Siri have to look forward to in this world? Would you name your child “OK Google”?)
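To make the “bug your house” tradeoff concrete, here is a minimal sketch of the wake-word pattern these devices broadly follow. Every name in it is a hypothetical placeholder, and the actual internals of these products are not public; the point is only that the microphone is always feeding a local buffer, and the trust question is what happens once the trigger fires.

```python
# Minimal sketch of a wake-word loop (hypothetical; real Echo/Home
# internals are not public). The device continuously buffers room
# audio locally; audio leaves the house only after the trigger fires.
import collections

WAKE_WORD = "alexa"  # or "ok google", "hey siri"

def detect_wake_word(chunk: str) -> bool:
    # Placeholder for a small on-device hotword model.
    return WAKE_WORD in chunk.lower()

def send_to_cloud(audio: list) -> None:
    # This is the step you are being asked to trust.
    print(f"uploading {len(audio)} chunks for server-side processing")

def listen_forever(microphone) -> None:
    buffer = collections.deque(maxlen=50)  # rolling few seconds of audio
    for chunk in microphone:
        buffer.append(chunk)
        if detect_wake_word(chunk):
            send_to_cloud(list(buffer))

# Simulated microphone stream for illustration.
listen_forever(iter(["dinner conversation", "alexa, reorder light bulbs"]))
```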

The immediate use cases seem to be a cross between control of the “Internet of Things” and the specific business models of the suppliers: online sales for Amazon’s Alexa, and more invasive advertising for Google. Not only can these devices turn your lights on and off, they can order new bulbs … ones that blink subliminal advertising messages (uh oh, now I’ve given someone a bad idea).

From our technology and society perspective, we need to look ahead at the pros and cons of these devices. What high-benefit services might be offered? What risks do we run? Are there policy or other guidelines that should be established? Please add your thoughts to the list …

Meanwhile I’m trying to find out why my new car’s navigation system keeps trying to take me to Scotland when I ask “Find McDonald’s”.


Humans, Machines, and the Future of Work

De Lange Conference X on Humans, Machines, and the Future of Work
December 5-6, 2016 at Rice University, Houston, TX
For details, registration, etc., see http://delange.rice.edu/

  • What advances in artificial intelligence, robotics, and automation are expected over the next 25 years?
  • What will be the impact of these advances on job creation, job destruction and wages in the labor market?
  • What skills are required for the job market of the future?
  • Can education prepare workers for that job market?
  • What educational changes are needed?
  • What economic and social policies are required to integrate people who are left out of future labor markets?
  • How can we preserve and increase social mobility in such an environment?


“Remaining Human”

This must-watch short film by J. Mitchell Johnson (available on Vimeo) was produced with a small IEEE grant on the work of Norbert Wiener.
Launched October 21, 2016, at the IEEE ISTAS 2016 conference in Kerala, India. #norbert #wiener #cybernetics #communications #ethics #feedback #brain #machines #automation

For more see www.norbertwiener.org and www.norbertwiener.com


IEEE T&S Magazine, September 2016



ON THE COVER:

DON ADAMS AS MAXWELL SMART, WITH INFAMOUS “SHOE PHONE.” GENERAL ARTISTS CORPORATION- GAC-MANAGEMENT/PUBLIC DOMAIN/WIKIMEDIA.

Moving ICTD Research Beyond Bungee Jumping • Lost in Translation • Smartphones, Biometrics, and a Brave New World


Conference Announcement
ISTAS 2016, Kerala, India (free access)

President’s Message
Is Ethics an Emerging Property? (free access)
Greg Adamson

Editorial
Can Good Standards Propel Unethical Technologies? (free access)
Katina Michael

News and Notes
IEEE Technology and Society Magazine Seeks Editor-in-Chief (free access)

Book Reviews
How Not to Network a Nation (free access)
Loren Graham

Leading Edge
Internet Governance, Security, Privacy and the Ethical Dimension of ICTs in 2030 (free access)
Vladimir Radunovic

Commentary
The Paradox of the Uberveillance Equation (free access)
MG Michael

Can We Trust For-Profit Corporations to Protect Our Privacy?* (subscriber access)
Wilhelm E.J. Klein

Are Technologies Innocent?* (subscriber access)
Michael Arnold and Christopher Pearce

Opinion
ICTs and Small Holder Farming (free access)
Janet Achora

Industry Perspective
Smart Cities: A Golden Age for Control Theory? (free access)
Emanuele Crisostomi, Robert Shorten, and Fabian Wirth

Last Word
An Ounce of Steel: Crucial Alignments (free access)
Christine Perakslis

*Refereed articles.


SPECIAL ISSUE ON ISTAS 2015 – TECHNOLOGY, CULTURE, AND ETHICS

Guest Editorial – Future of Sustainable Development (free access)
Paul M. Cunningham

Technology-Enhanced Learning in Kenyan Universities* (subscriber access)
Miriam Cunningham

Moving ICTD Research Beyond Bungee Jumping* (subscriber access)
Andy Dearden and William D. Tucker

Expanding the Design Horizon for Self-Driving Vehicles* (subscriber access)
Pascale-L. Blyth, Miloš N. Mladenović, Bonnie A. Nardi, Hamid R. Ekbia, and Norman Makoto Su

Lost in Translation – Building a Common Language for Regulating Autonomous Weapons* (special reader access: free)
Marc Canellas and Rachel Haga

Smartphones, Biometrics, and a Brave New World* (subscriber access)
Peter Corcoran and Claudia Costache

Ethics, Children, and Biometric Technology* (subscriber access)
Darelle van Greunen

Intelligent Subcutaneous Body Area Networks* (subscriber access)
P.A. Catherwood, D.D. Finlay, and J.A.D. McLaughlin

Humanitarian Cyber Operations* (subscriber access)
Jan Kallberg


*Refereed articles.

To GO or Not to GO?

Pokemon Go has become a delightful and disturbing experiment in the social impact of technology. This new “free” software for smartphones implements an augmented reality, overlaying the popular game on the real world. Fans wander streets, byways, and public (and in some cases private) spaces, following the elusive characters on their smartphones to capture them “in world” or to collect virtual items. The uptake has been amazing, approaching Twitter in terms of user-hours just days after introduction. It has also added $12 billion to Nintendo’s stock value (almost doubling it).

Let’s start with “free” and the $12 billion. The trick is a no-holds-barred privacy policy. Not surprisingly, the game knows who you are and where you are. It also can access/use your camera, storage, email/phone contacts, and potentially your full Google account (email contents, Drive contents, etc.). The money comes because all of this is for sale, in real time. (“While you track Pokemon, Pokemon Go tracks you,” USA Today, 12 July 16.) Minimally, you can expect to see “Lure Modules” (a game component) used to bring well-vetted (via browser history, email, call history, disk content, etc.) customers into stores that then combine ad promotions with in-store characters. Perhaps offering your favorite flavor of ice cream, or drawing you into a lawyer’s office that specializes in the issues you have been discussing on email, or a medical office that … well, you get the picture, and those are just the legitimate businesses. Your emails from your bank may encourage less honest folks to lure you into a back alley near an ATM … a genre of crime that has only been rumored so far.
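As an illustration of the permissions point, and only an illustration (the scope names below are hypothetical stand-ins, not a verified capture of what the game actually requested), a toy audit that flags grants broader than an app’s function plausibly needs might look like this:

```python
# Toy permissions audit. Scope names are hypothetical stand-ins,
# not a verified capture of Pokemon Go's actual requests.
MINIMAL_SCOPES = {"basic_profile"}  # enough to sign a player in

requested_scopes = {
    "basic_profile",
    "email_read",      # hypothetical
    "contacts_read",   # hypothetical
    "full_account",    # the kind of broad grant that drew criticism
}

for scope in sorted(requested_scopes - MINIMAL_SCOPES):
    print(f"over-broad grant: {scope}")
```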

The July 13th issue of USA Today outlines an additional set of considerations. Users are being warned by police, property owners, and various websites, for various reasons. The potential for wandering into traffic, into a sidewalk obstruction, or over the edge of a cliff while pursuing an elusive virtual target is non-trivial (is there a murder plot hiding in here?). Needless to say, playing while driving creates a desperate need for self-driving cars. Since the targets change with time of day, folks are out at all hours, in all places, doing suspicious things; this triggers calls to police. Some memorial sites, such as Auschwitz and the Washington, D.C. Holocaust Memorial Museum, have asked to be excluded from the play-map. There are clearly educational opportunities that could be built into the game; tracing Boston’s Freedom Trail and requiring player engagement with related topics is one possible example. However, lacking explicit consideration of the educational context, there are areas where gaming is inappropriate. Also, some public areas are closed after dark, and the game may result in players trespassing in ways not envisioned by the creators, which may create unhealthy interactions with the owners, residents, etc., of those areas.

One USA Today article surfaces a concern that very likely was missed by Nintendo, one exacerbated by the recent deaths of black men in U.S. cities and the shooting of police in Dallas: “For the most part, Pokemon is all fun and games. Yet for many African Americans, especially men, their enjoyment is undercut by fears they may raise suspicion with potentially lethal consequences.” Change the countries and communities involved, and similar concerns may emerge elsewhere. This particular piece ends with an instance of a black youth approaching a policeman who was also playing the game, a positive moment of interaction as they helped each other pursue in-game objectives.

It is said every technology cuts both ways. We can hope that experience and consideration will lead both players and Nintendo to evolve the positive potential of augmented reality, perhaps with a bit more respect for user privacy.

Ethics of Killing with Robots

The recent murder of police officers in Dallas, finally ended by the lethal use of a robot against the shooter, has triggered an awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a drone controlled remotely, simply paralleling the current use of drones (and no doubt other devices) as weapons.

We already have examples of “autonomous” devices as well. Mines, both land and ocean (and eventually space), all reflect devices “programmed to kill” that operate with no human at the trigger. If anything, a lethal frustration with these devices is that they are too dumb, killing long after their intended use.

I saw a comment in one online discussion implying that robots are, or would be, programmed with Asimov’s First Law: “Do not harm humans.” But of course this is neither viable at this time (it takes an AI to evaluate that concept) nor a directive that is likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.
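To see why the First Law is not viable at this time, consider what it would take to code it. A deliberately toy sketch (nobody’s actual control software): the guard clause is one line, while the predicate it depends on is the unsolved part.

```python
# Illustrative only: Asimov's First Law as code. The guard is
# trivial; the predicate it calls is an open AI problem.

def would_harm_a_human(action, world_state) -> bool:
    # Requires perception, prediction of consequences, and a theory
    # of "harm"; none of these reduce to a lookup table today.
    raise NotImplementedError("this is the hard part")

def execute(action, world_state):
    if would_harm_a_human(action, world_state):  # the First Law
        return  # refuse
    print(f"performing {action}")  # stand-in for the actuator

try:
    execute("open door", world_state={})
except NotImplementedError as e:
    print("cannot certify safety:", e)
```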

Projecting lethal force at a distance may be one of the few remaining distinctively human traits (since we have found animals innovating tools, using language, and so forth). Ever since Homo whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a “position of safety” is essentially the basic design criterion for many weapon systems. Homo whomever may have also crossed the autonomous Rubicon with the first snare or pitfall trap.

Our challenge is to make sure our systems designers, and those acquiring the systems, have some serious ethical training with practical application. Building in the safeguards, expiration dates, decision criteria, etc., should be an essential aspect of lethal autonomous systems design. “Should” is, unfortunately, the operative word; in many scenarios it is unlikely.
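As a sketch of what “expiration dates” could mean in practice, recalling the complaint above about mines that kill long after their intended use: a trigger that reports itself inert once its authorized lifetime passes. This is an invented illustration of the design principle, not any fielded system.

```python
# Hypothetical self-expiring trigger, illustrating the "expiration
# dates, decision criteria" point. Not a fielded design.
from datetime import datetime, timedelta, timezone

class SafeguardedTrigger:
    def __init__(self, lifetime_days: int):
        self.expires_at = datetime.now(timezone.utc) + timedelta(days=lifetime_days)

    def may_fire(self) -> bool:
        # Real decision criteria (target discrimination, current
        # authorization) would also go here; expiry is the bare minimum.
        return datetime.now(timezone.utc) < self.expires_at

trigger = SafeguardedTrigger(lifetime_days=30)
print("armed" if trigger.may_fire() else "inert")
```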

Teaching Computers to Lie

A recent article on the limitations of computer “players” in online games observes that they don’t know about lying. No doubt this is true. Both the detection of lies (which means anticipating them, and in some sense understanding the value of misrepresentation to the other party) and the ability to lie oneself are factors in “gaming.” This covers both entertainment games and “gaming the system”: in sales, tax evasion, excusing failures, whatever.

So here is a simple question: Should we teach computers to lie?
(Unfortunately, I don’t expect responses to this question will alter the likely path of game creators, or of others who might see value in computers that can lie.) I will also differentiate this from using computers to lie. I can program a computer so that it overstates sales, understates losses, and commits many other forms of fraud. But in this case it is my ethical/legal lapse, not a “decision” on the part of the computer.
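In the game setting, “teaching a computer to lie” can be as mundane as a mixed strategy that sometimes misrepresents a weak position. A toy sketch, with the game and the numbers invented for illustration:

```python
# Toy bluffing policy: the program's policy, not the programmer's
# one-off fraud, now "decides" when to misrepresent.
import random

def choose_bet(hand_strength: float, bluff_rate: float = 0.2) -> str:
    if hand_strength > 0.7:
        return "raise"  # honest signal of strength
    if random.random() < bluff_rate:
        return "raise"  # the lie: weak hand, strong signal
    return "fold"

print(choose_bet(hand_strength=0.2))
```

Detecting lies is the mirror image: an opponent model that estimates someone else’s bluff_rate from their history.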

It’s 10PM, do you know what your model is doing?

“Customers like you have also …” This concept appears, explicitly or implicitly, at many points in the web of our lives, a.k.a. the Internet. Specific corporations and aggregate operations are building increasingly sophisticated models of individuals. Not just “like you,” but “you”! Prof. Pedro Domingos of the University of Washington, in his book “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World,” suggests this model of you may become a key factor in your ‘public’ interactions.
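The mechanics behind “customers like you” are typically some form of similarity over behavior vectors. A minimal sketch with invented data (real systems are vastly larger and blend many signals): find your nearest neighbor by cosine similarity and recommend what they have that you don’t.

```python
# Minimal "customers like you" sketch: cosine similarity over
# purchase vectors. Data and names are invented for illustration.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Rows: users; columns: items (1 = bought).
history = {
    "you":   [1, 0, 1, 0],
    "alice": [1, 1, 1, 0],
    "bob":   [0, 1, 0, 1],
}

you = history["you"]
best = max(
    (name for name in history if name != "you"),
    key=lambda name: cosine(you, history[name]),
)
# Recommend what your closest neighbor has that you don't.
recs = [i for i, (a, b) in enumerate(zip(you, history[best])) if b and not a]
print(f"customers like you ({best}) also bought items {recs}")
```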

Examples include having LinkedIn add a “find me a job” button that will conduct interviews with relevant open positions and provide you with a list of the best. Or perhaps locating a house, a car, a spouse … well, maybe some things are better done face-to-face.

Apparently an Asian firm, “Deep Knowledge,” has appointed a virtual director to its board. In this case it is a construct designed to detect trends that the human directors might miss. However, one suspects that Apple might want a model of Steve Jobs around for occasional consultation, if not back in control again.