Algorithm Problem

United Airlines has been having its problems since recently ejecting a passenger to make room for crew members who needed to get to their next flight.  As the Wall St. Journal article points out, this is a result (in part) of employees following a fairly strict rule book — i.e., an algorithm.  In many areas, from safety to passenger relations, United has rules to follow, and employee (i.e., human) discretion is reduced or eliminated.  It is somewhat ironic that the employees who made the decisions that led up to this debacle could have been fired for not taking this course of action.  But how does this relate to technology and society?

There are two immediate technology considerations that become apparent.  First is the automated reporting systems.  No doubt the disposition of every seat, passenger, and ticket is tracked, along with who made what decisions.  This means that employees not following the algorithm will be recorded, and may be detected and reported.  In the good old days a supervisor could give a wink and a smile to an employee who broke the “rules” but did the right thing.  Nowadays the technology is watching and, increasingly, the technology is comparing the data with history, rule books, and other data.

The second aspect of this is “gate attendant 2.0” — when we automate these humans out of their jobs, or into less responsible “face-keepers” (i.e., persons present only to provide a human face to the customer while all of the actual work and decisions are automated, akin to the term “place-keeper”).  Obviously if there is a “rule book,” it will be asserted in the requirements for the system, and exact execution of the rules can be accomplished. It is possible that passengers will respond differently if a computerized voice/system informs them of their potential removal — realizing there is no “appeal.” However, it is also possible that an AI system spanning all of an airline’s operations, aware of all flight situations and past debacles like this one, may have more informed responses.  The airline might go beyond the simple check-in, frequent-flyer, and TSA passenger profiles to Facebook, credit scores, and other data in deciding whom to “bump.”  One can envision bumping passengers with lower credit ratings, or whose Facebook psychological profiles indicate that they are mild-mannered reporters, or shall we say “meek.”
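To make the concern concrete, a rule-book bumping policy of the kind imagined above could be sketched as a simple scoring function. Everything here is hypothetical — the field names, the weights, and especially the idea of folding a credit score or a “meekness” estimate into the decision are illustrations of the worry, not any airline’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    fare_paid: float               # dollars
    frequent_flyer_tier: int       # 0 = none, 3 = top tier
    checked_in_minutes_early: int
    credit_score: int = 650        # hypothetical external data source
    meekness_estimate: float = 0.5 # hypothetical 0..1 social-media profile score

def bump_score(p: Passenger) -> float:
    """Higher score = more likely to be bumped. Weights invented for illustration."""
    score = 0.0
    score += max(0.0, 1.0 - p.fare_paid / 1000.0)                # cheap tickets bump first
    score += (3 - p.frequent_flyer_tier) * 0.5                   # protect loyal customers
    score += max(0.0, 1.0 - p.checked_in_minutes_early / 120.0) * 0.5
    # The dystopian extras the post warns about:
    score += max(0.0, 1.0 - p.credit_score / 850.0)              # lower credit, higher score
    score += p.meekness_estimate                                 # target the "meek"
    return score

def choose_bumps(passengers, seats_needed):
    """Return the seats_needed passengers the rule book would bump first."""
    return sorted(passengers, key=bump_score, reverse=True)[:seats_needed]
```

The point of the sketch is how unremarkable the code is: once such criteria are written into the requirements, “exact execution of the rules” is trivial, and the ethics live entirely in the choice of inputs and weights.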

The ethics programmed into gate-attendant 2.0 are fairly important.  They will reflect the personality of the company, the prejudices of the developers, the wisdom of the deep-learning processes, and the cultural narratives of all of the above.

AI Apocalypse (not)

Presumably we will reach a tipping point when intelligent devices surpass humans in many key areas, quite possibly without our ability to understand what has just happened. A variation of this is called “the singularity” (coined by Vernor Vinge, and heralded by Ray Kurzweil). How would we know we have reached such a point?  One indicator might be increased awareness, concern, and discussion about the social impact of AIs.  There has been a significant increase in this activity in the last year, and even in the last few months.  Here are some examples for those trying to track the trend (of course Watson, Siri, Google Home, Alexa, Cortana, and their colleagues already know this).

A significant point made by Harari is that Artificial Intelligence does not require Artificial Consciousness. A range of purpose-built AI systems can individually have significant impact on society without reflecting what the IEEE Ethics project refers to as “Artificial General Intelligence.”  This means that jobs, elections, advertising, online/phone service centers, weapons systems, vehicles, book/movie recommendations, news feeds, search results, online dating connections, and so much more will be (or are being) influenced or directed by combinations of big data, personalization, and AI.

What concerns and opportunities do you see in this “brave new world”?

 

Predictive Fiction

A recent anthology of “climate fiction,” Loosed Upon the World, projects climate change forward some years into dystopian scenarios.  The editor, John Joseph Adams, asserts: “Fiction is a powerful tool … perhaps [we can] humanize and illuminate the issue in ways that aren’t as easy to do with only science and cold equations.”

I have been an advocate of near-term science fiction, which I refer to as predictive fiction, as a tool to explore the “what if” scenarios that may result from technology, hopefully allowing us to avoid the negative impacts. Unfortunately, this particular anthology deals with a current trajectory that is more an exploration of “when, what then?”

But some of the basic issues that we technologists face enter the spotlight, albeit one we may not like.  In the foreword, Paolo Bacigalupi has a painful message for us techies (many of whom fall into his category of “techno-optimists”): “Engineers don’t grow up thinking about building a healthy soil eco-system, or trying to restore some estuary, … to turn people into better long-term planners, or better educated and informed citizens, or creating better civic societies.”  I don’t fully agree with Paolo — it is more accurate to say that “engineers don’t get paid to …” and perhaps “the project requirements do not address …”  And occasionally we have technologists who resist the corporate momentum and try to get their employer to “do the right thing.”  SSIT seeks to honor such courage with the Carl Barus Award for Outstanding Service in the Public Interest (nominations always welcomed).

But back to the future — I mean the fiction. Paolo also observes: “… imaginative literature is mythic. The kinds of stories we build, the way we encourage people to live into those myths and dream the future — those stories have power. Once we build this myth that the rocket-ship and the techno-fix is the solve for all our plights and problems, that’s when we get ourselves in danger. It’s the one fantasy that almost certainly guarantees our eventual self-destruction.”

I suspect we need a good dose of reality, perhaps in the guise of predictive fiction.

Tele-Kiss … hmmm

Haptic researchers in London have developed a device that attaches to a cell phone and allows remote persons to kiss, as described in an IEEE Spectrum article. And since “a picture is worth a thousand words”:

A woman kisses a plastic pad attached to her smartphone to send a virtual kiss to the person she's video chatting with.

No doubt a wider range of haptic appliances will follow. A major U.S. phone company used to have the slogan “reach out and touch someone”; perhaps our mobile devices are headed that way.

Who do you want listening in at your home?

The Wall St. Journal has a note today comparing Amazon’s Echo and Google Home as voice-activated, in-home assistants.  This space is fraught with impacts on technology and society — from services that can benefit house-bound individuals, to serious opportunities for abuse by hacking, for commercial purposes, or governmental ones. To put it in a simple form: you are being asked to “bug your house” with a device that listens to every noise in the house.  Of course you may have already bugged your pocket with a device that is listening for the magic words “hey, Siri” (or the person next to you in the office, train, or restaurant may be carrying that “wire”).  Robots that respond to “OK Google” or “Alexa” are expanding into our monitored domains. (What do folks named Alexa or Siri have to look forward to in this world? Would you name your child “OK Google”?)
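The “bug your house” point can be illustrated with a toy wake-word loop. This is a hypothetical sketch, not Amazon’s or Google’s actual pipeline (real devices do acoustic keyword spotting on-device, not text matching): what it shows is that every utterance must be examined for the wake word, whether or not it is ever acted upon.

```python
WAKE_WORDS = ("alexa", "ok google", "hey siri")  # illustrative wake phrases

def listen(frames):
    """Process a stream of (already transcribed) utterances.

    Returns the commands the device would act on. Note that every
    frame is inspected, wake word or not -- that is the 'bug'."""
    awake = False
    commands = []
    for text in frames:
        lowered = text.lower()
        if awake:
            commands.append(text)  # in a real device, forwarded to the cloud
            awake = False
        elif any(word in lowered for word in WAKE_WORDS):
            awake = True           # the next utterance is treated as a command
    return commands
```

For example, `listen(["private chat", "Alexa", "turn on the lights"])` returns only `["turn on the lights"]` — yet the private chat still passed through the device’s hands on the way to being discarded.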

The immediate use cases seem to be a cross between control of the “Internet of Things” and the specific business models of the suppliers: online sales for Amazon’s Alexa, and more invasive advertising for Google. Not only can these devices turn your lights on and off, they can order new bulbs … ones that blink subliminal advertising messages (uh oh, now I’ve given someone a bad idea).

From our technology and society perspective we need to look ahead to the pros and cons of these devices. What high-benefit services might be offered?  What risks do we run?  Are there policy or other guidelines that should be established? … Please add your thoughts to the list …

Meanwhile I’m trying to find out why my new car’s navigation system keeps trying to take me to Scotland when I ask “Find McDonald’s”.

 

Humans, Machines, and the Future of Work

De Lange Conference X on Humans, Machines, and the Future of Work
December 5-6, 2016 at Rice University, Houston, TX
For details, registration, etc., see http://delange.rice.edu/

 

  • What advances in artificial intelligence, robotics, and automation are expected over the next 25 years?
  • What will be the impact of these advances on job creation, job destruction and wages in the labor market?
  • What skills are required for the job market of the future?
  • Can education prepare workers for that job market?
  • What educational changes are needed?
  • What economic and social policies are required to integrate people who are left out of future labor markets?
  • How can we preserve and increase social mobility in such an environment?

 

“Remaining Human”

CLICK HERE for the must-watch short film by J. Mitchell Johnson (on Vimeo), produced with a small IEEE grant on the work of Norbert Wiener.

Launched October 21, 2016, at the IEEE ISTAS 2016 conference in Kerala, India.

For more see www.norbertwiener.org and www.norbertwiener.com

 

IEEE T&S Magazine, September 2016

ON THE COVER:

Don Adams as Maxwell Smart, with the infamous “shoe phone.” General Artists Corporation (GAC-Management)/Public Domain/Wikimedia.

Moving ICTD Research Beyond Bungee Jumping • Lost in Translation • Smartphones, Biometrics, and a Brave New World

Conference Announcement
ISTAS 2016, Kerala, India


President’s Message
Is Ethics an Emerging Property?
Greg Adamson


Editorial
Can Good Standards Propel Unethical Technologies?
Katina Michael


News and Notes
IEEE Technology and Society Magazine Seeks Editor-in-Chief


Book Reviews
How Not to Network a Nation
Loren Graham


Leading Edge
Internet Governance, Security, Privacy and the Ethical Dimension of ICTs in 2030
Vladimir Radunovic


Commentary
The Paradox of the Uberveillance Equation
MG Michael

Can We Trust For-Profit Corporations to Protect Our Privacy?*
Wilhelm E.J. Klein

Are Technologies Innocent?*
Michael Arnold and Christopher Pearce


Opinion
ICTs and Small Holder Farming
Janet Achora


Industry Perspective
Smart Cities: A Golden Age for Control Theory?
Emanuele Crisostomi, Robert Shorten, and Fabian Wirth


Last Word
An Ounce of Steel: Crucial Alignments
Christine Perakslis

*Refereed articles


SPECIAL ISSUE ON ISTAS 2015 – TECHNOLOGY, CULTURE, AND ETHICS

Guest Editorial – Future of Sustainable Development
Paul M. Cunningham

Technology-Enhanced Learning in Kenyan Universities*
Miriam Cunningham

Moving ICTD Research Beyond Bungee Jumping*
Andy Dearden and William D. Tucker

Expanding the Design Horizon for Self-Driving Vehicles*
Pascale-L. Blyth, Miloš N. Mladenović, Bonnie A. Nardi, Hamid R. Ekbia, and Norman Makoto Su

Lost in Translation – Building a Common Language for Regulating Autonomous Weapons*
Marc Canellas and Rachel Haga

Smartphones, Biometrics, and a Brave New World*
Peter Corcoran and Claudia Costache

Ethics, Children, and Biometric Technology*
Darelle van Greunen

Intelligent Subcutaneous Body Area Networks*
P.A. Catherwood, D.D. Finlay, and J.A.D. McLaughlin

Humanitarian Cyber Operations*
Jan Kallberg

*Refereed articles.

To GO or Not to GO?

Pokémon Go has become a delightful and disturbing experiment in the social impact of technology. This new “free” software for smartphones implements an augmented reality, overlaying the popular game on the real world. Fans wander the streets, byways, and public (and in some cases private) spaces, following the elusive characters on their smartphones to capture them “in world” or to collect virtual items.  The uptake has been amazing, approaching Twitter in terms of user-hours within days of introduction. It has also added $12 billion to Nintendo’s stock value (almost doubling it).

Let’s start with “free,” and $12 billion. The trick is a no-holds-barred privacy policy. Not surprisingly, the game knows who you are and where you are. It can also access and use your camera, storage, email/phone contacts, and potentially your full Google account (email contents, Drive contents, etc.).  The money comes because all of this is for sale, in real time. (“While you track Pokemon, Pokemon Go tracks you,” USA Today, 12 July 16.) Minimally you can expect to see “Lure Modules” (a game component) used to bring well-vetted (via browser history, email, call history, disk content, etc.) customers into stores that then combine ad promotions with in-store characters, perhaps offering your favorite flavor of ice cream, or drawing you into a lawyer’s office that specializes in the issues you have been discussing on email, or a medical office that … well, you get the picture, and those are just the legitimate businesses.  Your emails from your bank may encourage less honest folks to lure you into a back alley near an ATM … a genre of crime that has only been rumored so far.

The July 13th issue of USA Today outlines an additional set of considerations. Users are being warned by police, property owners, and various web sites for various reasons. The potential for wandering into traffic, into a sidewalk obstruction, or over the edge of a cliff while pursuing an elusive virtual target is non-trivial (is there a murder plot hiding in here?). Needless to say, playing while driving creates a desperate need for self-driving cars. Since the targets change with time of day, folks are out at all hours, in all places, doing suspicious things. This triggers calls to police. Some memorial sites, such as Auschwitz and the Washington, DC Holocaust Memorial Museum, have asked to be excluded from the play-map. There are clearly educational opportunities that could be built into the game — tracing Boston’s Freedom Trail and requiring player engagement with related topics is one possible example. However, lacking explicit consideration of the educational context, there are areas where gaming is inappropriate. Also, some public areas are closed after dark, and the game may result in players trespassing in ways not envisioned by the creators, which may create unhealthy interactions with the owners, residents, etc. of the area.

One USA Today article surfaces a concern that very likely was missed by Nintendo, and that is exacerbated by the recent deaths of black men in US cities and the shooting of police in Dallas. “For the most part, Pokemon is all fun and games. Yet for many African Americans, especially men, their enjoyment is undercut by fears they may raise suspicion with potentially lethal consequences.”  Change the countries and communities involved, and similar concerns may emerge elsewhere as well. This particular piece ends with an instance of a black youth approaching a policeman who was also playing the game, with a positive moment of interaction as they helped each other pursue in-game objectives.

It is said that every technology cuts both ways.  We can hope that experience and consideration will lead both players and Nintendo to evolve the positive potential of augmented reality, perhaps with a bit more respect for user privacy.

Ethics of Killing with Robots

The recent murder of police officers in Dallas, finally ended by the lethal use of a robot against the shooter, has triggered an awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a remotely controlled drone, simply paralleling the current use of drones (and no doubt other devices) as weapons.

We already have examples of “autonomous” devices as well. Mines, both land and ocean (and eventually space), all reflect devices “programmed to kill” that operate with no human at the trigger. If anything, a lethal frustration with these devices is that they are too dumb, killing long after their intended use has ended.

I saw a comment in one online discussion implying that robots are, or would be, programmed with Asimov’s first law: “Do not harm humans.” But of course this is neither viable at this time (it takes an AI to evaluate that concept) nor a directive likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.
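Why the “first law” is not implementable today can be shown in a few lines. The guard clause itself is trivial to write; the predicate it depends on is the entire unsolved problem. This hypothetical sketch makes that gap explicit:

```python
def harms_human(action) -> bool:
    """Evaluating this honestly requires understanding physics, intent,
    context, and downstream consequences -- i.e., an AI at least as capable
    as the system it is supposed to constrain. No such predicate exists."""
    raise NotImplementedError("this is the hard part")

def execute(action):
    # Asimov's first law as a guard clause: easy to write, impossible to honor.
    if harms_human(action):
        raise PermissionError("refused: would harm a human")
    action()
```

Any call to `execute` fails at the `harms_human` check, which is precisely the point: the ethics cannot simply be “programmed in” as a filter around otherwise unchanged behavior.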

Projecting lethal force at a distance may be one of the few remaining distinctive characteristics of humans (since we have found animals innovating tools, using language, and so forth). Ever since Homo whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a “position of safety” is essentially the basic design criterion for many weapon systems.  Homo whomever may have also crossed the autonomous Rubicon with the first snare or pit-fall trap.

Our challenge is to make sure our systems designers, and those acquiring the systems, have serious ethical training with practical application.  Building in safeguards, expiration dates, decision criteria, etc. should be an essential aspect of lethal autonomous systems design. “Should” is unfortunately the operative word; in many scenarios it is unlikely.