Ethics of Killing with Robots

The recent murder of police officers in Dallas, brought to an end by the lethal use of a robot against the shooter, has triggered awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated "projection of power" with no autonomous capability. The device might as well have been a remotely controlled drone; its use simply parallels the current use of drones (and no doubt other devices) as weapons.

We already have examples of "autonomous" devices as well. Mines, both land and naval (and eventually space-based), are devices "programmed to kill" that operate with no human at the trigger. If anything, the lethal frustration with these devices is that they are too dumb, killing long after their intended use.

I saw a comment in one online discussion implying that robots are, or would be, programmed with Asimov's First Law: "Do not harm humans." But of course this is neither viable at this time (it takes an AI to evaluate that concept) nor a directive likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.

Projecting lethal force at a distance may be one of the few remaining distinctively human traits (since we have found animals innovating tools, using language, and so forth). Ever since Homo Whomever (pre-sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a "position of safety" is essentially the basic design criterion for many weapon systems. Homo Whomever may also have crossed the autonomous Rubicon with the first snare or pitfall trap.

Our challenge is to make sure our systems designers, and those acquiring the systems, have serious ethical training with practical application. Building in safeguards, expiration dates, decision criteria, etc. should be an essential aspect of lethal autonomous systems design. "Should" is unfortunately the operative word; in many scenarios it is unlikely.
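To make "safeguards, expiration dates, decision criteria" concrete, here is a minimal sketch assuming a hypothetical device interface; the class name, fields, and policy values are invented for illustration and drawn from no real system:

```python
from datetime import datetime, timezone

# Hypothetical safeguard gate for an autonomous device: every engagement
# decision must pass explicit, auditable checks before arming is allowed.
class SafeguardGate:
    def __init__(self, expires_at, min_confidence=0.99):
        self.expires_at = expires_at          # device self-disables after this date
        self.min_confidence = min_confidence  # required target-identification confidence

    def may_engage(self, target_confidence, human_authorized):
        """Permit engagement only if every safeguard check passes."""
        if datetime.now(timezone.utc) >= self.expires_at:
            return False  # expiration date: no killing long after intended use
        if not human_authorized:
            return False  # keep a human at the trigger
        return target_confidence >= self.min_confidence

# Example: a gate bounded to a mission window.
gate = SafeguardGate(expires_at=datetime(2016, 8, 1, tzinfo=timezone.utc))
print(gate.may_engage(target_confidence=0.995, human_authorized=True))
```

The point is not these particular checks; it is that each refusal path is explicit and auditable, so a mine built this way would stop killing when its intended use ended.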

Teaching Computers to Lie

A recent article on the limitations of computer "players" in online games observes that they don't know about lying. No doubt this is true. Both the detection of lies (which means anticipating them, and in some sense understanding the value of misrepresentation to the other party) and the ability to lie oneself are factors in "gaming": both entertainment games and "gaming the system" in sales, tax evasion, excusing failures, whatever.

So here is a simple question: Should we teach computers to lie?
(Unfortunately, I don't expect responses to this question will alter the likely path of game creators, or of others who might see value in computers that can lie.) I will also differentiate this from using computers to lie. I can program a computer so that it overstates sales, understates losses, and commits many other forms of fraud. But in that case it is my ethical/legal lapse, not a "decision" on the part of the computer.
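For contrast, here is what "teaching a computer to lie" might look like in an entertainment game: a minimal sketch of a bluffing decision the program makes at run time. The hand strengths, probabilities, and payoffs are invented for illustration:

```python
import random

# A toy betting agent that sometimes bluffs: the misrepresentation is
# chosen by the program during play, not fraud hard-coded by the developer.
def choose_action(hand_strength, pot, bet, fold_prob=0.5, bluff_rate=0.2):
    """Bet strong hands honestly; occasionally bet weak hands (a bluff)."""
    if hand_strength > 0.7:
        return "bet"  # honest value bet
    # Expected value of a bluff: win the pot if the opponent folds,
    # lose our bet if they call.
    bluff_ev = fold_prob * pot - (1 - fold_prob) * bet
    if hand_strength < 0.3 and bluff_ev > 0 and random.random() < bluff_rate:
        return "bet"  # the lie: signaling strength we do not have
    return "check"

print(choose_action(hand_strength=0.2, pot=100, bet=20))  # sometimes "bet"
```

Detecting lies is the mirror image: the opponent model must assign some probability that a "bet" does not mean a strong hand.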

Ethics and Entrepreneurs

The Wall St. Journal outlined a series of the ethical issues facing start-ups, and even larger tech companies: "The Ethical Challenges Facing Entrepreneurs". Having done time in a few similar situations, I can attest to the temptations that exist. Here are a few of the key issues:

  • The time implications of a startup – many high-tech firms expect employees to be "there" far more than 40 hours per week. Start-ups are even more demanding, with the founders likely to have a period of their lives dominated by these demands; families, relationships, and even individual health can suffer. What do you owe your relationships, or even yourself?
  • Not in the article, but in the news: in the U.S. many professional employees are "exempt" from overtime pay. This means they can be expected to work "when needed", but often it seems to be needed every day and every week, yielding 60-hour work weeks (and 50% fewer employees needed to accomplish the work). I did this for most of my life, but also got stock options and bonus pay that allowed me to retire early. I see others in low-paying jobs, penalized for not being "part of the team" as exempt employees even when they have no work to actually perform. Start-ups can project the "founder's passion" onto others who may not have anywhere near the same share of potential benefit from the outcome. This parallels a point in the article on "Who is really on the team?": how do you share the pie when things take off? Do you 'stiff' the bulk of the early employees and keep it to yourself, or do you have some millionaire administrative assistants? It sets the personality of your company; trust me, I've seen it both ways.
  • Who owns the "IP"? It would be easy if we were talking patents and copyrights (ok, maybe not easy; technologists often get short-changed when their inventions are the foundation of corporate growth and find themselves looking for a new job). But there are lots of grey areas: was a spin-out idea all yours, or did it arise from the lunch-table discussion? And what do you do when the company rejects your ideas (often to maintain their own focus, which is laudable)? So is your new start-up operation really free and clear of legacy IP?
  • Misrepresentation is a non-trivial temptation. Entrepreneurs are looking to venture capital, to customers, to ongoing investors, and eventually to the business press ("xyz corporation fell short of expectations by 13% this quarter"). On one hand, if you are not optimistic and filled with hopeful expectations, you can't get off the ground. But ultimately a good story will meet the test of real data, and along with it your reputation with investors, suppliers, customers, and in the worst case, the courts. There is a difference between "of course our product has 'abc'" (when you know it doesn't) and "if that's what it takes, we will make it with 'abc'". I've seen both. It's a pain to work those overtime hours to make the product do 'abc' because the salesperson promised it; it is more of a pain to deal with the lawyers when it was never going to be there. Been there, done that, got the t-shirt (but not the book, I'm glad to say).
  • What do you do with the data? A simple example: I worked for a company developing semiconductor design equipment, and we often had customers' most secret designs in hand to work out some bug they had discovered. While one aspect of this is clear (it's theirs), there are more subtle factors, like some innovative component, implicit production methods, or other pieces that a competitor, or even your own operation, may find of value.
  • What is the company's role in the community? Some startups are 24/7 focused on their own operation. Some assume employees, and even the corporation, should engage beyond the workplace. Again, early action in this area sets the personality of an organization. Be aware that technologists are often motivated by purpose as much as money, so being socially conscious may be a winning investment.
  • What is the end game? — Now that you have yours, what do you do with it? — Here I will quote one of the persons mentioned in the article: “The same drive that made me an entrepreneur now drives me to try to save the world.”

I will suggest that such an entrepreneur applies the same ethical outlook at the start of the game as at the end of the game.


It's 10PM, do you know what your model is doing?

"Customers like you have also…" This concept appears, explicitly or implicitly, at many points in the web-of-our-lives, a.k.a. the Internet. Specific corporations and aggregate operations are building increasingly sophisticated models of individuals. Not just "like you", but "you"! Prof. Pedro Domingos at UW, in his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World", suggests this model of you may become a key factor in your "public" interactions.

Examples include having LinkedIn add a "find me a job" button that would conduct interviews with relevant open positions and provide you a list of the best. Or perhaps locating a house, a car, a spouse… well, maybe some things are better done face-to-face.

Apparently an Asian firm, "Deep Knowledge", has appointed a virtual director to its Board. In this case it is a construct designed to detect trends that the human directors might miss. However, one suspects that Apple might want a model of Steve Jobs around for occasional consultation, if not back in control again.

If the Computer Said it, it must be True!

Well, maybe not. "What Happens When GPS Can't Find You?" is a commercial concern raised by a Wall St. Journal article. Needless to say, a business in today's world is at risk if the GPS location associated with it is wrong, or if the path required to get there is incorrect. Consumers at best are frustrated, and may simply write off that operation. It is often not the business's fault, but an error in the GPS location service or in the route mapping.

Behind this is a more pervasive and serious problem. Often there is no way to "fix" these problems from the perspective of the consumer or an affected business. You may know the data is wrong and the route doesn't work, but correcting the error(s) is not straightforward, and certainly not easy enough for a "crowd-sourced" solution to work. That is, many people might find the error, and if there were a simple way to "report" the problem, then after the "nth" report an automated fix (or review) could be triggered.
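A minimal sketch of that report-threshold loop (the threshold, names, and escalation step are invented; no mapping service exposes exactly this):

```python
from collections import Counter

REVIEW_THRESHOLD = 5          # invented: the "nth" report that triggers action
reports = Counter()

def report_error(record_id):
    """Register one user report; escalate when the threshold is reached."""
    reports[record_id] += 1
    if reports[record_id] == REVIEW_THRESHOLD:
        queue_for_review(record_id)

def queue_for_review(record_id):
    # In a real service this might open a review ticket or re-check
    # the record against an authoritative source.
    print(f"Record {record_id} flagged for automated fix or human review")

for _ in range(5):
    report_error("business-42")   # the fifth report triggers the review
```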

This is not just a GPS problem. I've found many web sites validating addresses against equally flawed sources (perhaps even the USPS). I can send mail to my daughter (and she gets it); I've even seen the mailbox on the side of her street. But one of the web sites I used to deliver items to her location rejects the address as "not known", and of course there is no way to report the error. A related problem is entering an address in "just the right way": am I in "Unit A101", "Apt. A 101", or maybe "Apt A101"? Note that the delivery folks can handle all of these, but the online ordering system can't. Technology design consideration: track such 'failures', and after some number, check the validation process; or better, have a button such as "I know this is right, so please update the database".
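As a sketch of what a more tolerant validator could do (an assumption about the approach, not any real ordering system's logic), unit designators can be normalized before comparison so the variants above all match:

```python
import re

def normalize_unit(text):
    """Map "Unit A101", "Apt. A 101", "Apt A101", etc. to one canonical form."""
    text = text.upper().strip()
    text = re.sub(r"\b(UNIT|APT|APARTMENT|SUITE|STE)\b\.?", "", text)
    text = re.sub(r"[\s.#-]+", "", text)   # drop spacing and punctuation
    return text

assert (normalize_unit("Unit A101")
        == normalize_unit("Apt. A 101")
        == normalize_unit("Apt A101")
        == "A101")
```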

Online operations, as well as brick-and-mortar activities, are losing business due to online "presumptions" of correctness with no corrective processes available. It's one thing when the word processor marks your spelling as "wrong" but lets you keep it anyway. It is another when medications or essential services can't reach your location because the GPS or delivery address is not in the database, or is listed incorrectly.

T&S Magazine December 2015


IEEE Technology and Society Magazine

Volume 34, Number 4, December 2015

Departments

President’s Message
3 Improving Our “Engineering-Crazed” Image
Greg Adamson

Book Reviews
4 Hackers, Geniuses, and Geeks
6 Faxed: The Rise and Fall of the Fax Machine

Editorial
9 Reflecting on the Contribution of T&S Magazine to the IEEE
Katina Michael

Open Letter
15 Technology and Change

Interview
16 On the Road with Rick Sare… and Google Glass

Viewpoint
17 Shakespeare, Social Media and Social Networks
Fernando A. Crespo, Sigifredo Laengle, Paula Baldwin Lind and Víctor Hugo Masías

Leading Edge
20 Corporate Individualism – Changing the Face of Capitalism
László G. Lovászy

23 Multimedia and Gaming Technologies for Telerehabilitation of Motor Disabilities
Andrea Proietti, Marco Paoloni, Massimo Panella, Luca Liparulo and Rosa Altilio

31 MoodTrek – A New App to Improve Mental HealthCare
Ganesh Gopalakrishna and Sriram Chellappan

33 Alternative Planning and Land Administration for Future Smart Cities
Soheil Sabri, Abbas Rajabifard, Serene Ho, Mohammad-Reza Namazi-Rad, and Christopher Pettit

Commentary
36 Pharmaco-Electronics Emerge
Joseph R. Carvalko

41 Blockchain Thinking*
Melanie Swan

63 Information Paradox*
Levent V. Orman

Fiction
54 Held Captive in the Cyberworld
Michael Eldred

Last Word
104 Digitus Secundus: The Swipe
Christine Perakslis

Features
74_ The Value of Accountability in the Cloud*
Wouter M.P. Steijn and Maartje G.H. Niezen

83_ Shaping Our Technological Futures*
Reihana Mohideen and Rob Evans

88_ Driver Distraction from Dashboard and Wearable Interfaces*
Robert Rosenberger

100_ Are Technologies Innocent?*
Michael Arnold and Christopher Pearce

*Refereed articles.

On the cover: Blockchain Thinking. English Wikipedia/The Opte Project/Creative Commons Attribution 2.5 Generic license.

Ethics of Virtual Reality

The Jan. 4, 2016 Wall St. Journal has an article, "VR Growth Sparks Questions About Effects on Body, Mind", pointing out, as prior publications have, that 2016 is likely to be the Year of VR. The U.S. Consumer Electronics Show starts this week in Las Vegas, where many neat, new, and re-packaged concepts will be strongly promoted.

The article points to issues of physical health; nausea is one well-documented potential factor. But work has also been taking place on residual effects (how soon should you drive after VR?), how long to remain immersed before you 'surface', etc. Perhaps the key consideration is the degree to which our bodies/brains accept the experiences of VR as real, altering our thinking and behaviour. (Prof. Jeremy Bailenson, director of Stanford's Virtual Human Interaction Lab, confirms this is one impact.)

All of the pundits point out that every new technology has its potential uses/abuses. But that does not excuse us from the specific considerations that might apply to VR. A point raised in the article: "Scares in VR are borderline immoral." There is a line of technology from "watching" to "first person" to "immersion" that should be getting our attention. The dispute over "children impacted by what they watch on TV", moving on to first-person-shooter video games and then to VR, is sure to recur. But in VR, you can be the victim as well. I first encountered consideration of the after-effects of rape in a video game environment at an SSIT conference some years ago. Even with the third-party perspective in that case, the victim was traumatized. No doubt VR will have a higher impact. There are no doubt lesser acts that can be directed at a VR participant that will have greater impact in VR than they might with less immersive technology.

This is the time to start sorting out scenarios and possible considerations for vendors of technology, apps, and content, and also to watch for the quite predictable unexpected effects. Do you have any 'predictions' for 2016 and the Year of VR?


Predictive Analytics – Rhinos, Elephants, Donkeys and Minority Report

The IEEE Computer Society published "Saving Rhinos with Predictive Analytics" in both IEEE Intelligent Systems and the more widely distributed 'Computing Edge' (a compendium of interesting papers taken from 13 of the CS publications and provided to members and technologists at no cost). The article describes how data-based analysis of both rhino and poacher activity, in concert with AI algorithms, can focus enforcement activities in terms of timing and location, and hopefully save rhinos.

For those outside of the U.S.: the largest populations of elephants (Republicans) and donkeys (Democrats) are in the U.S., these animals being symbols of the respective political parties; now, on the brink of the 2016 presidential primaries, these critters are being aggressively hunted (ok, actually sought after for their votes). Not surprisingly, the same tools are used to locate, identify, and predict the behaviour of these persons. When I was young (1964) I read a book called The 480, which described the capabilities of that timeframe for computer-based political analysis and targeting of the "groups" required to win an election. (480 was the number of groupings of the 68 million voters in 1960, used to identify which groups you needed to attract to win.) 21st-century analytics are a bit more sophisticated, with as many as 235 million groups, or one per potential voter (and over 130 million voters likely to vote). A recent kerfuffle between the Sanders and Clinton campaigns over "ownership/access" to voter records stored on a computer system operated by the Democratic National Committee reflects the importance of this data. By cross-connecting (data mining) registered-voter information with external sources such as web searches, credit card purchases, etc., the candidates can mine this data for cash (donations) and later votes. A change of a few percentage points in delivering voters to the polls (both figuratively, and by providing rides where needed) in key states can affect the outcome. So knowing each individual is a significant benefit.
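As a concrete (and entirely hypothetical) illustration of "one group per voter": modern campaigns can score each registered voter individually, for example with a simple logistic model. The features and weights below are invented for the example:

```python
import math

# Hypothetical per-voter turnout model: every voter gets an individual
# probability, instead of membership in one of The 480's groups.
WEIGHTS = {"voted_last_election": 2.1, "donated": 1.4, "age_over_50": 0.6}
BIAS = -1.8

def turnout_probability(voter):
    """Logistic model: p = 1 / (1 + exp(-(bias + sum of weight * feature)))."""
    score = BIAS + sum(WEIGHTS[f] * voter.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-score))

# Campaigns rank voters by such scores and target the persuadable middle.
voter = {"voted_last_election": 1, "donated": 0, "age_over_50": 1}
print(f"P(turnout) = {turnout_probability(voter):.2f}")   # ~0.71
```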

Predictive analytics is saving rhinos and affecting the leadership of superpowers. But wait, there's more. Remember the movie "Minority Report" (2002)? On the surface, this movie presented apparent computer technology able to predict future crimes by specific individuals, who were arrested to prevent the crimes. (Spoiler alert:) the movie actually proposes that a group of psychics is the real source of insight. This is consistent with the original story (Philip K. Dick, 1956), which predates The 480 and the emergence of the computer as a key predictive device. Here's the catch: we don't need the psychics, just the data and the computers. Just as a specific probability can be assigned to a specific individual voting for a specific candidate, or a specific rhino being poached in a specific territory, we are reaching the point where aspects of the "Minority Report" predictions can be realized.

Oddly, in the U.S., governmental collection and use of this level of Big Data is difficult due to privacy illusions, and probably bureaucratic stovepipes and fiefdoms. These problems do not exist in the private sector. Widespread data collection on everybody at every opportunity is the norm, and the only limitation on sharing is determining the price. The result is that your bank or insurance company is more likely than the government to be able to predict your likelihood of being a criminal, a terrorist, or even a victim of a crime. Big Data superpowers like Google, Amazon, Facebook, and Acxiom have even more at their virtual fingertips.

Let's assume that sufficient data can be obtained, and robust AI techniques applied, to identify a specific individual with a high probability of a problematic event: initiating, or being the victim of, a crime in the next week. And this data is implicit, or even explicit, in the hands of some corporate entity. Now what? What actions should said corporation take? What probability is needed to trigger such actions? What liability exists (or should exist) for failure to take such actions?
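One classic way to frame the "what probability triggers action" question is an expected-cost comparison; the rule below is a textbook decision criterion with invented numbers, not a claim about any firm's actual practice:

```python
# Intervene when the expected harm avoided exceeds the cost of intervening:
# p * cost_of_event > cost_of_action, i.e. threshold p* = cost_of_action / cost_of_event.
def should_intervene(p_event, cost_of_event, cost_of_action):
    """Expected-cost decision rule for acting on a prediction."""
    return p_event * cost_of_event > cost_of_action

# Invented example: a $50,000 predicted loss and a $500 review step imply
# intervening at any predicted probability above 1%.
print(should_intervene(p_event=0.02, cost_of_event=50_000, cost_of_action=500))  # True
```

Note how low that threshold can be; the liability question then hinges on whether a 1% "knowledge" obligates action.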

These are issues that the elephants and donkeys will need to consider over the next few years; we can't expect the rhinos to do the work for us. We technologists may also have a significant part to play.

ISTAS ’15 Dublin – Irish President Michael Higgins Delivers Opening Address


SSIT was honored with an opening address from Michael D. Higgins, President of Ireland, at the 2015 IEEE-SSIT International Symposium on Technology and Society (ISTAS '15) on November 11 in Dublin, Ireland. It is the first time a head of state has addressed an ISTAS event. Full coverage of the conference will appear in the January 2016 SSIT e-newsletter and online at ieeessit.org. The President's remarks will be published in the March 2016 issue of T&S Magazine. A Special Issue on ISTAS '15 will appear in the September 2016 issue of T&S, guest edited by ISTAS '15 conference chair Paul Cunningham.


T&S Magazine September 2015 Contents


Volume 34, Number 3, September 2015

4 President’s Message
Coping with Machines
Greg Adamson
Book Reviews
5 Marketing the Moon: The Selling of the Apollo Lunar Mission
7 Alan Turing: The Enigma
10 Editorial
Resistance is Not Futile, nil desperandum
MG Michael and Katina Michael
13 Letter to the Editor
Technology and Change
Kevin Hu
14 Opinion
Privacy Nightmare: When Baby Monitors Go Bad
Katherine Albrecht and Liz McIntyre
15 From the Editor’s Desk
Robots Don’t Pray
Eugenio Guglielmelli
17 Leading Edge
Unmanned Aircraft: The Rising Risk of Hostile Takeover
Donna A. Dulo
20 Opinion
Automatic Tyranny, Re-Theism, and the Rise of the Reals
Sand Sheff
23 Creating “The Norbert Wiener Media Project”
J. Mitchell Johnson
25 Interview
A Conversation with Lazar Puhalo
88 Last Word
Technological Expeditions and Cognitive Indolence
Christine Perakslis

SPECIAL ISSUE: Norbert Wiener in the 21st Century

33_ Guest Editorial
Philip Hall, Heather A. Love and Shiro Uesugi
35_ Norbert Wiener: Odd Man Ahead
Mary Catherine Bateson
37_ The Next Macy Conference: A New Interdisciplinary Synthesis
Andrew Pickering
39_ Ubiquitous Surveillance and Security
Bruce Schneier
41_ Reintroducing Wiener: Channeling Norbert in the 21st Century
Flo Conway and Jim Siegelman
44_ Securing the Exocortex*
Tamara Bonaci, Jeffrey Herron, Charles Matlack, and Howard Jay Chizeck
52_ Wiener’s Prefiguring of a Cybernetic Design Theory*
Thomas Fischer
60_ Norbert Wiener and the Counter-Tradition to the Dream of Mastery
D. Hill
64_ Down the Rabbit Hole*
Laura Moorhead

Features

74_ Opening Pandora’s 3D Printed Box
Phillip Olla
81_ Application Areas of Additive Manufacturing
N.J.R. Venekamp and H.Th. Le Fever

*Refereed article.