Predictive Fiction

A recent anthology of “climate fiction,” Loosed Upon the World, projects climate change forward some years into dystopian scenarios.  The editor, John Joseph Adams, asserts: “Fiction is a powerful tool … perhaps [we can] humanize and illuminate the issue in ways that aren’t as easy to do with only science and cold equations.”

I have been an advocate of near-term science fiction, which I refer to as predictive fiction, as a tool to explore the “what if” scenarios that may result from technology, hopefully allowing us to avoid the negative impacts. Unfortunately, this particular anthology deals with a current trajectory, making it more an exploration of “when, what then?”

But some of the basic issues that we technologists face enter the spotlight, albeit one we may not like.  In the foreword, Paolo Bacigalupi has a painful message for us techies (many of whom fall into his category of “techno-optimists”): “Engineers don’t grow up thinking about building a healthy soil eco-system, or trying to restore some estuary, … to turn people into better long-term planners, or better educated and informed citizens, or creating better civic societies.”   I don’t fully agree with Paolo: it is more accurate to say that “engineers don’t get paid to …” and perhaps “the project requirements do not address …” And occasionally we have technologists who resist the corporate momentum and try to get their employer to “do the right thing.”  SSIT seeks to honor such courage with the “Carl Barus Award for Outstanding Service in the Public Interest” (nominations are always welcome).

But back to the future, I mean the fiction. Paolo also observes: “… imaginative literature is mythic. The kinds of stories we build, the way we encourage people to live into those myths and dream the future — those stories have power. Once we build this myth that the rocket-ship and the techno-fix is the solve for all our plights and problems, that’s when we get ourselves in danger. It’s the one fantasy that almost certainly guarantees our eventual self-destruction.”

I suspect we need a good dose of reality, perhaps in the guise of predictive fiction.

Your TV Might Be Binge-Watching You!

VIZIO is reportedly paying fines for using users’ TVs to track their viewing patterns in significant detail, and for associating this viewing data with IP-address-linked demographics including age, sex, income, marital status, household size, education level, home ownership, and home value.

Presumably this might have been avoided if VIZIO had presented users with a “privacy statement” or “terms of use” when they installed their TVs.  But the failure to obtain even the appearance of consent put the company in this situation.

It has been clear for some time that “free” media of all kinds (and many paid channels), whether TV, cable, radio, or Internet streaming, want to track this information.  On one hand, they can use it to provide “a better user experience” (showing you the ads and suggested programs that match your demographics). The flip side is also true: selling your data to third parties (a.k.a. “trusted business partners”) so they can be more effective at interacting with you is part of the game.

Now let’s step it up a notch.  Your TV (or remote control) may use voice recognition, often relying on the “mother ship’s” resources for the AI analysis of what you have requested. That is, your voice is sent back to servers that interpret and respond.  This leads to another level of monitoring: some of your characteristics might be inferred from your voice, others from background sounds or voices, and even more if the recording device just happens to listen all the time.  “Siri, are you listening in again?” And then add a camera: now the fun can really start.

Bond Doesn’t Make the Ethics Cut

For those of us who have been enjoying the antics of 007, a.k.a. James Bond, and those of us in the real world who have been providing technology that helps our covert entities accomplish their missions, it is worth noting that Alex Younger, head of the UK’s MI6 agency (which of course does not exist), indicates that Bond’s personality and activities do not meet the agency’s ethical standards:

It’s safe to say that James Bond wouldn’t get through our recruitment process and, whilst we share his qualities of patriotism, energy and tenacity, an intelligence officer in the real MI6 has a high degree of emotional intelligence, values teamwork and always has respect for the law… unlike Mr Bond.

27 Oct 2016 UK Telegraph article

A number of technologists are called upon to support covert, military, or police organizations in their countries.  There is some comfort in thinking that such entities, including MI6 (yes, it is real), have some level of ethical standards they apply.  This does not exempt individuals from applying their own professional and other standards in their work as well.

“Remaining Human”

CLICK HERE for the must-watch short film:

Vimeo.com | by J. Mitchell Johnson

Produced with a small IEEE grant on the work of Norbert Wiener. Launched October 21, 2016, at the IEEE ISTAS 2016 conference in Kerala, India. EXCLUSIVE. #norbert #wiener #cybernetics #communications #ethics #feedback #brain #machines #automation

For more see www.norbertwiener.org and www.norbertwiener.com

 

AI Ethics

A growing area reflecting the impact of technology on society is ethics and AI.  This has a few variations: one is what is ethical in terms of developing or applying AI; the second is what is ethical for AIs.  (Presumably, for an AI to select an ethical vs. unethical course of action, either it must be programmed that way, or it must learn what is ethical as part of its education/awareness.)

Folks playing in the AI ethics domain include a recent consortium of industry players (IBM, Google, Facebook, Amazon, and Microsoft), the IEEE Standards folks, and the White House (with a recent white paper).

This is a great opportunity for learning about the issues in the classroom and for developing deep background for policy and press folks; concerns will emerge here (consider self-driving cars, robots in warfare or police work, etc.) and among the general public, where misconceptions and misinformation are likely.  We see many movies where evil technology is a key plot device, and we get many marketing messages on the advantages of progress.  Long-term, informed evolution in this area will require less simplistic perspectives on the opportunities and risks.

There is a one-day event in Brussels on Nov. 15, 2016, that will provide a current view of some of the issues and discussions.

 

IEEE T&S Magazine, September 2016

 


ON THE COVER:

DON ADAMS AS MAXWELL SMART, WITH INFAMOUS “SHOE PHONE.” GENERAL ARTISTS CORPORATION-GAC-MANAGEMENT/PUBLIC DOMAIN/WIKIMEDIA.

Moving ICTD Research Beyond Bungee Jumping
Lost in Translation
Smartphones, Biometrics, and a Brave New World

 


Conference Announcement
ISTAS 2016, Kerala, India

President’s Message
Is Ethics an Emerging Property?
Greg Adamson

Editorial
Can Good Standards Propel Unethical Technologies?
Katina Michael

News and Notes
IEEE Technology and Society Magazine Seeks Editor-in-Chief

Book Reviews
How Not to Network a Nation
Loren Graham

Leading Edge
Internet Governance, Security, Privacy, and the Ethical Dimension of ICTs in 2030
Vladimir Radunovic

Commentary
The Paradox of the Uberveillance Equation
MG Michael

Can We Trust For-Profit Corporations to Protect Our Privacy?*
Wilhelm E.J. Klein

Are Technologies Innocent?*
Michael Arnold and Christopher Pearce

Opinion
ICTs and Small Holder Farming
Janet Achora

Industry Perspective
Smart Cities: A Golden Age for Control Theory?
Emanuele Crisostomi, Robert Shorten, and Fabian Wirth

Last Word
An Ounce of Steel: Crucial Alignments
Christine Perakslis

*Refereed articles

 

 

 

SPECIAL ISSUE ON ISTAS 2015 – TECHNOLOGY, CULTURE, AND ETHICS

Guest Editorial – Future of Sustainable Development
Paul M. Cunningham

Technology-Enhanced Learning in Kenyan Universities*
Miriam Cunningham

Moving ICTD Research Beyond Bungee Jumping*
Andy Dearden and William D. Tucker

Expanding the Design Horizon for Self-Driving Vehicles*
Pascale-L. Blyth, Miloš N. Mladenović, Bonnie A. Nardi, Hamid R. Ekbia, and Norman Makoto Su

Lost in Translation – Building a Common Language for Regulating Autonomous Weapons*
(SPECIAL READER ACCESS FOR THIS ARTICLE – CLICK HERE)
Marc Canellas and Rachel Haga

Smartphones, Biometrics, and a Brave New World*
Peter Corcoran and Claudia Costache

Ethics, Children, and Biometric Technology*
Darelle van Greunen

Intelligent Subcutaneous Body Area Networks*
P.A. Catherwood, D.D. Finlay, and J.A.D. McLaughlin

Humanitarian Cyber Operations*
Jan Kallberg

 

 

 

*Refereed articles.

Is RFID Getting Under Your Skin?

Technology & Society has touched on this a few times: RFID implants in people.  The WSJ has an update worth noting. My new car uses RFID chips to open the doors and start the ignition.  Having these “embedded” could be of value… but what if I buy a different car?   The article lists electronic locks as one application, along with embedded medical history, contact information, etc.   Your “RFID constellation” (credit cards, ID cards, keys, etc.) can identify you uniquely, for example as you enter a store.  So the “relationship” between your RFID and the intended devices goes beyond that one-to-one application.
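The identifying power of such a constellation is easy to illustrate. Here is a minimal sketch (all names and tag IDs are invented for illustration), assuming a doorway reader that logs whatever tags pass by at once:

```python
# Toy illustration: a handful of RFID tags, each individually common,
# can jointly identify one person. All IDs and names are made up.

# Hypothetical enrollment data: tag IDs previously observed per person.
known_constellations = {
    "alice": frozenset({"card-1001", "key-2002", "badge-3003"}),
    "bob":   frozenset({"card-1001", "key-2177", "badge-3003"}),
}

def identify(observed_tags):
    """Return the names whose known tag set appears in full among
    the tags a doorway reader just observed."""
    observed = set(observed_tags)
    return [name for name, tags in known_constellations.items()
            if tags <= observed]

# A reader at the store entrance picks up several tags at once:
print(identify({"card-1001", "key-2002", "badge-3003", "stray-9999"}))
```

Any single tag here is ambiguous (Alice and Bob share two of them), but the full set matches only one person, which is exactly why the constellation matters more than any one card or key.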

An ethical issue raised was that of consent associated with embedding RFID in a person who may not be able to provide consent but would benefit from the ID potential, lock access (or denial), etc.  An obvious example is tracking a dementia patient who leaves the facility.  Of course, we already put on wristbands that are difficult to remove, and these might contain RFID or other locating devices.

What applications might cause you to embed a device under your skin? What concerns do you have about possible problems/issues?

Killing Mosquitoes

The elimination of malaria (438,000 deaths per year) and a number of other deadly or debilitating diseases (Zika, dengue fever, yellow fever, etc.) is often a war against the mosquitoes that carry them.   Bill Gates has designated the mosquito “the deadliest animal in the world,” and fighting these diseases is a top priority for the Gates Foundation.  Another wealthy ex-Microsoft wizard, Nathan Myhrvold, has developed a prototype laser to zap the bugs selectively. And a recent Wall Street Journal article describes a variety of genetic-engineering attacks in development. As these diseases spread beyond their traditional range, their impact will grow, as will the needs of a broader range of countries.

There are a number of technology/society impacts of interest here.  First, any objective for which there are multiple, diverse approaches likely to succeed is likely to be accomplished; don’t bet on the bugs here. (I know, Jurassic Park seeks to make the point that “nature will find a way,” and that is often true, but humans have been so effective at driving species to extinction that geologists have proposed declaring a new age, the Anthropocene.)

Second, if anyone wonders how to change the world, the answer clearly is technology: from DDT-impregnated sleeping nets to lasers and genetic engineering, we are talking tech and “engineering thinking.” (My granddaughter has a T-shirt: front side “A-Stounding,” back side “Stoundings: persons who like to solve problems rather than cause them.”)  I call those folks technologists.  Bugs beware: you have a whole generation of robotics-competition veterans and Minecraft modders headed your way.

Third: is this a good idea?  Note that there are significant variations.  Some approaches target just one species (Aedes aegypti, at least outside its forest habitat of origin), others target a wider range of species, and others focus on limited areas.  One recurrent human failure is anticipating the consequences of our actions.  What animals depend on these critters for dinner, and so on up the food chain? What plants depend on them for pollination?  We abound in ignorance on such matters, and we find it easier to fund research for eradication than for understanding.

So… should we eliminate the deadliest animal on Earth?  (Let me qualify: other than Homo sapiens.)

 

Ethics of Killing with Robots

The recent murder of police officers in Dallas, finally ended by the lethal use of a robot to kill the shooter, has triggered an awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a remotely controlled drone, simply paralleling the current use of drones (and no doubt other devices) as weapons.

We already have examples of “autonomous” devices as well. Mines, on land and in the ocean (and eventually in space), are devices “programmed to kill” that operate with no human at the trigger. If anything, a lethal frustration with these devices is that they are too dumb, killing long after their intended use.

I saw a comment in one online discussion implying that robots are or would be programmed with Asimov’s first law: do not harm humans. But of course this is neither viable at this time (it takes an AI to evaluate that concept) nor a directive that is likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.

Projecting lethal force at a distance may be one of the few remaining distinguishing characteristics of humans (since we have found animals innovating tools, using language, and so forth). Ever since Homo whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a “position of safety” is essentially the basic design criterion for many weapon systems.  Homo whomever may have also crossed the autonomous Rubicon with the first snare or pit-fall trap.

Our challenge is to make sure our systems designers, and those acquiring the systems, have some serious ethical training with practical application.  Building in safeguards, expiration dates, decision criteria, etc. should be an essential aspect of lethal autonomous systems design. “Should” is unfortunately the operative word; in many scenarios it is unlikely.
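One such safeguard, the expiration date, is simple enough to sketch. The following toy illustration (the class, the 90-day service life, and the dates are all invented for illustration, not taken from any real system) shows a device that refuses to arm once its service life has elapsed, in contrast to the mines above that keep killing long after their intended use:

```python
# Toy sketch of an expiration-date safeguard for an autonomous device:
# it will not arm once its service life has elapsed. The 90-day service
# life and the dates below are invented for illustration.
from datetime import date, timedelta

class SafeguardedDevice:
    SERVICE_LIFE = timedelta(days=90)

    def __init__(self, deployed_on):
        self.deployed_on = deployed_on

    def may_arm(self, today):
        # Refuse to arm after expiration, so the device cannot
        # remain lethal indefinitely after its mission ends.
        return today <= self.deployed_on + self.SERVICE_LIFE

device = SafeguardedDevice(deployed_on=date(2016, 7, 1))
print(device.may_arm(date(2016, 8, 1)))   # within service life: True
print(device.may_arm(date(2017, 1, 1)))   # expired: False
```

The point is not the code but the design discipline: the check has to be built in before deployment, because nobody comes back to add it afterward.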

Teaching Computers to Lie

A recent article on the limitations of computer “players” in online games observes that they don’t know how to lie.   No doubt this is true.  Both the detection of lies (which means anticipating them, and in some sense understanding the value of misrepresentation to the other party) and the ability to lie are factors in “gaming.”  This applies both to entertainment games and to “gaming the system”: in sales, tax evasion, excusing failures, whatever.

So here is a simple question: should we teach computers to lie?
(Unfortunately, I don’t expect responses to this question will alter the likely path of game creators, or of others who might see value in computers that can lie.)   I will also differentiate this from using computers to lie.  I can program a computer so that it overstates sales, understates losses, and commits many other forms of fraud.  But in that case it is my ethical/legal lapse, not a “decision” on the part of the computer.