A not-so-flat World – Friedman 2.0

I’m prepping a program on the future and reading through a number of related books that will no doubt result in Technology and Society blog posts down the road. One recent (2016) book is Tom Friedman’s Thank You for Being Late. I’m only part way in, but clearly technology impact considerations are at the top of his list. You may recognize Tom from his prior best seller, The World is Flat, which pointed out how technology had changed the shape of the world. Since that book (2005) the world has changed, significantly. The future is arriving more quickly than he anticipated. Like some other authors, he sees this window of time, in particular from 2007 on, as a “dislocation,” not just a “disruption.” The short take on this is that a disruption just destroys your business (think PCs and minicomputers, cell phones and land lines, cars and horses); it wipes some folks out, but the world keeps puttering along. A dislocation makes EVERYONE sense that they are no longer able to keep up. Friedman suggests the last such dislocation was the advent of the printing press and the subsequent Reformation (which took decades to play out, and only affected the Western world). Today’s dislocation is global, affects almost every activity, and requires our serious attention and consideration.

The title Thank You for Being Late comes from some of Friedman’s contacts showing up late for breakfast, and his realizing that it gave him a few essential minutes to reflect on the deluge of changes and data he had been assimilating for the last few years. A break he suggests we all need.

While there will be a few more posts based on this, I will point out a few essential factors he has surfaced so far:

  • Computing has gone past a tipping point in individual and networked power;
    tasks that were unimaginable even a decade ago (2007) are propagating now.
  • Communications capacity has exploded (AT&T asserts it carries 100,000 times as much traffic as it did before its iPhone exclusive began in 2007 — note that year).
  • The Cloud and Big Data — we can now store everything (and we are), with tools (Hadoop being the leading example) that facilitate analyzing unimaginable volumes of content (since 2007).
  • Access has gone global — along with collaboration — and many other factors.
  • Sensors are everywhere — it is the “Internet of Things,” but more than that, “the machine,” as he calls it, has ears, eyes, and touch (eventually taste and smell) almost everywhere (including every cell phone, etc.).

And all of the pieces of the equation are advancing at accelerating rates, in what he calls the “SuperNova.”

One key point is that the pace of technological change has surpassed our ability to adapt to the changes. A decade ago we might have considered this a generational issue (us old folks unable to keep up with the younger ones: “if you need help with your PC, ask your grandchild”). Today this challenge is penetrating every demographic. It’s not just that the world isn’t flat anymore; it’s that we can no longer grasp sufficient information to identify what shape it is this year, and next year it will be different.

What factors are changing the shape of your world?

Who do you want listening in at your home?

The Wall St. Journal has a note today comparing Amazon’s Echo and Google Home as voice-activated, in-home assistants. This space is fraught with impacts on technology and society — from services that can benefit house-bound individuals, to serious opportunities for abuse by hacking, for commercial purposes, or for governmental ones. To put it in a simple form: you are being asked to “bug your house” with a device that listens to every noise in the house. Of course you may have already bugged your pocket with a device that is listening for the magic words “hey, Siri” (or the person next to you in the office, train, or restaurant may be carrying that “wire”). Robots that respond to “OK Google” or “Alexa” are expanding into our monitored domains. (What do folks named Alexa or Siri have to look forward to in this world? Would you name your child “OK Google”?)

The immediate use cases seem to be a cross between control of the “Internet of Things” and the specific business models of the suppliers: online sales for Amazon, and more invasive advertising for Google. Not only can these devices turn your lights on and off, they can order new bulbs … ones that blink subliminal advertising messages (uh oh, now I’ve given someone a bad idea).
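To make the two business models concrete, here is a minimal sketch, in Python, of how a transcribed voice command might be routed either to a smart-home action or to a purchase. The intents and names below are hypothetical, invented for the example, not any vendor’s actual API.

    def route_command(text):
        # Route a transcribed voice command to a (hypothetical) back-end action.
        text = text.lower()
        if "turn on" in text and "light" in text:
            return ("smart-home", "lights_on")        # Internet of Things control
        if "order" in text and "bulb" in text:
            return ("commerce", "order_light_bulbs")  # the online-sales model
        return ("unknown", None)                      # fall back to search, ads, etc.

    print(route_command("Alexa, turn on the living room lights"))
    print(route_command("OK Google, order new bulbs"))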

From our technology and society perspective we need to look ahead to the pros and cons of these devices. What high-benefit services might be offered? What risks do we run? Are there policy or other guidelines that should be established? Please add your thoughts to the list …

Meanwhile I’m trying to find out why my new car’s navigation system keeps trying to take me to Scotland when I ask “Find McDonald’s”.

 

AI Ethics

A growing area reflecting the impact of technology on society is ethics and AI. This has a few variations… one is what is ethical in terms of developing or applying AI; the second is what is ethical for AIs. (Presumably, for an AI to select an ethical vs. unethical course of action, either it must be programmed that way, or it must learn what is ethical as part of its education/awareness.)

Folks playing in the AI ethics domain include a recent consortium of industry players (IBM, Google, Facebook, Amazon, and Microsoft), the IEEE Standards folks, and the White House (with a recent white paper).

This is a great opportunity for learning about the issues in the classroom, for developing deep background for policy and press folks (concerns will emerge here; consider self-driving cars, robots in warfare or police work, etc.), and of course for informing the general public, where misconceptions and misinformation are likely. We see many movies where evil technology is a key plot device, and get many marketing messages on the advantages of progress. Long-term, informed evolution in this area will require less simplistic perspectives on the opportunities and risks.

There is a one-day event in Brussels on Nov. 15, 2016, that will provide a current view of some of the issues and discussions.

 

Internet Resilience

The Internet is a widespread tool reflecting, to some degree, free speech and freedom of the ‘press’. As such, it is a threat to entities that wish to suppress these, or make them subservient to other priorities. A recent report on DefenseOne.com outlines the ways in which some countries have been able to put an “on-off” switch in place, and use it. The trick is having all or most of the traffic go through a small number of (authorized) intermediate nodes where the plug can be pulled.

Countries like Egypt and China have such bottlenecks. Countries with large numbers of intermediate nodes connected outside the country include Canada, Germany, and the Netherlands. Surprisingly, Russia has a very large number of such connections — explained by the article as a complexity designed to make tracking cyber-crime nearly impossible.
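The chokepoint idea can be illustrated with a minimal sketch; the topology below is invented for the example, not taken from real routing data. If nearly all of a country’s networks reach the outside world through a couple of sanctioned gateways, switching off just those gateways disconnects everyone.

    from collections import deque

    # Hypothetical topology: four domestic ISPs, two state-sanctioned gateways,
    # and "world" standing in for the rest of the Internet.
    links = {
        "isp_a": ["gateway_1"],
        "isp_b": ["gateway_1"],
        "isp_c": ["gateway_2"],
        "isp_d": ["gateway_1", "gateway_2"],
        "gateway_1": ["isp_a", "isp_b", "isp_d", "world"],
        "gateway_2": ["isp_c", "isp_d", "world"],
        "world": ["gateway_1", "gateway_2"],
    }

    def reachable(start, target, removed):
        # Breadth-first search that ignores any switched-off ("removed") nodes.
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node == target:
                return True
            for nxt in links[node]:
                if nxt not in seen and nxt not in removed:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # With both gateways up, every ISP reaches the outside world; pulling the
    # plug on just two nodes isolates the whole (hypothetical) country.
    for off in (set(), {"gateway_1", "gateway_2"}):
        print(sorted(off), [reachable(isp, "world", off) for isp in ("isp_a", "isp_b", "isp_c", "isp_d")])

A country with many independently connected networks, like Canada or the Netherlands, has no such small set of nodes to disable.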

Ethics of Killing with Robots

The recent murder of police officers in Dallas, finally ended by the lethal use of a robot to kill the shooter, has triggered an awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a remotely controlled drone; it simply parallels the current use of drones (and no doubt other devices) as weapons.

We already have examples of “autonomous” devices as well. Mines, whether on land, at sea, or eventually in space, are all devices “programmed to kill” that operate with no human at the trigger. If anything, a lethal frustration with these devices is that they are too dumb, killing long after their intended use.

I saw a comment in one online discussion implying that robots are, or would be, programmed with Asimov’s First Law: “Do not harm humans.” But of course this is neither viable at this time (it takes an AI to evaluate that concept), nor is it a directive that is likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.

Projecting lethal force at a distance may be one of the few remaining uniquely human characteristics (since we have found animals innovating tools, using language, and so forth). Ever since Homo Whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a ‘position of safety’ is essentially the basic design criterion for many weapon systems. Homo Whomever may have also crossed the autonomous Rubicon with the first snare or pitfall trap.

Our challenge is to make sure our systems designers, and those acquiring the systems, have some serious ethical training with practical application. Building in safeguards, expiration dates, decision criteria, etc. should be an essential aspect of lethal autonomous systems design. “Should” is unfortunately the operative word; in many scenarios it is unlikely to happen.

Teaching Computers to Lie

A recent article observes that one limitation of computer “players” in online games is that they don’t know about lying. No doubt this is true. Both the detection of lies (which means anticipating them, and in some sense understanding the value of misrepresentation to the other party) and the ability to lie are factors in ‘gaming’. This applies both to entertainment games and to ‘gaming the system’: in sales, tax evasion, excusing failures, whatever.

So here is a simple question: Should we teach computers to lie?
(Unfortunately, I don’t expect responses to this question will alter the likely path of game creators, or of others who might see value in computers that can lie.) I will also differentiate this from using computers to lie. I can program a computer so that it overstates sales, understates losses, and commits many other forms of fraud. But in that case it is my ethical/legal lapse, not a “decision” on the part of the computer.
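As a toy illustration (invented numbers, not from the article) of why game designers might want this capability: in a simple betting game, an agent that occasionally misrepresents a weak hand can out-perform one that only bets when its hand is genuinely strong, because bluffs win the pots an honest player would simply fold.

    import random

    def play_hand(bluff_prob):
        # One round of a toy betting game; returns the agent's winnings.
        hand = random.random()                  # our hand strength, 0..1
        opponent = random.random()              # opponent's hand strength
        if hand > 0.5:
            bet = True                          # genuinely strong: always bet
        else:
            bet = random.random() < bluff_prob  # weak hand: sometimes lie (bluff)
        if not bet:
            return 0                            # fold and win nothing
        if random.random() < 0.5:               # opponent folds to any bet half the time
            return 1
        return 1 if hand > opponent else -1     # otherwise, showdown

    def average_winnings(bluff_prob, trials=100_000):
        return sum(play_hand(bluff_prob) for _ in range(trials)) / trials

    print("honest agent  :", average_winnings(0.0))
    print("bluffing agent:", average_winnings(0.5))

Detecting lies is the harder half; an opponent that never adjusts its folding behavior keeps paying off the bluffs.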

Internet 3.0?

Steve Case, co-founder of AOL, has a new book out: “The Third Wave: An Entrepreneur’s Vision of the Future.” As a leader in the “First Wave” (remember dial-up modems? … and getting a floppy disk from AOL every month in the mail? That was SO last millennium), Steve has some perspective on the evolution of the net. His waves are:

  1. Building the Internet – companies such as AOL creating infrastructure, peaking circa 2000 (remember the dot-com bubble?)
  2. Apps and services built on top of the net (the currently declining wave)
  3. Ubiquitous, integrated in our everyday lives — touching everything

This seems to ignore a few major ‘game-changers’ as I see it, including the introduction of the Web and browsers, AltaVista/Google for search, and Amazon for retail. But that does not diminish the reality of the social impact of whatever Internet wave we are on at this point. You might tend to align his assertion with the “Internet of Things,” where every light bulb (or other device) has an IP address and can be managed over the net. But Steve points to much broader areas of impact: education, medical care, politics, employment, and, as promised in his title, entrepreneurial success.

Another way to look at this is to ask “what fields, if any, are not being transformed by networked computing devices?” Very few; even technologies that do not incorporate these devices (genetically modified whatever) depend on networked computing at many points in their invention and production.

Steve suggests we need a “new playbook” for this emerging economic reality. I suspect he is only half right. This was the mantra of the Internet Bubble, where generating income was subservient to new ideas, market growth, mind-share, etc. What is clear is that it will be increasingly difficult for existing corporations to recognize, much less invest in, the innovations that will disrupt or destroy their businesses. AOL and my past employer, Digital Equipment, are both examples of companies that had failed transitions, in part due to their momentum in “previous generations” of technology. (AOL continues as a visible subsidiary of Verizon; Digital has been subsumed into HP.) What is happening is that the rate of change is increasing. The challenges associated with this were documented in the 1970s by Alvin Toffler in his book “Future Shock” and its sequels, “The Third Wave,” “Powershift,” and most recently “Revolutionary Wealth” (2006). Toffler’s short form of Future Shock is “too much change in too short a period of time” — a reality that has traction 50 years later.

What examples of disruption do you see coming? (But beware, it’s the ones we don’t see that can get us.)

T&S Magazine December 2015


IEEE Technology and Society Magazine

Volume 34, Number 4, December 2015

Departments

President’s Message
3 Improving Our “Engineering-Crazed” Image
Greg Adamson

Book Reviews
4 Hackers, Geniuses, and Geeks
6 Faxed: The Rise and Fall of the Fax Machine

Editorial
9 Reflecting on the Contribution of T&S Magazine to the IEEE
Katina Michael

Open Letter
15 Technology and Change

Interview
16 On the Road with Rick Sare… and Google Glass

Viewpoint
17 Shakespeare, Social Media and Social Networks
Fernando A. Crespo, Sigifredo Laengle, Paula Baldwin Lind and Víctor Hugo Masías

Leading Edge
20 Corporate Individualism – Changing the Face of Capitalism
László G. Lovászy

23 Multimedia and Gaming Technologies for Telerehabilitation of Motor Disabilities
Andrea Proietti, Marco Paoloni, Massimo Panella, Luca Liparulo and Rosa Altilio

31 MoodTrek – A New App to Improve Mental HealthCare
Ganesh Gopalakrishna and Sriram Chellappan

33 Alternative Planning and Land Administration for Future Smart Cities
Soheil Sabri, Abbas Rajabifard, Serene Ho, Mohammad-Reza Namazi-Rad, and Christopher Pettit

Commentary
36 Pharmaco-Electronics Emerge
Joseph R. Carvalko

41 Blockchain Thinking*
Melanie Swan

63 Information Paradox*
Levent V. Orman

Fiction
54 Held Captive in the Cyberworld
Michael Eldred

Last Word
104 Digitus Secundus: The Swipe
Christine Perakslis

Features
74 The Value of Accountability in the Cloud*
Wouter M.P. Steijn and Maartje G.H. Niezen

83 Shaping Our Technological Futures*
Reihana Mohideen and Rob Evans

88 Driver Distraction from Dashboard and Wearable Interfaces*
Robert Rosenberger

100 Are Technologies Innocent?*
Michael Arnold and Christopher Pearce

*Refereed articles.

On the cover: Blockchain Thinking. English Wikipedia/The Opte Project/Creative Commons Attribution 2.5 Generic license.

T&S Magazine September 2015 Contents


Volume 34, Number 3, September 2015

4 President’s Message
Coping with Machines
Greg Adamson
Book Reviews
5 Marketing the Moon: The Selling of the Apollo Lunar Mission
7 Alan Turing: The Enigma
10 Editorial
Resistance is Not Futile, nil desperandum
MG Michael and Katina Michael
13 Letter to the Editor
Technology and Change
Kevin Hu
14 Opinion
Privacy Nightmare: When Baby Monitors Go Bad
Katherine Albrecht and Liz McIntyre
15 From the Editor’s Desk
Robots Don’t Pray
Eugenio Guglielmelli
17 Leading Edge
Unmanned Aircraft: The Rising Risk of Hostile Takeover
Donna A. Dulo
20 Opinion
Automatic Tyranny, Re-Theism, and the Rise of the Reals
Sand Sheff
23 Creating “The Norbert Wiener Media Project”
J. Mitchell Johnson
25 Interview
A Conversation with Lazar Puhalo
88 Last Word
Technological Expeditions and Cognitive Indolence
Christine Perakslis

SPECIAL ISSUE: Norbert Wiener in the 21st Century

33 Guest Editorial
Philip Hall, Heather A. Love and Shiro Uesugi
35 Norbert Wiener: Odd Man Ahead
Mary Catherine Bateson
37 The Next Macy Conference: A New Interdisciplinary Synthesis
Andrew Pickering
39 Ubiquitous Surveillance and Security
Bruce Schneier
41 Reintroducing Wiener: Channeling Norbert in the 21st Century
Flo Conway and Jim Siegelman
44 Securing the Exocortex*
Tamara Bonaci, Jeffrey Herron, Charles Matlack, and Howard Jay Chizeck
52 Wiener’s Prefiguring of a Cybernetic Design Theory*
Thomas Fischer
60 Norbert Wiener and the Counter-Tradition to the Dream of Mastery
D. Hill
64 Down the Rabbit Hole*
Laura Moorhead

Features

74 Opening Pandora’s 3D Printed Box
Phillip Olla
81 Application Areas of Additive Manufacturing
N.J.R. Venekamp and H.Th. Le Fever

*Refereed article.

T&S Magazine June 2015 Contents


Volume 34, Number 2, June 2015

3 ISTAS 2015 – Dublin
4 President’s Message
Deterministic and Statistical Worlds
Greg Adamson
5 Editorial
Mental Health, Implantables, and Side Effects
Katina Michael
8 Book Reviews
Reality Check: How Science Deniers Threaten Our Future
Stealing Cars: Technology & Society from the Model T to the Gran Torino
13 Leading Edge
“Ich liebe Dich UBER alles in der Welt” (I love you more than anything else in the world)
Sally Applin
Opinion
16 Tools for the Vision Impaired
Molly Hartman
18 Learning from Delusions
Brian Martin
21 Commentary
Nanoelectronics Research Gaps and Recommendations*
Kosmas Galatsis, Paolo Gargini, Toshiro Hiramoto, Dirk Beernaert, Roger DeKeersmaecker, Joachim Pelka, and Lothar Pfitzner
80 Last Word
Father’s Day Algorithms or Malgorithms?
Christine Perakslis

SPECIAL ISSUE—Ethics 2014/ISTAS 2014

31 Guest Editorial
Keith Miller and Joe Herkert
32 App Stores for the Brain: Privacy and Security in Brain-Computer Interfaces*
Tamara Bonaci, Ryan Calo, and Howard Jay Chizeck
40 The Internet Census 2012 Dataset: An Ethical Examination*
David Dittrich, Katherine Carpenter, and Manish Karir
47 Technology as Moral Proxy: Autonomy and Paternalism by Design*
Jason Millar
56 Teaching Engineering Ethics: A Phenomenological Approach*
Valorie Troesch
64 Informed Consent for Deep Brain Stimulation: Increasing Transparency for Psychiatric Neurosurgery Patients*
Andrew Koivuniemi
71 Robotic Prosthetics: Moving Beyond Technical Performance*
N. Jarrassé, M. Maestrutti, G. Morel, and A. Roby-Brami

*Refereed Articles