I’m prepping a program on the future and perusing a number of related books that will no doubt result in Technology and Society blog posts. One recent (2016) book is Tom Friedman’s “Thank You for Being Late”. I’m only part way in, but the impact of technology is clearly at the top of his list. You may recognize Tom from his prior best seller, “The World is Flat”, which pointed out how technology had changed the shape of the world. Since that book (2005) the world has changed significantly; the future is arriving more quickly than he anticipated. Like some other authors, he sees this window of time, in particular from 2007 on, as a “dislocation”, not just a “disruption”. The short take: a disruption just destroys your business (think PCs and minicomputers, cell phones and land lines, cars and horses); it wipes some folks out, but the world keeps puttering along. A dislocation makes EVERYONE sense that they are no longer able to keep up. He suggests the last such dislocation was the advent of the printing press and the subsequent Reformation (which took decades to play out, and affected only the Western world). Today’s dislocation is global, affects almost every activity, and requires our serious attention and consideration.
His title results from some of his contacts showing up late for breakfast, and realizing that it gave him a few essential minutes to reflect on the deluge of changes and data he had been assimilating for the last few years. A break he suggests we all need.
While there will be a few more posts based on this, I will point out a few essential factors he has surfaced so far:
- Computing has passed a tipping point in both individual and networked power; tasks that were unimaginable even a decade ago (2007) are proliferating now.
- Communications capacity has exploded (AT&T asserts it carries 100,000 times as much traffic as before its 2007 iPhone exclusive; note that year).
- The Cloud and Big Data: we can now store everything (and we are), with tools (Hadoop being the leading example) that facilitate analyzing this previously unimaginable volume of content. (since 2007)
- Access has gone global – along with collaboration — and many other factors.
- Sensors are everywhere. It is the “internet of things”, but more than that: “the machine”, as he calls it, has ears, eyes, and touch (eventually taste and smell) almost everywhere, including every cell phone.
And all of the pieces of this equation are advancing at accelerating rates, in what he calls the “SuperNova”.
One key point is that the pace of technological change has outstripped our ability to adapt. A decade ago we might have considered this a generational issue, with us old folks unable to keep up with the younger ones (“if you need help with your PC, ask your grandchild”). Today this challenge is penetrating every demographic. It’s not just that the world isn’t flat anymore; it’s that we can no longer grasp sufficient information to identify what shape it is this year, and next year it will be different.
What factors are changing the shape of your world?
This new book, Irresistible: The Rise of Addictive Technology, points to a challenge that may be hitting a tipping point. It is not a surprise that we find various of our tech-toys addictive in various ways. Nor is it surprising that there is a business incentive to have folks “hooked” on your toy rather than someone else’s. But are we moving toward “maximally effective addiction”? The classic experiments with “the wire”, which allows rats (or people) to stimulate the brain’s pleasure centers directly, show that this can result in addictive, potentially fatal behavior. Presumably, to the extent possible from a basis of external sensory input, technology will move toward this point. With the addition of fairly comprehensive individual analysis, AI-driven analysis and expanded virtual reality capabilities will approach this maximally effective endpoint. The only business constraint may be the resulting loss of a revenue-generating consumer. Is this the direction we are headed? And what might prevent our reaching that point?
The U.S. government recently announced sanctions targeting Syrian scientists (and no doubt engineers; newspapers are not clear on the difference). Presumably the individuals targeted are ones involved in engineering chemical weapons, which contravene the international Chemical Weapons Convention. So here is my question: is the development of these weapons or their precursors (specified in the convention) unethical?
While this is an issue for chemical engineers, it also overlaps with IEEE space in various ways. The IEEE Code of Ethics calls for members to “accept responsibility in making decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment.” The Code does not specifically address nationally or internationally “illegal” activities. It seems that this class of weapons might endanger the public and the environment. I note that the Code specifies only “the public” as a concern, which allows for weapons that might endanger other classes of persons, such as criminals and enemy combatants. And of course many professionals in IEEE fields are involved in the creation of weapon systems, often working for nation states or their contractors, and this is “business as usual”. Ideally, none of these weapons would ever be used, presuming the absence of crime or combat (one can hope). A related question is the context of such development: if an individual is fairly sure the device will not be used, is that different from a situation where they are fairly sure it will be?
But the crux of the issue is what is expected of an ethical engineer in a case such as that of Syria? Going to “management” to present the concern that the work might endanger the public or environment would be a career (or life) limiting action. Going to the Organisation for the Prohibition of Chemical Weapons that is responsible for related inspections/determinations could be difficult, treasonous, and life threatening. Should the IEEE Code of Ethics specifically include illegal or treaty violations as a designated consideration? And what might IEEE do about identified violations?
United Airlines has been having its problems since recently ejecting a passenger to facilitate crew members getting to their next flight. As the Wall St. Journal article points out, this is a result (in part) of employees following a fairly strict rule book, i.e. an algorithm. In many areas, from safety to passenger relations, United has rules to follow, and employee (i.e. human) discretion is reduced or eliminated. It is somewhat ironic that the employees who made the decisions that led up to this debacle could have been fired for not taking this course of action. But how does this relate to Technology and Society?
There are two immediate technology considerations that become apparent. The first is automated reporting systems. No doubt the disposition of every seat, passenger, and ticket is tracked, along with who made what decisions. This means that employees not following the algorithm will be recorded, and may be detected and reported. In the good old days a supervisor could give a wink and a smile to an employee who broke the ‘rules’ but did the right thing. Nowadays the technology is watching, and increasingly the technology is comparing the data with history, rule books, and other data.
The second aspect is “gate attendant 2.0”: when we automate these humans out of their jobs, or into less responsible “face-keepers” (i.e. persons present only to provide a human face to the customer while all of the actual work and decisions are automated, akin to the term “place-keeper”). Obviously, if there is a “rule book”, it will be asserted in the requirements for the system, and exact execution of the rules can be accomplished. It is possible that passengers will respond differently if a computerized voice or system is informing them of their potential removal, realizing there is no “appeal”. However, it is also possible that an AI system spanning all of an airline’s operations, aware of all flight situations and of past debacles like this one, may have more informed responses. The airline might go beyond the simple check-in, frequent flyer, and TSA passenger profile to Facebook, credit-score, and other data in making the decisions on whom to “bump”. One can envision bumping passengers with lower credit ratings, or whose Facebook psychological profiles indicate that they are mild-mannered reporters, or shall we say “meek”.
The ethics programmed into gate-attendant 2.0 are fairly important. They will reflect the personality of the company, the prejudices of the developers, the wisdom of the deep-learning processes, and the cultural narratives of all of the above.
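To make the concern concrete, here is a minimal, purely hypothetical sketch of what a “gate attendant 2.0” bump-selection rule could look like. Every field name, weight, and data source below is invented for illustration; no airline is known to use this logic. The point is that once such a rule is encoded, it runs uniformly and invisibly, and any bias in the weights is applied with no human discretion in the loop.

```python
# Hypothetical "gate attendant 2.0" bump-scoring rule (all names and
# weights are invented for illustration, not any airline's actual logic).
from dataclasses import dataclass


@dataclass
class Passenger:
    name: str
    frequent_flyer_tier: int  # 0 = none, 3 = top tier
    credit_score: int         # hypothetical third-party data
    complaint_history: int    # prior complaints filed by this passenger


def bump_score(p: Passenger) -> float:
    """Lower score = more likely to be bumped under this invented rule.

    Note how each weight silently encodes a value judgment: loyalty is
    rewarded, low credit is punished, and "squeaky wheels" are kept aboard.
    """
    return (p.frequent_flyer_tier * 100
            + p.credit_score / 10
            + p.complaint_history * 50)


def choose_bump(passengers: list[Passenger]) -> Passenger:
    # Deterministically selects the minimum-score passenger; there is
    # no appeal step and no supervisor's wink anywhere in this function.
    return min(passengers, key=bump_score)
```

A reader can see at a glance that the “meek” passenger (low tier, low credit, no complaints) is always selected first, which is exactly the kind of embedded prejudice the paragraph above warns about.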
Presumably we will reach a tipping point when Intelligent Devices surpass humans in many key areas, quite possibly without our being able to understand what has just happened; a variation of this is called “the singularity” (coined by Vernor Vinge, and heralded by Ray Kurzweil). How would we know we have reached such a point? One indicator might be increased awareness, concern, and discussion about the social impact of AIs. There has been a significant increase in this activity in the last year, and even in the last few months. Here are some examples for those trying to track the trend (of course Watson, Siri, Google Home, Alexa, Cortana, and their colleagues already know this).
A significant point made by Harari is that Artificial Intelligence does not require Artificial Consciousness. A range of purpose-built AI systems can individually have significant impact on society without rising to what the IEEE Ethics project refers to as “Artificial Generalized Intelligence”. This means that jobs, elections, advertising, online/phone service centers, weapons systems, vehicles, book/movie recommendations, news feeds, search results, online dating connections, and so much more will be (or are being) influenced or directed by combinations of big data, personalization, and AI.
What concerns or opportunities do you see in this ‘brave new world’?
A recent anthology of “climate fiction”, Loosed Upon the World, projects climate change forward some years into dystopian scenarios. The editor, John Joseph Adams, asserts “Fiction is a powerful tool … perhaps [we can] humanize and illuminate the issue in ways that aren’t as easy to do with only science and cold equations.”
I have been an advocate of near-term science fiction, which I refer to as predictive fiction, as a tool to explore the “what if” scenarios that may result from technology, hopefully allowing us to avoid the negative impacts. Unfortunately this particular anthology is dealing with a current trajectory that is more an exploration of “when, what then?”
But some of the basic issues that we technologists face enter the spotlight, albeit one we may not like. In the foreword, Paolo Bacigalupi has a painful message for us techies (many of whom fall into his category of “Techno-optimists”): “Engineers don’t grow up thinking about building a healthy soil eco-system, or trying to restore some estuary, … to turn people into better long-term planners, or better educated and informed citizens, or creating better civic societies.” I don’t fully agree with Paolo; it is more accurate to state that “engineers don’t get paid to …” and perhaps “the project requirements do not address …” And occasionally, we have technologists who resist the corporate momentum and try to get their employer to “do the right thing”. SSIT seeks to honor such courage with the “Carl Barus Award for Outstanding Service in the Public Interest” (nominations always welcomed).
But back to the future, I mean the fiction. Paolo also observes “… imaginative literature is mythic. The kinds of stories we build, the way we encourage people to live into those myths and dream the future — those stories have power. Once we build this myth that the rocket-ship and the techno-fix is the solve for all our plights and problems, that’s when we get ourselves in danger. It’s the one fantasy that almost certainly guarantees our eventual self-destruction.”
I suspect we need a good dose of reality, perhaps in the guise of predictive fiction.
VIZIO is reportedly paying fines for using users’ TVs to track their viewing patterns in significant detail, as well as associating this with IP address data including age, sex, income, marital status, household size, education level, home ownership, and home values.
It has been clear that “free” media of all kinds (and many paid channels), whether TV, cable, radio, or Internet streaming, want to track this information. On one hand they can use it to provide “a better user experience” (showing you the ads and suggested programs that match your demographics); the flip side is that selling your data to third parties (a.k.a. ‘trusted business partners’) so they can be more effective at interacting with you is also part of the game.
Now let’s step it up a notch. Your TV (or remote control) may use voice recognition, often using “mother ship” resources for AI analysis of what you have requested. That is, your voice is sent back to servers that interpret and respond. This leads to another level of monitoring: some of your characteristics might be inferred from your voice, and others from background sounds or voices, and even more if the recording device just happens to track you all the time. “Siri, are you listening in again?” And then add a camera; now the fun can really start.
London haptic researchers have developed a device that attaches to a cell phone and allows remote persons to kiss, as described in an IEEE Spectrum article. And since “a picture is worth a thousand words”:
No doubt a wider range of haptic appliances will follow. A major US phone company used to have the slogan “reach out and touch someone”, perhaps our mobile devices are headed that way.
It should be noted that an early, if not the first, instance of a physical attack on a person carried out by online means has occurred: social media was used to trigger an epileptic seizure. This concept has surfaced in science fiction, notably in Neal Stephenson’s Snow Crash (which also inspired the creation of Google Earth). In that novel, persons exposed to an attack while in virtual reality become comatose.
With the Internet of Things, and the potential for projecting “force” (or at least damage-causing light and sound) over the network, a new level of abuse, and of needed protection, is emerging. One key in this particular case, and into the future, might be requiring that true identity be disclosed as a criterion for accepting content over the net.
“Alexa, tell me, in your own words, what happened on the night in question.” Actually the request is more like “Alexa, please replay the dialog that was recorded at 9:05 PM for the jury.” The case is in Bentonville, Arkansas, and the charge is murder. Since an Echo unit was present, Amazon has been asked to disclose whatever information might have been captured at the time of the crime.
Amazon indicates that the Echo keeps less than sixty seconds of recorded sound, so it may not have that level of detail; presumably, however, a larger database of requests and responses exists for the night in question as well. Amazon has provided some data about purchase history, but is waiting for a formal court document before releasing any additional information.
This raises the question of how Amazon might respond to apparent sounds of a crime in progress. “Alexa, call 911!” is pretty clear, but what about “Don’t shoot!” (or other phrases that might be ‘real’ or ‘overheard’ from a movie in the background)? An interesting future awaits us.