While this is an issue for chemical engineers, it also overlaps with IEEE's space in various ways. The IEEE Code of Ethics calls for members to “accept responsibility in making decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment.” Notably, the Code does not single out nationally or internationally illegal activities. It seems that this class of weapons might endanger both the public and the environment. I note that IEEE specifies only “the public” as a concern, which allows for weapons that might endanger other classes of persons, such as criminals and enemy combatants. And of course many professionals in IEEE fields are involved in the creation of weapon systems, often working for nation states or their contractors; this is “business as usual”. Ideally, none of these weapons would be used, presuming the absence of crime or combat (one can hope). A related question is the context of such development: if an individual is fairly sure the device will not be used, is that different from a situation where they are fairly sure it will be used?
But the crux of the issue is what is expected of an ethical engineer in a case such as that of Syria. Going to “management” to present the concern that the work might endanger the public or the environment would be a career-limiting (or life-limiting) action. Going to the Organisation for the Prohibition of Chemical Weapons, which is responsible for the related inspections and determinations, could be difficult, treasonous, and life-threatening. Should the IEEE Code of Ethics specifically include illegal activities or treaty violations as a designated consideration? And what might IEEE do about identified violations?
United Airlines has been having its problems since recently ejecting a passenger to make room for crew members who needed to reach their next flight. As the Wall Street Journal article points out, this is (in part) a result of employees following a fairly strict rule book, i.e., an algorithm. In many areas, from safety to passenger relations, United has rules to follow, and employee (i.e., human) discretion is reduced or eliminated. It is somewhat ironic that the employees who made the decisions that led up to this debacle could have been fired for not taking this course of action. But how does this relate to technology and society?
Two immediate technology considerations become apparent. The first is the automated reporting systems. No doubt the disposition of every seat, passenger, and ticket is tracked, along with who made what decisions. This means that employees not following the algorithm will be recorded, and may be detected and reported. In the good old days a supervisor could give a wink and a smile to an employee who broke the ‘rules’ but did the right thing. Nowadays the technology is watching, and increasingly the technology is comparing the data with history, rule books, and other data.
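The kind of automated oversight just described can be sketched in a few lines. This is a hypothetical illustration: the rule names, log fields, and logged decisions below are all invented, but they show how mechanically a system can turn "employee discretion" into a flagged exception.

```python
# Hypothetical sketch of an automated compliance audit. A rule book maps each
# situation to its one prescribed action; a batch job then flags every logged
# decision that deviated from it. All names and records here are invented.

RULE_BOOK = {
    "oversold": "deny_boarding_lowest_fare",
    "crew_reposition": "deny_boarding_lowest_fare",
}

decision_log = [
    {"agent": "a101", "situation": "oversold",
     "action": "deny_boarding_lowest_fare"},
    {"agent": "a102", "situation": "crew_reposition",
     "action": "offer_voucher_and_rebook"},
]

def flag_deviations(log, rules):
    """Return every logged decision that differs from the prescribed action."""
    return [entry for entry in log
            if rules.get(entry["situation"]) not in (None, entry["action"])]

flagged = flag_deviations(decision_log, RULE_BOOK)
# Agent a102 exercised human discretion, and is now a line in a compliance report.
```

The wink-and-smile supervisor has no place in this loop: the deviation exists as data whether or not any human chooses to act on it.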
The second aspect is “gate attendant 2.0”: what happens when we automate these humans out of their jobs, or into less responsible “face-keepers” (i.e., persons present only to provide a human face to the customer while all of the actual work and decisions are automated, akin to the term “place-keeper”). Obviously, if there is a “rule book”, it will be asserted in the requirements for the system, and exact execution of the rules can be accomplished. It is possible that passengers will respond differently if a computerized voice or system is informing them of their potential removal, realizing there is no “appeal”. However, it is also possible that an AI system spanning all of an airline's operations, aware of all flight situations and of past debacles like this one, may have more informed responses. The airline might go beyond the simple check-in, frequent-flyer, and TSA passenger profile to Facebook, credit-score, and other data in making the decisions on whom to “bump”. One can envision bumping passengers with lower credit ratings, or whose Facebook psychological profiles indicate that they are mild-mannered reporters, or shall we say “meek”.
The ethics programmed into gate-attendant 2.0 are fairly important. They will reflect the personality of the company, the prejudices of the developers, the wisdom of the deep-learning processes, and the cultural narratives of all of the above.
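To make that concern concrete, here is a minimal sketch of what a biased "gate attendant 2.0" selection might look like. Every field, weight, and data source below is a hypothetical stand-in (not any airline's actual method); the point is that whatever weights the developers choose quietly encode a value judgment about who bears the cost of overbooking.

```python
# Hypothetical "who gets bumped" ranking. The fields, weights, and data
# sources are invented for illustration; note how the outside data sources
# (credit score, complaint history) tilt the outcome toward the "meek".

from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    fare_paid: float          # revenue at risk if bumped
    frequent_flyer_tier: int  # 0 = none, 3 = top tier
    credit_score: int         # an "outside" data source, per the scenario above
    complaint_history: int    # prior complaints filed with the airline

def bump_score(p: Passenger) -> float:
    """Higher score = bumped first. All weights are arbitrary illustrations."""
    score = 0.0
    score -= p.fare_paid * 0.05            # protect high-revenue tickets
    score -= p.frequent_flyer_tier * 20.0  # protect loyal customers
    score += (850 - p.credit_score) * 0.1  # penalize lower credit ratings
    score -= p.complaint_history * 15.0    # avoid passengers likely to push back
    return score

passengers = [
    Passenger("A", fare_paid=900.0, frequent_flyer_tier=3,
              credit_score=800, complaint_history=2),
    Passenger("B", fare_paid=220.0, frequent_flyer_tier=0,
              credit_score=640, complaint_history=0),
]
bumped = max(passengers, key=bump_score)  # the low-fare, low-credit, quiet passenger
```

Nothing in the code announces a prejudice; it is all just "optimization", which is exactly why the choice of inputs and weights deserves ethical scrutiny.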
Presumably we will reach a tipping point when intelligent devices surpass humans in many key areas, quite possibly without our ability to understand what has just happened. A variation of this is called “the singularity” (coined by Vernor Vinge, and heralded by Ray Kurzweil). How would we know we have reached such a point? One indicator might be increased awareness, concern, and discussion about the social impact of AIs. There has been a significant increase in this activity in the last year, and even in the last few months. Here are some examples for those trying to track the trend (of course Watson, Siri, Google Home, Alexa, Cortana, and their colleagues already know this).
A significant point made by Harari is that artificial intelligence does not require artificial consciousness. A range of purpose-built AI systems can individually have significant impact on society without reflecting what the IEEE ethics project refers to as “Artificial Generalized Intelligence”. This means that jobs, elections, advertising, online/phone service centers, weapons systems, vehicles, book/movie recommendations, news feeds, search results, online dating connections, and much more will be (or are being) influenced or directed by combinations of big data, personalization, and AI.
What concerns and opportunities do you see in this ‘brave new world’?
A recent anthology of “climate fiction”, Loosed Upon the World, projects climate change forward some years into dystopian scenarios. The editor, John Joseph Adams, asserts, “Fiction is a powerful tool … perhaps [we can] humanize and illuminate the issue in ways that aren’t as easy to do with only science and cold equations.”
I have been an advocate of near-term science fiction, which I refer to as predictive fiction, as a tool to explore the “what if” scenarios that may result from technology, hopefully allowing us to avoid the negative impacts. Unfortunately, this particular anthology deals with a current trajectory that makes it more an exploration of “when, what then?”
But some of the basic issues that we technologists face enter the spotlight, albeit one we may not like. In the foreword, Paolo Bacigalupi has a painful message for us techies (many of whom fall into his category of “Techno-optimists”): “Engineers don’t grow up thinking about building a healthy soil eco-system, or trying to restore some estuary, … to turn people into better long-term planners, or better educated and informed citizens, or creating better civic societies.” I don’t fully agree with Paolo; it is more accurate to say that “engineers don’t get paid to …”, and perhaps “the project requirements do not address …”. And occasionally we have technologists who resist the corporate momentum and try to get their employer to “do the right thing”. SSIT seeks to honor such courage with the “Carl Barus Award for Outstanding Service in the Public Interest” (nominations always welcomed).
But back to the future, I mean the fiction. Paolo also observes: “… imaginative literature is mythic. The kinds of stories we build, the way we encourage people to live into those myths and dream the future — those stories have power. Once we build this myth that the rocket-ship and the techno-fix is the solve for all our plights and problems, that’s when we get ourselves in danger. It’s the one fantasy that almost certainly guarantees our eventual self-destruction.”
I suspect we need a good dose of reality, perhaps in the guise of predictive fiction.
VIZIO is reportedly paying fines for using users’ TVs to track their viewing patterns in significant detail, as well as associating this with IP-address data including age, sex, income, marital status, household size, education level, home ownership, and home values.
It has been clear that “free” media of all kinds (and many paid channels), across TV, cable, radio, Internet streaming, etc., want to track this information. On one hand they can use it to provide “a better user experience” (showing you the ads and suggested programs that match your demographics); on the flip side, selling your data to third parties (a.k.a. ‘trusted business partners’) so they can be more effective at interacting with you is also part of the game.
Now let’s step it up a notch. Your TV (or remote control) may use voice recognition, often using “mother ship” resources for the AI analysis of what you have requested. That is, your voice is sent back to servers that interpret and respond. This leads to another level of monitoring: some of your characteristics might be inferred from your voice, others from background sounds or voices, and even more if the recording device just happens to track you all the time. “Siri, are you listening in again?” And then add a camera … now the fun can really start.
For those of us who have been enjoying the antics of 007, aka James Bond (and those of us in the real world who have been providing technology that helps our covert entities accomplish their missions), it is worthwhile to note that Alex Younger, head of the UK’s MI6 agency (which of course does not exist), indicates that Bond’s personality and activities do not meet their ethical standards.
“It’s safe to say that James Bond wouldn’t get through our recruitment process and, whilst we share his qualities of patriotism, energy and tenacity, an intelligence officer in the real MI6 has a high degree of emotional intelligence, values teamwork and always has respect for the law… unlike Mr Bond.”
A number of technologists are called upon to support covert, military, or police organizations in their countries. There is some comfort in thinking that such entities, including MI6 (yes, it is real), have some level of ethical standards they apply. That does not exempt an individual from applying their own professional and other standards in their work as well.
A growing area reflecting the impact of technology on society is ethics and AI. This has a few variations: one is what is ethical in terms of developing or applying AI; the second is what is ethical for AIs. (Presumably, for an AI to select an ethical versus an unethical course of action, either it must be programmed that way, or it must learn what is ethical as part of its education/awareness.)
This is a great opportunity for learning: about the issues in the classroom; for developing deep background for policy and press folks, where concerns will emerge (consider self-driving cars, robots in warfare or police work, etc.); and for the general public, where misconceptions and misinformation are likely. We see many movies where evil technology is a key plot device, and we get many marketing messages on the advantages of progress. The long-term challenge of informed evolution in this area will require less simplistic perspectives on the opportunities and risks.
Technology & Society has touched on this a few times: RFID implants in people. The WSJ has an update worth noting. My new car uses RFID chips to open the doors and start the ignition. Having these “embedded” could be of value… but what if I buy a different car? The article lists electronic locks as one application, along with embedded medical history, contact information, etc. Your “RFID constellation” (credit cards, ID cards, keys, etc.) can identify you uniquely, for example as you enter a store. So the ‘relationship’ between your RFID and the intended devices goes beyond that one-to-one application.
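The uniqueness of such a constellation is easy to illustrate. The tag IDs below are invented, and the technique (an order-independent hash over all tags seen together) is just one plausible way a reader network could link sightings without ever learning a name:

```python
# Sketch of "constellation" tracking: even if no single RFID tag names you,
# the set of tags you carry together is effectively unique, so the same
# fingerprint reappearing at different readers links your visits. All tag
# IDs here are invented.

import hashlib

def constellation_fingerprint(tag_ids):
    """Order-independent hash of all tag IDs observed at the same moment."""
    canonical = ",".join(sorted(tag_ids))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

monday = {"card:4a1f", "transit:0077", "keyfob:c3d9"}
friday = {"keyfob:c3d9", "card:4a1f", "transit:0077"}  # same person, new reader
someone_else = {"card:9b2e", "transit:0410"}

same = constellation_fingerprint(monday) == constellation_fingerprint(friday)
different = constellation_fingerprint(monday) != constellation_fingerprint(someone_else)
```

No opt-in, no account, no consent step is needed on the store's side; the cards in your pocket broadcast the linkage for you.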
An ethical issue raised was that of consent when embedding RFID in a person who may not be able to provide consent, but would benefit from the ID potential, lock access (or denial), etc. An obvious example is tracking a dementia patient who leaves the facility. Of course, we already put on wristbands that are difficult to remove, and these might contain RFID or other locating devices.
What applications might cause you to embed a device under your skin? What concerns do you have about possible problems/issues?