Updating a Declaration

It appears that humanity’s great challenge for this century is to extend cooperative human values and institutions to autonomous technology for the greater good.—Steve Omohundro, Autonomous Technology and The Greater Human Good [1]

Sensors and data have the capacity to easily take control of our lives. Soon we’ll be able to analyze the emotions, facial expressions, or data of anyone we interact with in business and social situations. While there will be great knowledge in this new cultural landscape, there’s also great opportunity for emotional paralysis. How do you talk to someone when every word and emotion is analyzed? How do you form relationships of trust when you’re worried about every thought and response? If we don’t evolve values outside the context of this type of surveillance, our tracked words and actions will form the basis for preferences leading to a homogenized humanity.

This is why values play such an essential role in the creation of Artificial Intelligence (AI). If we can’t identify the human values we hold dear, we can’t program them into our machines. Values don’t just provide perspective on a person’s life — they provide specificity. What morals do you feel are absolute? What spiritual, mental, or emotional qualities drive your actions?

If we were to create Humanity’s Manifesto, what would it be? How could we codify such a treatise on human values so that others could put it to use?

One idea would be to model an Artificial Intelligence Ethics Protocol after something like the United Nations’ Universal Declaration of Human Rights [2]. Adopted in 1948, the Declaration came about after the Second World War, when the international community came together to try to guarantee the universal rights of individuals. The drafting process took almost two years, and the document contains Articles that could certainly be emulated for AI values, such as, “Everyone has the right to life, liberty, and security of person,” or, “No one shall be subjected to torture or to cruel, inhuman, or degrading treatment or punishment.”

Where these values become difficult to implement, however, is in their lack of clarity on specifics. For instance, how should we define “liberty” with regard to programming machines? Would that imply allowing an autonomous program the freedom to iterate on its own, outside the confines of existing laws? Or does it refer to a machine’s human operators, who must retain “liberty” of control over their machines? As another example, what’s “degrading” for a machine? Porn? Insider trading schemes?

Hernán Santa Cruz of Chile, a member of the drafting subcommittee, had a response to the adoption of the Declaration that’s quite fascinating to reflect on with regard to Artificial Intelligence and ethics:

I perceived clearly that I was participating in a truly significant historic event in which a consensus had been reached as to the supreme value of the human person, a value that did not originate in the decision of a worldly power, but rather in the fact of existing — which gave rise to the inalienable right to live free from want and oppression and to fully develop one’s personality [3].

Let’s unpack this a bit.

  1. The Supreme Value of a Human Person. According to Santa Cruz, this value originates “in the fact of existing.” In the context of AI, this mirrors questions of human consciousness. Does an autonomous machine deserve the rights typically afforded a human simply because it exists? That would mean Google’s self-driving cars or militarized AI robots should be given those rights today. Or would a machine have to pass the Turing test to gain its rights, fooling at least thirty percent of its human judges into believing it was a person? Or perhaps the machine would need to be self-aware enough to know it exists before gaining these rights, like the androids in Philip K. Dick’s Do Androids Dream of Electric Sheep?, the basis for the cult classic film Blade Runner. Whatever the case, the notion of a “human person” may soon be outdated or considered bigotry, so defining specific human qualities for AI ethics today is of paramount importance.
  2. A Value that Did Not Originate in the Decision of a Worldly Power. Whereas the value of a human person may originate outside of the context of any particular worldly power, autonomous machines are currently being created in multiple countries and jurisdictions around the world. This means any AI ethics standards would have to account for multicultural views about the nature of their existence. As always, the element of money clouds these issues, as there’s such an opportunity for profit in the field of robotics and AI in the coming years. This means we need to separate the notion of Value as profit from Values as human/moralistic guidelines.
  3. To Fully Develop One’s Personality. This phrase from 1948 is helpful because of its specificity about a universal value. If it’s possible to provide an environment where people can live free from want and oppression, Santa Cruz felt it was an inalienable right for those people to fully develop their personalities. Applied to an algorithm, would this tenet again imply that programs should be left alone to develop autonomously, outside of regulation? Or, as I believe, will the potential presence of multiple or even universal algorithms in our lives prevent us from naturally developing our human personalities?

While my initial vision for this chapter was to create an “Artificial Intelligence Ethics Manifesto,” I’ve come to realize it’s impossible to simply provide a list of ten rules for all AI programmers to follow to ensure human values are imbued into robots and machines. We need a new Universal Declaration of Human Rights created in the context of how we’re going to live with autonomous machines. And while any declaration along these lines wouldn’t be focused on mandating morals per se, it would reflect the moral ideas of the people who created it.

Difficulties in the Design

In my interview for this book with Kate Darling, Research Specialist at the MIT Media Lab, we spoke a great deal about the issue of ethics in AI programming. Her legal expertise and background in social robotics make her particularly suited to understand the difficulties involved in creating standards for a field as vast as AI. One of the greatest challenges she points out is how siloed academic disciplines can be from one another within the same institution, and from the world at large:

This isn’t the fault of anyone building the robots. The message (within academia) is, “We don’t want to restrict innovation. Let people build this stuff, then people doing the social sciences should work out the regulation after it exists.” After I worked at M.I.T. for a while, I realized there are very simple design decisions that can be made early on that set standards for later on that are very hard to change. You want people building these robots to at least have privacy and security in the back of their minds. But you bring these things up to them, and they often say, “Oh. I should have thought about that.” It’s a general problem that the disciplines are too siloed off from each other and there’s no cross-pollination.

Providing the cross-pollination Darling mentions is a much simpler way to tackle AI ethics than creating a Universal Standard of some kind. Fortunately, these issues have taken on greater prominence since revered figures like Elon Musk and Stephen Hawking have expressed concerns [4] over AI that have gained attention in the mainstream press.

Organizations within the AI industry have also tackled ethical issues for years, and in January of 2015 the Association for the Advancement of Artificial Intelligence (AAAI) [5] even held its First International Workshop on AI and Ethics [6] in Austin, TX. Titles of talks at the event directly addressed issues of ethics, like the one presented by Michael and Susan Leigh Anderson, Towards Ensuring Ethical Behavior from Autonomous Systems: A Case-Supported Principle-Based Paradigm [7]. The Andersons’ hypothesis focuses on gaining the consensus of groups of ethicists regarding scenarios where autonomous systems are likely to be used. Similar to the idea of observing human behavior that I wrote about for the fictional company Moralign, the Andersons’ proposal makes a great deal of sense; as they point out in the abstract of their paper, “we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another.” Once these agreements are reached and codified, they could begin to lay the basis for Principles leading to Ethical Standards or Best Practices.
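
To make the Andersons’ approach concrete, here is a minimal, hypothetical sketch of the general idea: ethicists label a handful of cases, and a small, transparent classifier induces a candidate “principle” that can be read and debated before it is codified. The feature names, cases, and consensus labels below are invented for illustration and do not come from the Andersons’ paper, which uses its own formal representation.

```python
# A toy illustration of inducing an inspectable principle from ethicist-labeled
# cases (not the Andersons' actual system). All features, cases, and labels are
# invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each case: [harm_risk, urgency, consent_given]; label 1 = system may act, 0 = defer to a human.
cases = [
    [0.9, 0.8, 0],
    [0.2, 0.9, 1],
    [0.1, 0.3, 1],
    [0.8, 0.9, 1],
    [0.3, 0.7, 0],
]
ethicist_consensus = [0, 1, 1, 0, 0]

principle = DecisionTreeClassifier(max_depth=2, random_state=0)
principle.fit(cases, ethicist_consensus)

# The induced rule is human-readable, so it can be reviewed like any other principle.
print(export_text(principle, feature_names=["harm_risk", "urgency", "consent_given"]))
```

The point is not this particular classifier but the workflow: consensus cases go in, and an inspectable rule comes out that ethicists can accept, amend, or reject.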

A significant difficulty in creating any ethical standards for AI comes with the nature of something James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era [8], calls “the inscrutability paradox.” The basic concept has to do with the difference between what Barrat refers to as “designed” and “evolved” systems. Designed systems feature transparent programming, where humans write all the code, allowing for easier testing and scrutiny regarding ethical concerns. Evolved systems refer to programming built on genetic algorithms or to hardware driven by neural networks. Even before AGI (or sentient AI) is reached, self-perpetuating algorithms are far from uncommon. Analyzing these programs to allow for human intervention invokes the inscrutability paradox when their self-directed behavior can no longer be accounted for. The AI involved, even if intended to be “friendly” in nature, cannot be broken down for ethical analysis because it has evolved beyond human intervention since it was developed. As Barrat notes, “This means that instead of achieving a humanlike superintelligence, or ASI, evolved systems or subsystems will ensure an intelligence whose ‘brain’ is as difficult to grasp as ours: an alien. That alien brain will evolve and improve itself at computer, not biological, speeds.”
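
To illustrate Barrat’s distinction, here is a toy sketch built around an invented braking task. The “designed” policy is written out as rules a human can audit; the “evolved” one behaves similarly but arrives at its behavior through random mutation and selection, leaving only opaque weights behind.

```python
# A toy contrast between Barrat's "designed" and "evolved" systems.
# The braking task, features, and training data are invented for illustration.
import random

def designed_policy(speed, distance):
    """Designed: every rule is written down and can be audited line by line."""
    if distance < 10 or speed > 30:
        return "brake"
    return "cruise"

def evolve_policy(training_data, generations=500):
    """Evolved: weights found by mutation and selection; the numbers produce
    behavior but explain nothing on inspection."""
    def act(w, speed, distance):
        return "brake" if w[0] * speed + w[1] * distance + w[2] > 0 else "cruise"

    def fitness(w):
        return sum(act(w, s, d) == label for s, d, label in training_data)

    weights = [0.0, 0.0, 0.0]
    for _ in range(generations):
        candidate = [w + random.gauss(0, 0.5) for w in weights]
        if fitness(candidate) >= fitness(weights):
            weights = candidate
    return weights, act

training = [(40, 5, "brake"), (20, 50, "cruise"), (35, 8, "brake"), (10, 60, "cruise")]
weights, act = evolve_policy(training)
print("evolved weights:", weights)                 # opaque numbers, not rules
print(act(weights, 35, 8), "vs", designed_policy(35, 8))
```

The designed version can be audited in seconds; the evolved one can only be probed from the outside, which is the inscrutability Barrat describes, in miniature.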

This is no trivial matter. In simple terms, it means that turning off systems whose programming directs them to stay on may become impossible. Not because an evil spirit has overtaken an operating system, but because the program is maximizing its efficiency through a logic its programmers would not be able to ascertain.

“Being cynical, for ethics within all branches of technology most researchers think of themselves as ethical and think about people writing about ethics as either being obvious or pompous. So in a field like AI, where it’s very difficult to build intelligent systems, the concern that your system might be too intelligent and pose a risk hasn’t been too high on people’s agenda.” This is a quote from an interview I did with Stuart Russell, author of one of the most widely used textbooks on AI, Artificial Intelligence: A Modern Approach [9]. I quoted Russell earlier, where, as a reminder, he said, “instead of pure intelligence, we need to build intelligence that is provably aligned with human values” [10].

I am greatly encouraged to hear that this thought leader feels AI programming needs to be aligned with human values even for “unintelligent AI systems.” This means, as he notes, changing the goals of the field so they incorporate the perspective human values provide, beyond the general notion of intelligence.

A recent event involving Facebook algorithms provides a great example of what I mean regarding the importance of aligning human values with algorithms. In December of 2014, Facebook introduced a feature called Year In Review that allowed users to see their top pictures and posts from the past twelve months, based on friends’ clicks and likes. The algorithm creating the feature also took a person’s pictures and posted them within a holiday frame featuring happily dancing cartoon characters. On Christmas Eve, 2014, Eric Meyer, author and founder of the web design conference An Event Apart, wrote a post for his blog called “Inadvertent Algorithmic Cruelty.” As it turns out, Facebook’s Year in Review feature posted a picture of Meyer’s daughter in his feed, not recognizing that she had passed away six months before. Meyer’s response points out the need for ethical reasoning behind the systems already driving so much of our lives:

Algorithms are essentially thoughtless. They model certain decision flows, but once you run them, no more thought occurs. To call a person “thoughtless” is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves [10].

By definition, algorithms are thoughtless and heartless. It doesn’t make sense to try to define code by human values in this regard, since they’re completely separate paradigms. This is why Stuart Russell’s work on Inverse Reinforcement Learning (IRL) is so compelling. During our discussion, I mentioned the well-known example from AI ethics of an algorithm designed to make paper clips. While the program seems harmless enough in and of itself, if it were set to create paper clips at all costs, it might harness electrical power from nearby buildings or hoard other natural resources needed by humans in order to satisfy its primary directive. Rather than a simplistic “AI machine gone rogue” scenario, the example is used to show the importance programming plays in designing autonomous machines.

In our interview, however, Russell points out that goals for humans exist within the context of how we’ve already lived our lives up to the point we receive a new goal. “When I tell a human to make paper clips, that’s not what I mean. I want you to make paper clips in the context of all the other goals you’ve ever been given and that everyone else takes for granted within the spectrum of morals and goals we all have.” This is why Russell feels there should be companies that construct representations of human values, including this concept of people’s backgrounds, that would recognize the layers of ethics, laws, and morals we all naturally take for granted. As a caveat for AI designers along these lines, Russell often uses the example of an AI cooking program that is designed to cook dinner at all costs but has not been programmed with certain morals: “If there’s nothing in the fridge,” he noted in our interview for Heartificial Intelligence, “the last thing you want your robot to do is put the cat in the oven. That’s cooking dinner—what’s wrong with that?”
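
What follows is a drastically simplified, hypothetical sketch of the intuition behind IRL, not Russell’s actual formulation: instead of being handed a goal, the program adjusts reward weights until the actions humans actually chose score higher than the alternatives. The “cook dinner” actions, features, and demonstrations are invented for illustration.

```python
# A toy, perceptron-style sketch of the intuition behind Inverse Reinforcement
# Learning (not Russell's formulation). Actions, features, and demonstrations
# are invented for illustration.

ACTIONS = ["cook_with_groceries", "cook_the_cat", "order_takeout", "do_nothing"]

def features(action):
    # [dinner_ready, ingredients_used_properly, pet_harmed]
    table = {
        "cook_with_groceries": [1, 1, 0],
        "cook_the_cat":        [1, 0, 1],
        "order_takeout":       [1, 0, 0],
        "do_nothing":          [0, 0, 0],
    }
    return table[action]

def score(weights, action):
    return sum(w * f for w, f in zip(weights, features(action)))

# Humans repeatedly choose to cook with groceries; nobody ever writes "don't cook the cat."
demonstrations = ["cook_with_groceries"] * 3

weights = [0.0, 0.0, 0.0]
for _ in range(50):
    for chosen in demonstrations:
        rival = max((a for a in ACTIONS if a != chosen), key=lambda a: score(weights, a))
        if score(weights, chosen) <= score(weights, rival):  # the human's choice should win
            weights = [w + fc - fr for w, fc, fr in
                       zip(weights, features(chosen), features(rival))]

print(weights)  # harming the pet ends up with a negative weight that was never stated explicitly
```

Real IRL operates over sequential decisions and far richer representations, but the core move is the same: the values are inferred from observed behavior rather than enumerated one rule at a time by a programmer.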

I think the notion of IRL that Russell is working on provides a solid methodology for creating a set of Ethical Standards for the AI Industry. Like the work of the Andersons mentioned above, it suggests that by observing how autonomous systems respond to and interact with humans, we can more easily determine how people should be treated than by simply philosophizing about future scenarios. It’s also encouraging to note that Russell believes more experts are beginning to address ethical issues, as AI is already having such a huge impact on society. However, the problems of silos and vested interests will still have to be dealt with to ensure human values are universally reflected: “AI is a technology where the people who develop it aren’t necessarily the people who use it and the people who use it have the best interests of their shareholders or their secretary of defense at heart. Outcomes may not be what the human race would want if we all put our heads together.”

Crowdsourcing Control

AJung Moon [11] is a Ph.D. candidate studying human-robot interaction and roboethics in the Mechanical Engineering Department at the University of British Columbia (UBC). She is also founder of the Open Robo-ethics Initiative (ORI) [12], an organization that allows a multidisciplinary community to crowdsource people’s opinions on ethical and moral issues surrounding emerging technology. Crowdsourcing and collaboration happen on the site, where visitors can suggest polls based on ethical questions for the community to vote on, such as issues around autonomous vehicles or elderly care bots. Her work and the crowdsourcing model provide a pragmatic and compelling way to examine issues of ethics in Artificial Intelligence.

I interviewed AJung about an experiment she conducted featuring a delivery robot and a set of scenarios involving how the robot would deal with humans while waiting for an elevator. In the video featuring the experiment [13], viewers are presented with multiple scenarios mirroring the types of ethical decisions we currently make while waiting for an elevator in a crowded building. As Moon explains in the paper [14] describing the experiment: “The goal of this work is to provide an example of a process in which a collection of stakeholder discussion contents from an online platform can provide data that captures acceptable social and moral norms of the stakeholders. The collected data can then be analyzed and used in a manner suitable to be implemented onto robots to govern robot behaviors.” In other words, Moon believes standards for AI and robots can be created by crowdsourcing humans’ aggregate opinions about specific situations to form an ethical framework that can be adopted by designers.
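
As a minimal, hypothetical sketch of the aggregation step Moon describes, the snippet below tallies poll responses per scenario and turns the majority preference into a default robot behavior. The scenario names, options, and votes are invented and are not ORI data.

```python
# A toy sketch of turning crowdsourced poll responses into default robot
# behaviors, one scenario at a time. Scenarios, options, and votes are invented.
from collections import Counter

poll_responses = {
    "elevator_with_wheelchair_user": ["yield", "yield", "enter_anyway", "yield", "ask_first"],
    "elevator_nearly_full":          ["wait_for_next", "enter", "wait_for_next", "wait_for_next"],
}

def crowd_norms(responses):
    """Return the majority-preferred action for each scenario, with its level of support."""
    norms = {}
    for scenario, votes in responses.items():
        action, count = Counter(votes).most_common(1)[0]
        norms[scenario] = (action, count / len(votes))
    return norms

for scenario, (action, support) in crowd_norms(poll_responses).items():
    print(f"{scenario}: default to '{action}' ({support:.0%} of respondents)")
```

A real deployment would also need thresholds for contested scenarios, a way to represent minority views, and the kind of cultural localization discussed below.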

Her work and the community’s polls are fascinating with regard to the complexity and depth of human ethics we need to examine in light of AI technology. For instance, in the video series showing the large delivery robot next to a person in a wheelchair, most of us would likely react as the machine does, offering to take the next elevator. But would the person in the wheelchair see this as a form of condescension? And how should the robot respond to the person in the wheelchair in a country where women are treated as second-class citizens? Will manufacturers provide a “human base level” set of ethics that can then be iterated based on local culture in different countries? Taking part in ORI’s polls is an excellent way to begin to understand and empathize with the complexity of decisions faced by AI manufacturers while also confronting the urgent need for individuals to become ethically self-aware. As AJung noted in our interview: “Looking at a very simple, daily life decision scenario, we can come to a consensus so we can program a type of human and democratic decision making into the system. Our purpose of conducting these polls is to involve the general public in learning what kinds of things people value.”

The Effect of the Ethics

Technology already exists that can measure facial expressions as a proxy for emotion. Sensors in and outside of our bodies will soon be able to enhance the capabilities of algorithms running Facebook or the other services we use throughout the day. To some degree, we’ve all had experiences with machines or software like the one Meyer described regarding his deceased daughter. This is because we’re still able to differentiate between machines or poorly designed algorithms and the humans in our lives. But this is a finite era. While we may recognize the glitches associated with newer technologies, we also tend to forget the multiple times we’ve responded to the voice of our GPS as if it were a real person, or the reverence we direct toward our mobile devices. Our values regarding the ubiquity of technology in our lives have already fundamentally shifted. Now, with artificial intelligence on the rise, we get the unique opportunity to decide what parts of our humanity we feel are worth automating. Or not.

This process is about much more than standards or regulations. I’m not interested in creating a set of rules just for the sake of clarity or legal purposes. If we are truly at the end of the human era or at a point in our evolution where machines may gain a prominence in our lives like never before, now is a great time to illuminate our Manifesto of Humanity.

When I interviewed Steve Omohundro for this book to discuss his ideas on ethics and AI, I closed our conversation with a final question I often like to ask interviewees: “What’s the question nobody asks you that you wish they would?” I do this because experts like Omohundro often get asked similar questions based on their most popular theories, and I always wonder what’s on their minds that they think journalists may have missed. Here’s what he had to say: “I don’t see a lot of people asking, ‘What is human happiness?’ or ‘What is the model of a human society?’ People often wonder if AI is going to kill them, but they don’t think about the fact that if we had a clear vision of what we’re going for with these bigger types of questions, then we could shape technology based on the vision of where humanity should go.”

Here is a summary of the primary ideas from this excerpt:

  • Human Values Should Be Central in the Creation of Artificial Intelligence. Ethics as an afterthought won’t work in the widespread adoption of autonomous systems. “Evolved” AI programs that won’t allow human intervention make ethical standards useless unless incorporated at the very earliest stages of development. As Stuart Russell noted, these values-based directives should also be applied to nonintelligent systems to shift the goal of the AI industry from seeking generalized “intelligence” to outcomes that can be provably aligned with human values.
  • It’s Time to Break the Silos. While it’s common for silos to exist within academia, researchers, programmers, and the companies funding their work have an ethical responsibility to break these barriers in the production of AI. In the same way sociologists adhere to standards in how they create surveys or other research involving human volunteers, developers need to apply similar criteria to the machines or algorithms they’re creating that directly connect with human users.
  • AI Needs to Incorporate Ethics By Design. Whether it’s IRL or a different methodology, issues regarding Values and Ethics need to be a standard starting point for AI developers in academia and the corporate sector around the world.

ACKNOWLEDGMENT

This article is an excerpt from Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, Tarcher/Penguin, 2016, pp. 157–168; reprinted with permission of the author.

Author

John C. Havens is Executive Director of The Global Initiative for Ethical Considerations in the Design of Autonomous Systems and author of Heartificial Intelligence. Email: John.Havens.US@ieee.org.

Acknowledgment

Thank you to Steve Omohundro for alerting the author to the document in [2].