Coping with Machines


Computers tell us what to do, every day, all the time. Is this a problem? I may get irritated when a traffic light tells me to stop when there is no other traffic, or when my microwave alerts me that it has finished and then reminds me again and again. I just write these off as poor design decisions.

As the stakes rise, so do the problems. In 1999, after a nine-month journey, the $327 million Mars Climate Orbiter was lost on arrival at Mars because one piece of software supplied thruster data in imperial units while another expected metric. Very expensive, and an unfortunate setback to science, but not life threatening.
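To make that failure mode concrete, here is a minimal sketch in Python. It is purely illustrative, not the actual flight software; the function names and values are invented for the example. The point is that when quantities travel as plain numbers, nothing flags imperial data flowing into metric code:

```python
# Illustrative sketch only: plain floats carry no unit information,
# so nothing stops imperial data from flowing into metric code.

LBF_S_TO_N_S = 4.448222  # one pound-force second expressed in newton-seconds

def reported_impulse_lbf_s() -> float:
    """Hypothetical ground software: reports thruster impulse in lbf*s."""
    return 100.0

def apply_impulse_n_s(impulse_n_s: float) -> float:
    """Hypothetical navigation software: assumes the value is in N*s."""
    return impulse_n_s

raw = reported_impulse_lbf_s()
wrong = apply_impulse_n_s(raw)                 # silently off by a factor of ~4.45
right = apply_impulse_n_s(raw * LBF_S_TO_N_S)  # explicit, auditable conversion
print(f"wrong: {wrong:.1f} N*s, right: {right:.1f} N*s")
```

Encoding units in the type system, or at least in value names and mandatory conversion functions, turns this kind of silent corruption into an error that can be caught before launch.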

As things get more complicated, the decisions get more convoluted. Should a self-driving car cause a crash that will kill its driver rather than hit a group of pedestrians? This is a variant of the “trolley” problem in ethics. (A more practical issue may be the ethical basis on which we already accept the deaths of more than a million people, and injuries to tens of millions more, in automotive accidents each year.)

This is moving into the territory of the near future. Ken Jennings’ tongue-in-cheek response to losing Jeopardy! to the Watson machine in 2011 was: “I for one welcome our new computer overlords.” Some suggest that if computers become “smarter” than people, we should simply let them take over. While this may become an ethical question in the future, for the moment we face a related challenge: we are not smart enough to program machines to do exactly what we want them to do.

As machines become more complicated, especially as they become self-learning, we can no longer easily predict what will happen. Unanticipated errors have existed since the first computers. Now we have two further challenges.

First, we are going to need to apply all our intelligence to keep up with machines if we are to minimize unexpected negative consequences. The arrival of machine self-learning makes this all the more pressing.

Second, while technologists can understand this problem, for many in the wider community technology is a black box, treated with undeserved suspicion or trust. To avoid a crisis of trust, we have an obligation to let the public know that we don’t have all the answers. This may be an uncomfortable position in our role as experts, but it is a necessary one.

Some of these points will be the subject of Thinking Machines in the Physical World, a conference in Melbourne, Australia, July 13-15, 2016: http://21stcenturywiener.org. I encourage you to attend.

Author

Greg Adamson, 2016 IEEE-SSIT President, is with the School of Engineering, University of Melbourne, Australia. Email: g.adamson@ieee.org.