Robot Friends

The Wall St. Journal has a piece, “Your Next Friend Could Be a Robot,” which is about a device in your home, not a disembodied associate on Facebook. The initial example is “embodied” as a speaker/microphone in Amazon’s Echo Dot, but the article also covers similar devices from Google, cell phones, and even Toyota.  So what?

Folks, the article focuses on a 69-year-old woman living alone who has a relationship with these devices.  They are connected to the Internet 24/7, with a back-end AI voice recognition/response system. (The article asserts it’s not AI because it’s not conscious, which is a different consideration.) … Apparently “double digit” percentages of interactions with Alexa (Amazon’s non-AI personality) are “non-utilitarian,” presumably not triggering orders for Amazon products.

The good news: folks feel less lonely, more connected, and have “someone” there 24/7 — responding to queries (with pre-programmed answers) such as “what are the laws of robotics” — see Reddit’s list of fun questions.  … but

The bad news: it’s not clear what happens when you tell Alexa to call 911, or that you have fallen down and can’t get up, etc.  While there are “wake-up” and “sleep” words you can use, the very fact that a wake-up word can be recognized indicates that a level of 24/7 monitoring is in place.  No doubt this can be hacked, tapped, and otherwise abused.
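
To make the monitoring point concrete, here is a minimal sketch (mine, not Amazon's) of how a wake-word device is typically structured: every frame of audio must be examined locally just to notice the wake word, and only after a match does anything get streamed to the cloud back end. All function names here are hypothetical placeholders.

```python
# A minimal sketch, NOT Amazon's implementation: capture_audio_frame,
# matches_wake_word, sleep_word_heard, and stream_to_cloud are
# hypothetical placeholders. The structural point: the device examines
# every audio frame locally to notice the wake word at all; only after
# a match does audio go to the back-end service.

def capture_audio_frame() -> bytes:
    """Grab the next short chunk of microphone audio (stub)."""
    return b""

def matches_wake_word(frame: bytes) -> bool:
    """Local, on-device pattern match for the wake-up word (stub)."""
    return False

def sleep_word_heard(frame: bytes) -> bool:
    """Local check for the 'sleep' word that ends a session (stub)."""
    return False

def stream_to_cloud(frame: bytes) -> None:
    """Ship audio to the back-end recognition/response system (stub)."""
    pass

def listen_forever() -> None:
    awake = False
    while True:                                # the microphone never stops sampling
        frame = capture_audio_frame()
        if not awake:
            awake = matches_wake_word(frame)   # 24/7 local monitoring
        else:
            stream_to_cloud(frame)             # cloud sees only post-wake audio
            if sleep_word_heard(frame):        # the "sleep" word ends the session
                awake = False
```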

What is Amazon’s liability if you tell Alexa you need help and no effective response occurs?  No doubt time and lawsuits will tell.

Ethics of Killing with Robots

The recent murder of police officers in Dallas, finally terminated by the lethal use of a robot to kill the shooter, has triggered an awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a remotely controlled drone; it simply parallels the current use of drones (and no doubt other devices) as weapons.

We already have examples of “autonomous” devices as well. Mines, on land and at sea (and eventually in space), are all devices “programmed to kill” that operate with no human at the trigger. If anything, a lethal frustration with these devices is that they are too dumb, killing long after their intended use has passed.

I saw a comment in one online discussion implying that robots are or would be programmed with Asimov’s first law: “Do not harm humans.” But of course this is neither viable at this time (it takes an AI to evaluate that concept) nor is it a directive that is likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.

Projecting lethal force at a distance may be one of the few remaining distinguishing characteristics of humans (since we have found animals innovating tools, using language and so forth). Ever since Homo Whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a ‘position of safety’ is essentially the basic design criterion for many weapon systems.  Homo Whomever may have also crossed the autonomous Rubicon with the first snare or pit-fall trap.

Our challenge is to make sure our systems designers and those acquiring the systems have some serious ethical training with practical application.  Building in safeguards, expiration dates, decision criteria, etc. should be an essential aspect of lethal autonomous systems design. “Should” is unfortunately the operative word; in many scenarios it is unlikely to happen.

Teaching Computers to Lie

A recent article on the limitations of computer “players” in online games notes that they don’t know about lying.   No doubt this is true.  Both the detection of lies (which means anticipating them, and in some sense understanding the value of misrepresentation to the other party) and the ability to use this capability are factors in ‘gaming’.  This applies both to entertainment games and to ‘gaming the system’ — in sales, tax evasion, excusing failures, whatever.

So here is a simple question: Should we teach computers to lie?
(Unfortunately, I don’t expect responses to this question to alter the likely path of game creators, or of others who might see value in computers that can lie.)   I will also differentiate this from using computers to lie.  I can program a computer so that it overstates sales, understates losses, or commits many other forms of fraud.  But in that case the ethical/legal lapse is mine, not a “decision” on the part of the computer.
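
For a sense of what a computer "deciding" to lie could look like, here is a toy sketch of an agent in a simplified betting game that bluffs, betting as if it held a strong hand, whenever its own expected-value estimate says the misrepresentation pays. The game, the thresholds, and the function names are all invented for illustration; nothing here comes from the article.

```python
# A toy sketch, invented for illustration: the agent "lies" by betting
# with a weak hand whenever its own expected-value estimate favors the
# bluff. The deception is chosen by the program at run time rather than
# being hard-coded by the programmer for a specific fraud.

import random

def bluff_expected_value(pot: float, bet: float, fold_probability: float) -> float:
    """EV of betting a weak hand: win the pot if the opponent folds,
    lose the bet if they call (a weak hand is assumed to lose a showdown)."""
    return fold_probability * pot - (1 - fold_probability) * bet

def choose_action(hand_strength: float, pot: float, bet: float,
                  fold_probability: float) -> str:
    """Return 'bet' or 'check'. A strong hand bets on its merits; a weak
    hand bets (bluffs) only when the misrepresentation looks profitable."""
    if hand_strength > 0.7:
        return "bet"                          # honest bet, no deception
    if bluff_expected_value(pot, bet, fold_probability) > 0:
        return "bet"                          # deception chosen by the agent
    return "check"

if __name__ == "__main__":
    random.seed(1)
    for _ in range(5):
        strength = random.random()            # hidden hand quality, 0..1
        print(f"hand={strength:.2f} ->",
              choose_action(strength, pot=10, bet=4, fold_probability=0.5))
```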

Killer Robots (again?)

The International Joint Conference on Artificial Intelligence in July announced an open letter on autonomous weapons, an Open Letter from AI & Robotics Researchers, which has probably broken the 20,000-signature mark by now. (Wouldn’t you like your name on a letter signed by Stephen Hawking and Elon Musk, among other impressive figures?)    This touches on the cover topic of SSIT’s Technology and Society Magazine issue of Spring 2009, whose cover image just about says it all.

The topic of that issue was lethal robots.  The letter suggests that letting AI software decide when to initiate fatal actions is not a good idea.  Specifically: “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

Unfortunately, I can’t think of any way to actually prevent the development of such systems by organizations that want to pursue the activities listed above, for which killer robots are ideally suited.  Perhaps you have some thoughts?  How can we go beyond declaring these “not beneficial” and actually discourage their development? Or is that even possible?

SSIT is a sponsor of a new IEEE Collaboratec community on CyberEthics and CyberPeace. I encourage you to join this community (which is not limited to IEEE members) and contribute to a discussion there.

Humans in a Post-Employment World?

There are many sources suggesting that productivity (including robotics and A.I. interfaces) will increase enough to have a significant impact on future employment worldwide.   These include:

Geoff Colvin, in his new ‘underrated’ book, suggests that even in a world where most if not all jobs can be done by robots, humans are social animals and will prefer human interactions in some situations.  The Atlantic focuses on what the future may hold for jobless persons when joblessness is the norm: “The Jobless don’t spend their time socializing or taking up new hobbies. Instead they watch TV or sleep.”  A disturbing vision of a world which already includes, according to this article, 16% of American men ages 25-54.  The article did not discuss the potential for younger men who see limited future opportunity to turn to socially problematic activities, from crime and drugs to radicalization and revolution.

As with any challenge, the first step is recognizing there is a problem. This may be more difficult in the U.S., where work is equated with status, personal identity (“I am a <job title here>”), and social responsibility.  One suggestion is the creation of civic centers where folks can get together and “meet, learn skills, bond around sports or crafts, and socialize.” These might be combined with maker-spaces and start-up incubators that become a catalyst for creator-consumer-funder collaborations.

So — what’s your future “job”? Will you be in the “on-demand” economy?  Perhaps engaging in the maker-world? How might this future differ in various countries? Will Europe or India or ?? yield different responses to a situation that is expected to affect global economies over this century?

Apocalypse Deterrence

The Center for the Study of Existential Risk (CSER.org) at Cambridge (U.K.) is focusing on how to protect humanity from the downside(s) of technology.  By “Existential” they are not referring to Camus, but to the elimination of Homo Sapiens — i.e. our existence.

Their concerns include the question of AIs that might have both sufficient power and motivation to disrupt humanity, and genetic engineering that could either make us obsolete, or get out of hand and make us extinct.

Who Cares? … well some fairly knowledgeable folks are involved, including:

  • Stephen Hawking
  • Jaan Tallinn
  • Elon Musk
  • George Church

I suspect that some SSIT folks may find it useful to monitor CSER’s newsletter and consider how their concerns and issues relate to SSIT’s activities — grist for the mill, as it were.


Eavesdropping Barbie?

So should children have toys that combine speech recognition and a wi-fi connection to capture and respond to what they say, potentially recording their conversations as well as feeding them “messages”?  Welcome to the world of Hello Barbie.

Perhaps I spend too much time thinking about technology abuse … but let’s see.  There are political/legal environments (think 1984 and its current variants) where capturing voice data from a doll/toy/IoT device could be used as a basis for arrest and jail (or worse) — can Barbie be called as a witness in court? And of course there are the “right things to say” to a child, like “I like you” (dolls with pull strings do that), and things you may not want to have your doll telling your child (“You know I just love that new outfit” or “Wouldn’t I look good in that new Barbie-car?”) or worse (“Your parents aren’t going to vote for that creep, are they?”).

What does a Hello Barbie doll do when a child is clearly being abused by a parent?  Can it contact 9-1-1?  Are the recordings available for prosecution?  What level of abuse warrants action?  And what liability exists for failure to report abuse?

Update: Hello Barbie is covered in the NY Times 29 March 2015 Sunday Business section, wherein it is noted that children under 13 have to get parental permission to enable the conversation system — assuming they understand the implications. Apparently children need to “press a microphone button on the app” to start interaction. Also, “parents … have access to … recorded conversations and can … delete them,” which confirms that a permanent record is kept until parental action triggers deletion. Finally, we are assured of “safeguards to ensure that stored data is secure and can’t be accessed by unauthorized users.”  Apparently Mattel and ToyTalk (the technology providers) have better software engineers than Home Depot, Target and Anthem.
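
Reading between the lines of that description, the interaction flow might look roughly like the sketch below: parental consent gates the feature, audio is captured one push-to-talk exchange at a time, and every exchange is retained until a parent explicitly deletes it. This is speculation on my part; the class and method names are hypothetical, not Mattel's or ToyTalk's actual design.

```python
# A speculative sketch of the flow the article describes; all names are
# hypothetical and this is not the vendors' actual design. Consent gates
# the feature, audio is captured per push-to-talk exchange, and the
# record persists until a parent deletes it.

from datetime import datetime, timezone

class ConnectedToySession:
    def __init__(self, parental_consent: bool):
        self.parental_consent = parental_consent
        self.stored_recordings = []            # retained until a parent acts

    def push_to_talk(self, audio_clip: bytes) -> str:
        """One exchange, triggered by the app's microphone button."""
        if not self.parental_consent:
            return ""                          # conversation feature stays disabled
        reply = self._cloud_respond(audio_clip)
        self.stored_recordings.append({
            "captured_at": datetime.now(timezone.utc),
            "audio": audio_clip,
            "reply": reply,
        })
        return reply

    def parent_delete_all(self) -> None:
        """Only explicit parental action clears the stored record."""
        self.stored_recordings.clear()

    def _cloud_respond(self, audio_clip: bytes) -> str:
        return "I like you!"                   # canned reply placeholder
```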

Hard Hitting Robots (Not) — and Standards

ISO is developing standards for contact between collaborative robots and humans working in close proximity, which raises the question of how hard a robot can hit you, legally. Of course this also raises concerns in industry about liability, workplace-safety legislation, etc.

There is nothing new here in reality.  Humans have long worked in collaborative environments with machines, animals, and even other humans. In the U.S., some of the first workplace limitations were actually triggered by animal-cruelty legislation applied to child labor. And of course, as our experience with different types of equipment has increased, so has the sophistication of workplace protections.   Industry leaders working in these areas should be proactive in helping to set standards, both to have a voice in the process and to protect workers.  Well-considered standards provide protection for employers as well as workers.  Of course, when an insufficient diversity of perspectives establishes the standards, they can end up imbalanced.

In my own experience, which involved significant work with standards (POSIX at the IEEE and ISO levels), industry is wise to invest in getting a good set of standards in place, and users/consumers are often under-represented.  IEEE and IETF are two places where technologists can participate as individuals, which in my experience provides better diversity. ISO operates by “countries,” with some national positions led by corporate interests, others by academics, and some by government agencies. In general I suspect we get better standards out of the diversity possible in forums like IEEE — but with industry hesitant to fund participation by individual technologists, these forums may lack sufficient resources.

One of these days, you may get a pat on the back from a robot near you. Perhaps even for work well done in the standards process.

Robots, Jobs, Satisfaction and Innovation

An IEEE Spectrum podcast with Henrik Christensen of Georgia Tech asserts that robots are not destroying jobs.  This counters an earlier assertion by Rice University Professor Moshe Vardi that “by 2045, machines will be able to do if not any work that humans can do, then a very significant fraction of the work that humans can do.”

Christensen admits that earlier phases of technology made some jobs obsolete, but at the same time created new opportunities: farmers moved into factories, factory workers into Walmart, and so forth. There are some serious problems with this model, even if it is true. First, humans are motivated by things other than “jobs” (once they have enough to eat and a place to stay), so simply moving from “X” to “X + 1” employees does not assure job satisfaction.  There are some who would argue that the farmer was far more satisfied than the factory worker.  Dan Pink asserts that Autonomy, Mastery and Purpose are the real motivators, and one can ask how these are satisfied by the transition of careers over time. In his recent book, The Future, Al Gore argues that RoboSourcing will decrease available jobs on a global basis … something the world has not previously experienced.

In the United States, we are seeing economic growth via significantly increased productivity with a much smaller increase in employment.  In my state of New Hampshire, we have had less impact in terms of job loss, but the job growth has been in lower-paying jobs.  Perhaps most painfully, the jobs in New Hampshire have been operational, not innovational.  Even good-paying jobs in management or accounting cannot match engineering and technology for innovation.  This is important, since the quality of future jobs grows from the soil of existing ones.  Some years ago our largest employer was Digital Equipment (don’t just think VAX; Digital also created the AltaVista search engine, the first 64-bit chip, and StrongARM, a high-performance precursor of the processors in handhelds and smartphones — all innovations capitalized upon by others, and not in New England) — now the largest employer is Walmart.  Needless to say, we are setting the foundation for a quite different future at this point.

If Vardi is right, it is not clear that in 2045 we will find robots and machine intelligence doing the innovation, but clearly they will be doing much of the operational work.  If AIs emerge into the domain of innovation and consciousness, then it is not clear they will be subservient to human needs.  So we can expect that humans will be the key source of innovation to meet human needs for the foreseeable future.

This does not assure “full employment” as we currently define it.  It is possible that robots will free humans to pursue many activities previously considered “leisure” … for example farming … oh, excuse me, gardening, or art, or other creative pursuits.  However, this also assumes that people will have their basic needs met (food, shelter, etc.), which would permit these pursuits — and the economic models in most countries today seem to involve increased income for management and shareholders and increased unemployment for the workforce.  Countries with large populations of unemployed, hungry young adults tend to have significant instabilities.  Oddly, revolutions meet all of Pink’s motivational factors: Autonomy, Mastery and Purpose (perhaps a bit weak on mastery until they have been active for a while).

One social consideration for the evolution of technology is aligning the future of employment and the economic system to increase satisfaction rather than dissatisfaction.  Do you expect to be satisfied with our future?