Call for Papers – Robotics and Social Implications – Joint Special Issue

IEEE Technology and Society Magazine and IEEE Robotics and Automation Magazine are pleased to announce a Joint Special Issue for March 2018.

Due dates for authors are as follows:

1 May 2017: Submission deadline
1 August 2017: First decision communicated to authors
20 November 2017: Final acceptance decision communicated to authors
10 December 2017: Final manuscripts uploaded by authors

Additional information about each call for papers is available below. For further inquiries, please email Katina Michael at: katina@uow.edu.au.

#1: Robotics and Social Implications in IEEE Technology and Society Magazine (M-T&S).

Guest Editors: Ramona Pringle (Ryerson University), Diana Bowman (Arizona State University), Meg Leta Jones (Georgetown University), and Katina Michael (University of Wollongong)

Robots have been used in a variety of applications, from healthcare to industrial automation. For repetitive tasks, robots offer accuracy and consistency. Robots don’t get tired; although they require maintenance, they can operate 24×7, even if stoppages in process flows occur frequently due to a variety of external factors. It is a fallacy, however, that robots require no human inputs and can literally run on their own without much human intervention. And yet there is a fear surrounding the application of robots, fueled mostly by sensational media reports and the science fiction genre. Anthropomorphic robots have also caused a great deal of concern for consumer advocate groups who take the singularity concept very seriously.

It is the job of technologists to dispel myths about robotics, and to raise awareness of, and in so doing robot literacy about, the reachable limits of artificial intelligence imbued in robots and the positive benefits that can be gained from future developments in the field. This special issue will focus on the prospects for robot applications in non-traditional areas and the plausible intended and unintended consequences of such a trajectory.

Engineers working in sensor development, artificial consciousness, component assembly, and visual and aesthetic artistry are encouraged to join colleagues from across the disciplines (philosophers, sociologists and anthropologists, humanities scholars, experts in English and creative writing, journalists and communications specialists) in responding to this call. Multidisciplinary teams of researchers are invited to submit papers addressing pressing socio-ethical issues, in order to provide input on how to build more robust robotics that address citizen concerns. For example:

  • How can self-driving cars make more ethical decisions?
  • How can co-working with robots become an acceptable practice for humans?
  • How might there be more fluent interactions between humans and robots?
  • Can drones have privacy-by-design incorporated into their controls?

This issue calls for strategic-level and high-level technical design papers with a social science orientation, written for a general audience. The issue encourages researchers to reflect on the socio-ethical implications stemming from their developments, and on how these might be discussed with the general public.

Visit the IEEE Technology and Society Magazine submission portal.

#2: Socio-ethical Approaches to Robotics Development in IEEE Robotics and Automation Magazine

Guest Editors: Noel Sharkey (University of Sheffield), Aimee van Wynsberghe (University of Twente), John C. Havens (The Global Initiative for Ethical Concerns in the Design of Autonomous Systems), and Katina Michael (University of Wollongong).

Converging approaches adopted by engineers, computer scientists, and software developers have brought together niche skillsets in robotics for the purposes of a complete product, prototype, or application. Some robotics developments have been met with criticism, especially those of an anthropomorphic nature or those involving collaborative tasks with humans. Given the emerging role of robotics in our society and economy, there is an increasing need to engage social scientists and, more broadly, humanities scholars in the field. In this manner, we can further ensure that robots are developed and implemented with consideration of the socio-ethical implications they raise.

This call for papers supposes that, more recently, projects have brought on board personnel with multidisciplinary backgrounds to ask the all-important “what if” and “what might be” questions at the time the initial idea generation is occurring, in order to achieve a human-centered design. Drawing these approaches into the “design” process means that areas of concern to the general public are addressed. These might include issues surrounding consumer privacy, citizen security, individual trust, acceptance, control, safety, fear of job loss, and more.

In introducing participatory practices into the design process, preliminary results can be reached to inform the developers of the way in which they should consider a particular course of action. This is not to halt the freedom of the designer, but rather to consider the value-laden responsibility that designers have in creating things for the good of humankind, independent of their application.

This call seeks novel research results, demonstrated on working systems, that take a multidisciplinary approach and incorporate technological solutions responding to socio-ethical issues. Ideally, each RAM paper would be complemented by a paper submitted in parallel to T&SM that investigates the application from a socio-ethical viewpoint.

Visit The IEEE Robotics and Automation Magazine submission portal.

Humans, Machines, and the Future of Work

De Lange Conference X on Humans, Machines, and the Future of Work
December 5-6, 2016 at Rice University, Houston, TX
For details, registration, and more, see http://delange.rice.edu/


  • What advances in artificial intelligence, robotics, and automation are expected over the next 25 years?
  • What will be the impact of these advances on job creation, job destruction and wages in the labor market?
  • What skills are required for the job market of the future?
  • Can education prepare workers for that job market?
  • What educational changes are needed?
  • What economic and social policies are required to integrate people who are left out of future labor markets?
  • How can we preserve and increase social mobility in such an environment?


AI Ethics

A growing area reflecting the impact of technology on society is the ethics of AI. This has a few variations: one is what is ethical in developing or applying AI; the second is what is ethical for AIs. (Presumably, for an AI to select an ethical versus an unethical course of action, either it must be programmed that way, or it must learn what is ethical as part of its education/awareness.)

Folks playing in the AI ethics domain include a recent consortium of industry players (IBM, Google, Facebook, Amazon, and Microsoft), the IEEE Standards folks, and the White House (with a recent white paper).

This is a great opportunity for learning about the issues in the classroom and for developing deep background for policy and press folks, where concerns will emerge (consider self-driving cars, robots in warfare or police work, etc.), and of course for the general public, where misconceptions and misinformation are likely. We see many movies in which evil technology is a key plot device, and we get many marketing messages on the advantages of progress. The long-term challenge of informed evolution in this area will require less simplistic perspectives on the opportunities and risks.

A one-day event in Brussels on Nov. 15, 2016 will provide a current view of some of these issues and discussions.


Self Driving Car Ethical Question

There is a classical ethical question, “The Trolley Problem,” which has an interesting parallel in the emerging world of self-driving vehicles. The original problem posits a situation where five persons will be killed if you do not take action, but the action you take will directly kill one person. There are interesting variations outlined on the problem’s Wikipedia page.

So we now have a situation where there are five passengers in a self-driving car. An oncoming vehicle swerves into the lane and will kill the passengers in the car. The car can divert to the sidewalk, but a person there will be killed if it does. Note that the question here becomes “how do you program the car software for these decisions?” Which is to say, the programmer is making the decision well in advance of any actual situation.

Let’s up the ante a bit. There is only one person in the car, but five on the sidewalk. If the car diverts, five will die; if not, just the one passenger will die. Do you want your car to kill you to save those five persons? What if it is you and your only child in the car? (Now 2 vs. 5 deaths.) Again, the software developer will be making the decision, either consciously or by default.

What guidelines do we propose for software developers in this situation?
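To make concrete just how much value judgment gets baked in by the developer, here is a minimal, purely hypothetical sketch of the kind of rule a programmer might encode long before any real collision occurs. Every name here is invented for illustration, and the naive "minimize expected deaths" rule is only one of many possible policies, not a description of any real system:

```python
# Hypothetical sketch: a crude utilitarian rule a developer might
# encode in advance. All names are invented for illustration; real
# autonomous-vehicle software is vastly more complex and uncertain.

def choose_maneuver(occupants: int, bystanders: int) -> str:
    """Return 'stay' or 'swerve' under a naive minimize-deaths rule.

    'stay'   -> the car stays in lane; the occupants are killed.
    'swerve' -> the car diverts; the bystanders are killed.
    """
    # The programmer's value judgment lives right here: fewer
    # expected deaths wins, with no weight given to occupants vs.
    # strangers, age, or the certainty of either outcome.
    return "stay" if occupants <= bystanders else "swerve"

# Five passengers, one pedestrian: the car swerves onto the sidewalk.
print(choose_maneuver(occupants=5, bystanders=1))  # swerve
# One passenger (you), five on the sidewalk: the car stays in lane.
print(choose_maneuver(occupants=1, bystanders=5))  # stay
```

Note that the second scenario means the software, written years earlier, has already decided the car will sacrifice its owner; that single comparison operator is where the guideline question above actually lands.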

Hard Hitting Robots (Not) — and Standards

ISO is developing standards for contact between collaborative robots and humans working in close proximity, which raises the question of how hard a robot can legally hit you. Of course, this also raises concerns in industry about liability, workplace-safety legislation, and so on.

There is nothing new here, in reality. Humans have long worked in collaborative environments with machines, animals, and even other humans. In the U.S., some of the first workplace limitations were actually triggered by animal-cruelty legislation applied to child labor. And as our experience with different types of equipment has increased, so has the sophistication of workplace protections. Industry leaders working in these areas should be proactive in helping to set standards, both to have a voice in the process and to protect workers. Well-considered standards provide protection for employers as well as workers. Of course, when an insufficient diversity of perspectives establishes the standards, they can end up with an imbalance.

In my own experience, which involved significant work with standards (POSIX at the IEEE and ISO levels), industry is wise to invest in getting a good set of standards in place, and users/consumers are often under-represented. IEEE and IETF are two places where technologists can participate as individuals, which in my experience provides better diversity. ISO operates through “countries,” with some countries led by corporate interests, others by academics, and some by government agencies. In general, I suspect we get better standards out of the diversity possible in forums like IEEE; but with industry hesitant to fund participation by individual technologists, these forums may lack sufficient resources.

One of these days, you may get a pat on the back from a robot near you. Perhaps even for work well done in the standards process.

Robotics Commission

Ryan Calo, UW School of Law, published a recent Brookings Institution report, “The Case for a Federal Robotics Commission.” He argues that robotics is sufficiently complex that policy makers (legislative and/or federal commissions) cannot be expected to have the expertise to make informed policy recommendations, laws, and determinations. He cites various examples, from driverless cars to Stephen Colbert’s Twitter bot @RealHumanPraise.

While I agree with Ryan’s observation about the challenge governments face in trying to make informed decisions on technology issues, I fear “robotics” is too narrow a scope. Similar issues emerge with medical devices, baggage-sorting systems, and automated phone systems.

The field of software engineering is moving toward licensed (Professional Engineer) status in various US states at this time, and that distinction will help establish criteria for some of the related applications. Essentially, any health- or safety-related application (cars, medical devices, etc.) should have review/endorsement by a licensed software engineer, as is the case in civil, mechanical, and electrical engineering. That folks might be writing software for critical systems without being familiar with the concepts surfaced in the Software Engineering Body of Knowledge (which is the basis for state licensing exams, the IEEE CS certification program, and a number of IEEE/ISO standards) is a disturbing reality.

Similar considerations exist in the closely related areas of robotics, sensors, and intelligent vehicles, and will no doubt emerge with artificial intelligence over time. Technology domains are moving rapidly on all fronts. Processes to track best practices, standards, university curricula, vendor-independent certifications, licensing, etc. at best lag behind and often get little or no industry/academic support. Identifying knowledge experts is difficult even in more static fields. And many of the issues facing policy makers span fields: is it software, hardware, mechanical, or something else?

So while the concept of a robotics commission may help get the discussion going, in reality we need a rich community of experts spanning a range of technology fields who are prepared to join in the discussion and analysis as complex issues arise. Drawing these from pools of corporate lobbyists, or other agenda-laden sources, is problematic. A partnership between agencies and professional societies might provide such a pool of experts. Even here the agenda risks are real, but at least there can be some balance between deep-pocketed established interests and emerging small companies, so that disruptive-innovation considerations can be taken into account.

What forums exist in your country/culture/environment to help inform policy and regulatory action in technology areas?  How can government draw on informed experts to help?

Employment and Robots

A delightful quote appears in the Aug. 19 Wall Street Journal letters responding to the increased versatility and use of robots. The author (Channing Wagg) quotes Walter Reuther, then head of the United Auto Workers union, as follows: “One of the management people {during a 1950s tour of a Ford plant} with a slightly gleeful tone in his voice said to me ‘how are you going to collect union dues from all these machines?’ And I replied, ‘You know, that is not what’s bothering me. I’m troubled by the problem of how to sell automobiles to these machines.'”

Also today we have an NPR On Point discussion on the ‘Changing American Dream.’ The issue here is the lack of jobs for our recent graduates, whether of college or high school, and how it is affecting the expectations of today’s young folks. In many ways this is a response to the question I raised in an April posting here: “do you expect to be satisfied with our future?” As pointed out there, Al Gore in his recent book “The Future” suggests that there may not be enough jobs to provide “full employment” for the workforce worldwide. Also this week was a posting by Robert Reich, “Why the Anger?”, where he asserts, “The last time America was this bitterly divided was in the 1920s, which was the last time income, wealth, and power were this concentrated.”

Let me state that a different way. Over the last few years, the US stock market has been a bull on steroids, driven by growing corporate revenues and profits. The one percent has been getting richer, and even us retired folks with 401(k) accounts have been benefiting. However, job recovery has been weak. Why? Productivity. Specifically, increased productivity that allows for growing production with fewer employees. Salaries have not gone up to match it, which means greater corporate pay, bonuses, stock dividends, etc.

A thought experiment:

What if we doubled productivity next week, worldwide? This means twice as much of everything at no increase in cost. Here are a few possible outcomes:

  1. We double the pay for everyone, but risk an oversupply of many items.
  2. We cut work hours by half and double workers’ hourly pay; now folks earn the same amount (for the same level of output), and we do not have oversupply.
  3. We lay off half our workers and double the dividends, bonuses, and higher-level salaries; now we do not have oversupply, and the disparity that Reich observes increases significantly.
  4. Other suggestions are welcome …
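The arithmetic behind the three options can be sketched in a few lines. The baseline numbers here are purely illustrative, invented to make the bookkeeping visible:

```python
# Illustrative arithmetic for the thought experiment: output per
# worker-hour doubles overnight. All baseline figures are invented.

workers = 100          # size of the workforce
hours = 40             # hours per worker per week
output_per_hour = 1.0  # units produced per worker-hour
demand = 4000          # units the market will absorb per week

def weekly_output(workers, hours, productivity):
    """Total units produced per week."""
    return workers * hours * productivity

base = weekly_output(workers, hours, output_per_hour)

# Option 1: same workers, same hours, doubled productivity:
# output doubles to 8000 units against demand of 4000 (oversupply).
opt1 = weekly_output(workers, hours, 2 * output_per_hour)

# Option 2: halve the hours; output matches demand, and doubled
# hourly pay keeps each worker's weekly earnings unchanged.
opt2 = weekly_output(workers, hours // 2, 2 * output_per_hour)

# Option 3: lay off half the workers; output also matches demand,
# but the productivity gain flows to owners rather than workers.
opt3 = weekly_output(workers // 2, hours, 2 * output_per_hour)

print(base, opt1, opt2, opt3)  # 4000.0 8000.0 4000.0 4000.0
```

Options 2 and 3 produce identical output; the difference lies entirely in who captures the gain, which is the crux of the disparity argument above.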

This is effectively what we have done over the last decade or two. Expanding markets in China, India, et al. have absorbed much of the potential surplus, and will continue to do so for a while (assuming other resources such as power and raw materials hold out). So far, we seem to have followed option 3 when a sufficient market was not available.

If we continue to increase productivity and decrease employment, we eventually hit Gore’s ‘Future’ boundary, where our available “person-hours” significantly exceed the paid employment hours needed. This aggravates Reich’s situation of increasingly angry populations. (Wars have proven to be an effective way to reduce the number of unemployed young persons. Revolutions are another option. Democracy may yield other forms of significant transformation; consider 30-hour work weeks and mandatory retirement at age 50, along with the social safety nets, i.e., taxes, needed to make these viable.)

We are the technologists. Productivity is our mantra. Quality of life is our purpose. But there is a tipping point here. We may have passed it in some areas already, and that may be why our new graduates are having trouble finding work. I’m not an advocate of trying to stop the technology momentum. But we had better start taking a realistic look at where this future is going in social terms. I do not think the current U.S. model of 4% unemployment, 40-hour work weeks, and increasing salary/income disparity is viable for the long term. It is also the case that for-profit corporations cannot address this; oddly, doing so would violate their responsibility to act in the interests of their shareholders. The “market” cannot address this either, since the direction we are going cannot be used to grow jobs or decrease prices. (Those with market models to suggest are encouraged to provide visions of how the market might work here.)