Killer Robots (again?)

The International Joint Conference on Artificial Intelligence in July announced an open letter, Autonomous Weapons: An Open Letter from AI & Robotics Researchers, which has probably broken the 20,000-signature mark by now. (Wouldn’t you like your name on a letter signed by Stephen Hawking and Elon Musk, among other impressive figures?) This touches on the cover topic of SSIT’s Technology and Society Magazine issue from Spring 2009, whose cover image just about says it all.

The topic of that issue was Lethal Robots. The letter suggests that letting AI software decide when to initiate fatal actions is not a good idea. Specifically: “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

Unfortunately, I cannot think of any way to actually prevent the development of such systems by organizations that would like to pursue the uses listed above, for which killer robots are ideally suited. Perhaps you have some thoughts? How can we make these not just “not beneficial” but actively discourage their development? Or is that even possible?

SSIT is a sponsor of a new IEEE Collaboratec community on CyberEthics and CyberPeace. I encourage you to join this community (which is not limited to IEEE members) and contribute to the discussion there.

Humans in a Post Employment World?

Many sources suggest that productivity gains (including robotics and AI interfaces) will be significant enough to have a major impact on future employment worldwide. These include:

Geoff Colvin, in his new book Humans Are Underrated, suggests that even in a world where most if not all jobs can be done by robots, humans are social animals and will prefer human interaction in some situations. The Atlantic focuses on what the future may hold for jobless persons when joblessness is the norm: “The jobless don’t spend their time socializing or taking up new hobbies. Instead, they watch TV or sleep.” It is a disturbing vision of a world that, according to the article, already includes 16% of American men ages 25-54. The article did not discuss the potential for younger men who see limited future opportunity to turn to socially problematic activities, from crime and drugs to radicalization and revolution.

As with any challenge, the first step is recognizing there is a problem. This may be more difficult in the U.S. where work is equated with status, personal identity (“I am a <job title here>”), and social responsibility.  One suggestion is the creation of civic centers where folks can get together and “meet, learn skills, bond around sports or crafts, and socialize.” These might be combined with maker-spaces and start-up incubators that become a catalyst for creator-consumer-funder collaborations.

So — what’s your future “job” — will you be in the “on-demand” economy?  Perhaps engaging in the maker-world? — How might this future differ in various countries? Will Europe or India or ?? yield different responses to a situation that is expected to affect global economies over this century?

Apocalypse Deterrence

The Center for the Study of Existential Risk (CSER.org) at Cambridge (U.K.) is focusing on how to protect humanity from the downside(s) of technology.  By “Existential” they are not referring to Camus, but to the elimination of Homo Sapiens — i.e. our existence.

Their concerns include AIs that might have both sufficient power and motivation to disrupt humanity, and genetic engineering that could either make us obsolete or get out of hand and make us extinct.

Who Cares? … well some fairly knowledgeable folks are involved, including:

  • Stephen Hawking
  • Jaan Tallinn
  • Elon Musk
  • George Church

I suspect that some SSIT folks may find it useful to monitor CSER’s newsletter and consider how their concerns and issues relate to SSIT’s activities. — Grist for the Mill as it were.


Who’s flying this drone?

Looking out for humankind as intelligent technologies take charge

Guest Blog post by Jeanne Dietsch, Founder, Sapiens Plurum.

Until recently, I was an unmitigated technophile. Back in 1980, just after the first Apple PC was introduced, I tried to write a thesis on “The Future of the Computer as a Mass Medium.” My Master’s committee unanimously declared that such a thing would never occur. Two tech start-ups later, I was jeered onstage by London telecom executives because I predicted that Internet commerce would grow by 1000% over the next 5 years. And indeed I was wrong… on the low side.

More and more futurists now find themselves on the conservative side of reality. Consider these startling examples of actuality exceeding expectation:

  1. Researchers stunned their compatriots by “solving” Texas Hold’em Poker, including probabilistic reasoning and bluffing strategies. Software cued with nothing but the rules of the game and monetary loss aversion became unbeatable by playing more hands of poker during two years than all humankind throughout the history of the game. Michael Bowling and colleagues now expect to optimize any constrained process with a clear outcome in the same massively parallel manner.
  2. Researchers at the University of Washington can now play videogames telepathically. Using off-the-shelf tech, a videogame viewer controls a viewless player’s hand just by thinking about it. And Washington insiders hint that the US DoD has been performing similar studies for some time.
  3. Nanobots will actually be tested this year to find and destroy cancerous cells and repair damaged ones. Projections that such minute machines would collaborate to keep us alive “forever” previously lay in the distant mists.

Since I sold our intelligent robotics start-up in 2010, I have been studying the accelerating evolution of technology. The tectonic tremors building up alarm not only me, but physicist Stephen Hawking, inventor Elon Musk, MIT professor Max Tegmark and thousands of others who signed Max’s open letter Research Priorities for Robust and Beneficial Artificial Intelligence. And it is not just AI, but its combination with other radical advances that portends vast, almost unimaginable change… and benefits. The question, as always, is: benefits for whom? Who’s driving this drone and where is it headed?

We know that military, political and economic gain will set its course unless someone creates a vision with loftier goals. Our challenge, then, is how to intervene to keep humankind, and human kindness, in the pilot’s seat, and piloting software. This is the reason I started Sapiens Plurum (the wisdom of many). Sapiens Plurum advocates for the interests of humankind in a world of increasingly powerful technology. The strategy behind Sapiens Plurum and Sapiens Plurum News assumes that any top-down policy consensus will be too little, too late, and largely unenforceable. By instead working to educate the general public, in particular, the young, we hope to create a demand-side force that will create bottom-up norms for humane and human-enhancing technologies. Hence, our priorities are to:

  1. Help people understand the potential impact of rising technologies on their lives
  2. Encourage people to choose technologies that put them in control and improve their lives
  3. Advocate for technologies that benefit humankind rather than exploit it

Can you help us by disseminating awareness at your organization or joining ours? We are seeking volunteer regional leaders and board members at SapiensPlurum.org.

About the Author: Jeanne Dietsch was founder and CEO of MobileRobots Inc. She served on the IEEE Industrial Activities Board of RAS 2007-2011 and wrote a column for IEEE Robotics & Automation magazine 2009-2012. She is a Harvard graduate in sci-tech policy, a group-thinking facilitator and founder of Sapiens Plurum, an advocacy organization looking out for the interests of humankind.

Computer Consciousness

Christof Koch in an interview with MIT’s Technology Review suggests that computer consciousness is a matter of complexity, and perhaps the way that complexity is implemented.

With the recently released movie about Alan Turing (The Imitation Game), the public is once again exposed to the basic concept … and Turing’s insight that “if it interacts like an intelligent, conscious being, then maybe it is one.” This is all the more ironic since the movie pushes Turing a bit further along the autistic spectrum than is likely, and causes the attentive audience to ask whether Turing is conscious (he clearly is intelligent).

This concept is often confused with the question of “what makes us human?” or “how do we know that other entity is human?” … which is not the same as “is it conscious?” or “is it intelligent?” A WSJ column, “Why Digital Gurus Get Lost in the ‘Uncanny Valley’”, touches on this, pointing out that we use a number of unconscious cues to make this decision. (It is also why Pixar hires folks with acting backgrounds.)

There is a danger here. If we judge these characteristics by certain cues (the angle of a dog’s head, the big eyes, and so forth; eyes are significant here), we may dismiss intelligent/conscious entities that fail our unconscious tests. Of course, they may fail to recognize us as having these characteristics for parallel reasons.

The good news is that our current primary path for detecting extraterrestrial intelligent life is SETI, and since all of those communications are very “Imitation Game”-like, we won’t have the chance to mess it up with our “Uncanny Valley” presumptions.


Robotics Commission

Ryan Calo, of the UW School of Law, published a recent Brookings Institution report, “The Case for a Federal Robotics Commission.” He argues that robotics is sufficiently complex that policy makers (legislative bodies and/or federal commissions) cannot be expected to have the expertise to make informed policy recommendations, laws, and determinations. He cites various examples, from driverless cars to Stephen Colbert’s Twitter bot @RealHumanPraise.

While I agree with Ryan’s observation about the challenge governments face in trying to make informed decisions on technology issues, I fear “robotics” is too narrow a scope. Similar issues emerge with medical devices, baggage-sorting systems, and automated phone systems.

The field of Software Engineering is currently moving toward licensed (Professional Engineer) status in various US states, and that distinction will help establish criteria for some of the related applications. Essentially, any health- or safety-related application (cars, medical devices, etc.) should have review/endorsement by a licensed software engineer, as is the case in civil, mechanical, and electrical engineering. That folks might be writing software for critical systems without being familiar with the concepts surfaced in the Software Engineering Body of Knowledge (which is the basis for state licensing exams, the IEEE CS certification program, and a number of IEEE/ISO standards) is a disturbing reality.

Similar considerations exist in the critically related areas of robotics, sensors, and intelligent vehicles, and will no doubt emerge with artificial intelligence over time. Technology domains are moving rapidly on all fronts. Processes to track best practices, standards, university curricula, vendor-independent certifications, licensing, etc. at best lag behind, and often get little or no industry/academic support. Identifying knowledgeable experts is difficult even in more static fields. And many of the issues facing policy makers span fields: is it software, hardware, mechanical, etc.?

So while the concept of a robotics commission may help get the discussion going, in reality we need a rich community of experts spanning a range of technology fields who are prepared to join in the discussion/analysis as complex issues arise. Drawing these from pools of corporate lobbyists, or other agenda-laden sources, is problematic. Partnership between agencies and professional societies may provide such a pool of experts. Even here the agenda risks are real, but at least there can be some balance between deep-pocketed established interests and emerging small companies, so that disruptive-innovation considerations can be taken into account.

What forums exist in your country/culture/environment to help inform policy and regulatory action in technology areas?  How can government draw on informed experts to help?

Enslaved by Technology?

A recent “formal” debate in Australia, We are Becoming Enslaved by our Technology, addresses this question (90 min): a look at the upside and downside of technological advances, with three experts addressing each side of the question.

One key point made by some of the speakers is the lopsided impact that technology may have in enabling government abuse. One example is captured in the quote “a cell phone is a surveillance device that also provides communications” (quoted by Bernard Keane). It raises the question of who benefits from our continuous location, connectivity, app, and search presence.

Much of the discussion focuses on the term “enslave” … as opposed to “control”. It also considers the question of choice: to what degree do we have “choice”, or are we perhaps trying to absolve ourselves of responsibility by putting the blame on technology?

Perhaps the key issue is the catchall “technology”. There are examples of technology, vaccines for instance, where the objectives and ‘obvious’ uses are beneficial (though one can envision abuse by the corporations/countries creating vaccines). And then there are the variations in weapons, eavesdropping, big-data analysis vs. privacy, etc. Much of technology is double-edged, with impacts both “pro and con” (and of course individuals have different views of what constitutes a good impact).

A few things are not debatable (IMHO):
1. Technology is advancing rapidly on all fronts.
2. The driving interests tend to be corporate profit and government agendas, and in some cases inventor curiosity and perhaps at times altruistic benefit for humanity.
3. There exists no coherent way to anticipate the unintended consequences, much less predict the abuses or discuss them in advance.

So, are we enslaved? …. YOU WILL RESPOND TO THIS QUESTION! (Oh, excuse me…)


Turing Test – 2014

The chatbot “Eugene Goostman”, created by a team of Russian and Ukrainian software developers (Vladimir Veselov, Eugene Demchenko, and Sergey Ulasen), has “passed” the Turing Test (as of June 6, 2014). I’d like to report my own interactions with Eugene, but the links to a current instantiation seem to be ‘temporarily’ disrupted, I suspect for purposes of monetizing the recent notoriety.

Ray Kurzweil, in his 1999 book The Age of Spiritual Machines, predicted that by 2019 there would be “widespread reports of computers passing the Turing Test although these tests do not meet the criteria established by knowledgeable observers.” It appears that Ray is right so far, given earlier claims of success and the deprecation of this event as insufficient.

I won’t duplicate the notes of prior posts on AI, but will point out that “practical” applications of chatbots and other AI-type software exist and will have ‘social impact’. One impact will be the expansion of online interaction that can provide useful responses to consumers, students, etc. Fooling the public is not needed (and may be unwise) … at times, having a clearly ‘computer-generated voice’ (audio or otherwise) helps set the expectations of the humans interacting with the system. We can nonetheless expect increasingly sophisticated capabilities along these lines.
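As a concrete (and entirely hypothetical) illustration of the “set expectations, don’t fool the public” point, a minimal keyword-matching responder can announce itself as automated up front. The rules, wording, and function names below are invented for illustration, not drawn from any deployed system:

```python
# A toy customer-service responder that identifies itself as automated.
# Rules map a keyword to a canned answer; anything unmatched gets a fallback.
RULES = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "human": "I can transfer you to a human agent.",
}
FALLBACK = "Sorry, I did not understand. Try asking about hours or refunds."
GREETING = "[Automated assistant] How can I help?"  # explicit disclosure

def reply(message: str) -> str:
    """Return the first rule whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK
```

Real deployments replace the keyword match with intent classification, but the design point stands either way: the greeting makes clear the user is talking to software.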

What uses/services would you suggest as priority applications for a fairly robust ‘chat-bot’?


Is the Singularity Silly?

In the SSIT LinkedIn discussion, a pointer was posted to a provocative article, “The Closing of the Scientific Mind,” from Commentary Magazine. It raises many issues, including skepticism about the “Singularity” and the Cult of Kurzweil (a delightfully evocative concept as well). One comment posted in that thread suggested that the Singularity was ‘silly’ … which is not a particularly useful observation in terms of scholarly analysis. That author sought to dismiss as undeserving of real consideration a concept that, IMHO, deserves exactly that.

First, let me provide a reference for the singularity as I envision it. The term originated with Vernor Vinge’s paper for a 1993 NASA conference. It identifies a few paths (including Kurzweil’s favorite: machine intelligence) toward a ‘next generation’ entity that will take control of its own evolution, such that our generation of intelligence can no longer “see” where it is going. Like a dog in a car, we would be along for the ride, but not have any idea of where we are really going. Vinge includes biological approaches as well as machine ones (and perhaps underestimates bio-tech approaches), which establishes the concept beyond the “software/hardware” discussion in Commentary Magazine.

Why might we be able to dismiss (ignore) this concept?

  1. Because God will not allow man to create a more powerful being. This has precedent in the Tower of Babel. Interestingly, in that situation God’s fear is that if man is not stopped, he will become capable of anything. (Mileage with your deities may vary.)
    I have not received any tablets or messages specific to the singularity from God, so I will not presume to state what She may or may not allow. Presumably this will become evident in time, and the interpretations of some are unlikely to dissuade others from their research and progress.
  2. It is impossible — Here we need to consider what “It” is. Clearly the creation of a conscious/intelligent being is possible (the current U.S. Congress notwithstanding), because we appear to have at least one instance of this. And since engineering of the current species into Homo Nextus is one of Vinge’s paths, we have the advantage of starting from that given. So for the bio-tech path(s), presumably the “impossible” part is the significant differentiation needed to move beyond Homo Sapiens understanding. Personally, I suspect a species of folks who understand quantum mechanics might qualify. There are indications that this has happened before. Once upon a time there were three or more hominid species on Earth (Neanderthalensis, Erectus, and Sapiens), and indications that they interacted (and interbred). One suspects that the Neanderthals were left scratching their heads as the Sapiens started doing many strange things — which ones, exactly, is the topic of diverse fiction stories.
    The AI/machine-intelligence path certainly requires a larger leap of faith to assert either its certainty or its impossibility. Note that “faith” is covered under point 1.
  3. It is too complex — This certainly has merit for both the bio and AI approaches. It would seem axiomatic that a being at stage two cannot design and build a being at stage three. However, evolution is the counterpoint to this (Creationists, please see point one), wherein more complex and in some cases more intelligent beings have emerged from ‘lower’ forms. Interestingly, it is via genetic algorithms that John Koza has created patentable devices that are outside of his expertise to create, and in some cases with capabilities he (and others) cannot explain. This is an apparent precursor to the singularity, since one would expect similar observations when (if) it arises.
    Often this argument devolves to “since I (often expressed as ‘we’) cannot imagine how to do it, then neither can you.”
    Technology advances both surprise and dismay me — we should have had flying cars already, and where did 3-D printing come from? Vinge anticipates machine intelligence by 2023, and Kurzweil in 2018-43; and of course, like the Turing Test, we won’t necessarily agree when it “happened”.
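Koza’s results come from genetic programming, which evolves program trees; as a minimal, self-contained sketch of the underlying evolutionary loop (selection, crossover, mutation) that lets designs emerge without being explicitly designed, here is a toy genetic algorithm on the classic “OneMax” problem (maximize the number of ones in a bitstring). All parameters are illustrative and not drawn from Koza’s systems:

```python
import random

def evolve(length=20, pop_size=30, generations=60, seed=1):
    """Toy genetic algorithm: evolve bitstrings toward all-ones."""
    rng = random.Random(seed)
    fitness = sum  # fitness = number of ones in the bitstring
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # tournament selection: best of three random individuals
            return max(rng.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):              # occasional point mutation
                if rng.random() < 0.02:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

The point of the example is the one made in the text: the loop specifies only variation and selection pressure, not the solution, and yet near-optimal structures emerge.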

I can envision one bio-tech path toward a new species; it is outlined in Greg Stock’s TED talk on upgrading to Humanity 2.0: we add a couple of chromosomes for our kids. I’m thinking two — one has “patches” for the flaws in our current genome (you know, pesky things like susceptibility to diabetes, breast cancer, and Alzheimer’s); the second has the mods and apps that define “the best that we can be,” at least for that month of conception. Both will warrant upgrades over time, but by inserting the double hit of both into the genome (going from 46 chromosomes to 50), the “haves” will be able to reproduce without intervention if they wish (though of course not as successfully with have-nots, due to a mismatch in the chromosome counts). Will such a Homo Nextus species actually yield a singularity? Perhaps. Is this a good idea? Well, that is debatable — your comments are welcomed.

Abstract-IEEE ISTAS 13

Early in the 21st Century, Intelligence will Underlie Everything of Value

Ray Kurzweil, KurzweilAI, United States

The onset of the 21st century will be an era in which the very nature of what it means to be human will be both enriched and challenged, as our species breaks the shackles of its genetic legacy and achieves inconceivable heights of intelligence, material progress, and longevity. The paradigm shift rate is now doubling every decade, so the twenty-first century will see 20,000 years of progress at today’s rate. Computation, communication, biological technologies (for example, DNA sequencing), brain scanning, knowledge of the human brain, and human knowledge in general are all accelerating at an even faster pace, generally doubling price-performance, capacity, and bandwidth every year. Three-dimensional molecular computing will provide the hardware for human-level “strong” AI well before 2030. The more important software insights will be gained in part from the reverse-engineering of the human brain, a process well under way. While the social and philosophical ramifications of these changes will be profound, and the threats they pose considerable, we will ultimately merge with our machines, live indefinitely, and be a billion times more intelligent…all within the next three to four decades.
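A back-of-envelope check of the “20,000 years” arithmetic (my interpretation, not Kurzweil’s own derivation): if the rate of progress doubles every decade, summing ten decades gives roughly 10,000 or 20,000 “years of progress at today’s rate,” depending on whether each decade is counted at its starting or its doubled ending rate:

```python
def century_progress(rate_at="start"):
    """Total 'years of progress at today's rate' over ten decades,
    with the rate of progress doubling each decade."""
    offset = 0 if rate_at == "start" else 1  # count decade k at 2**k or 2**(k+1)
    return sum(10 * 2 ** (k + offset) for k in range(10))

# Starting-rate count: 10 * (2**10 - 1) = 10,230 years-equivalent.
# Ending-rate count:   20 * (2**10 - 1) = 20,460, i.e. the rounder
# "20,000 years" figure evidently assumes something like the latter.
```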

Keywords: intelligence, innovation, brain, society, philosophy, computation, communication, biology, human