Teaching Computers to Lie

A recent article on the limitations of computer “players” in online games observes that they don’t know about lying.  No doubt this is true.  Both the detection of lies (which means anticipating them, and in some sense understanding the value of misrepresentation to the other party) and the ability to lie are factors in ‘gaming’.  This applies both to entertainment games and to ‘gaming the system’ — in sales, tax evasion, excusing failures, whatever.

So here is a simple question: Should we teach computers to lie?
(Unfortunately, I don’t expect responses to this question will alter the likely path of game creators, or of others who might see value in computers that can lie.)  I will also differentiate this from using computers to lie.  I can program a computer so that it overstates sales, understates losses, and commits many other forms of fraud.  But in that case it is my ethical/legal lapse, not a “decision” on the part of the computer.
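To make that distinction concrete, here is a minimal sketch (the numbers and names are entirely hypothetical, not drawn from any real game engine) of a computer that lies, rather than a computer used to lie: a bluffing decision in a simplified poker-like game, where the misrepresentation emerges from the program’s own cost/benefit calculation rather than from a falsehood I hard-coded.

```python
def should_bluff(hand_strength: float, pot: float, bet: float,
                 fold_probability: float) -> bool:
    """Toy expected-value test: is a weak hand worth misrepresenting?

    Bluffing wins the pot if the opponent folds, and loses the bet
    if they call; folding honestly costs nothing further.
    """
    ev_bluff = fold_probability * pot - (1 - fold_probability) * bet
    return hand_strength < 0.3 and ev_bluff > 0.0

# The "lie" (raising on a weak hand) is the program's own decision,
# made only when its estimate says deception pays.
if should_bluff(hand_strength=0.2, pot=100, bet=40, fold_probability=0.5):
    print("raise")   # misrepresents a weak hand as strong
else:
    print("fold")
```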

Apocalypse Deterrence

The Center for the Study of Existential Risk (CSER.org) at Cambridge (U.K.) is focusing on how to protect humanity from the downside(s) of technology.  By “Existential” they are not referring to Camus, but to the elimination of Homo Sapiens — i.e. our existence.

Their concerns include the question of AIs that might have both sufficient power and motivation to disrupt humanity, and genetic engineering that could either make us obsolete, or get out of hand and make us extinct.

Who cares? … Well, some fairly knowledgeable folks are involved, including:

  • Stephen Hawking
  • Jaan Tallinn
  • Elon Musk
  • George Church

I suspect that some SSIT folks may find it useful to monitor CSER’s newsletter and consider how their concerns and issues relate to SSIT’s activities — grist for the mill, as it were.


Human Germ-line Modification Hiatus Proposed (too late?)

Nobel laureates David Baltimore and Paul Berg have recommended pausing active modification of human germ-line cells until experts can convene a conference to consider the implications of this activity (WSJ 4/9/2015, “Let’s Hit Pause Before Altering Humankind”).  They point out that this parallels a similar action in 1975, when the emergence of recombinant DNA technology triggered a conference on that topic.

This is a bit afield from IEEE’s domain of affairs, but quite relevant to the Society on Social Implications of Technology dialogs. Let me outline the key concepts they put forward to help build a common vocabulary, and then focus on parallels in IEEE’s areas of work.

They point out that the advent of a bio-technology (CRISPR/Cas9) has made it simple to produce DNA alterations that are “quite precise with no undesired changes in the genome.” Such modifications can be made within an individual without inheritability (somatic-cell alteration). They can also be applied to germ cells, affecting all future generations from that line, for example to eliminate a defect (therapeutic germ-line alteration) — although they note that similar benefits for the next generation may be attainable via embryo-selection methods.  Finally, there is the potential for “voluntary germ-line alteration” to enhance traits parents currently consider desirable. They caution that “we often do not know well enough the total range of consequences of a given gene alteration, potentially creating unexpected physiological alterations that would extend down through generations to come.” (A.k.a. the law of unintended consequences.)  Ergo, they recommend a moratorium and a conference to address the implications involved.
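To help keep that vocabulary straight, here is a minimal sketch (my own toy categorization, not from the Baltimore/Berg article) of the distinctions they draw:

```python
from dataclasses import dataclass

@dataclass
class GeneEdit:
    target: str   # "somatic" (one individual) or "germ-line"
    purpose: str  # "therapeutic" (eliminate a defect) or "enhancement"

    def heritable(self) -> bool:
        # Only germ-line edits extend to all future generations
        return self.target == "germ-line"

for edit in [GeneEdit("somatic", "therapeutic"),
             GeneEdit("germ-line", "therapeutic"),
             GeneEdit("germ-line", "enhancement")]:  # "voluntary" alteration
    print(f"{edit.target}/{edit.purpose}: heritable={edit.heritable()}")
```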

Their pause-before-proceeding approach is an excellent parallel to IEEE’s Code of Ethics, which includes the commitment “to improve the understanding of technology; its appropriate application, and potential consequences.” Actually, their proposal goes one step further, taking action to manage potential consequences before they are fully realized.

If we look at the fields where IEEE’s technologists are engaged (with computing, robotics, and bio-medical systems included, there are few areas we don’t touch), there are some interesting examples.  There is some discussion (although no suggested moratoriums) in areas like self-driving or remotely controllable cars; some of these capabilities are outgrowths of simple ‘improvements’, such as automatic braking systems or parallel parking.  Others are unintended consequences of remote monitoring services.

Observation #1: We (technologists, our employers, and indirectly stockholders and customers) may not be applying sufficient diligence in considering potential consequences.  In part, we may not be providing the time and incentives for quality engineering of quality products. A quality product should not be subject to hacking that can affect public safety and health, for example.

Observation #2: The bio-genetics world is miles ahead of our technology community in recognizing the limits of its understanding of what may result from its work.  For example, the concept of emerging artificial intelligence and its impact is getting coverage in science fiction, and even some awareness in research and industry, but we have very little insight into the potential consequences of passing over some nebulous lines on paths that lead towards intelligent and/or conscious systems.

What other areas do you see that might warrant some serious consideration before we proceed?

[Update, April 24th: Chinese researchers indicate they have completed a trial of this concept, with some ‘off-target’ effects.]

Computer Consciousness

Christof Koch, in an interview with MIT’s Technology Review, suggests that computer consciousness is a matter of complexity, and perhaps of the way that complexity is implemented.

With the recently released movie on Alan Turing (The Imitation Game), the public is, once again, exposed to the basic concept … and Turing’s insight that “if it interacts like an intelligent, conscious being, then maybe it is one.”  This is all the more ironic since the movie pushes Turing a bit further along the autistic spectrum than is likely — leading the attentive audience to ask whether Turing is conscious (he is clearly intelligent).

This concept is often confused with the question of “what makes us human?” or “how do we know that other entity is human?” … which is not the same as “is it conscious?” or “is it intelligent?”. A WSJ column, “Why Digital Gurus Get Lost in the ‘Uncanny Valley’”, touches on this, pointing out that we use a number of unconscious cues to make this decision.  (This is also why Pixar hires folks with acting backgrounds.)

There is a danger here.  If we judge these characteristics by certain cues — like the angle of a dog’s head, or big eyes (eyes are significant here), and so forth — we may dismiss intelligent/conscious entities that fail our (unconscious?) tests.  Of course, they may fail to recognize us as having these characteristics for parallel reasons.
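To illustrate the danger, here is a toy sketch (the cues and weights are entirely hypothetical) of a cue-based judgment that passes a puppy while dismissing a capable but unfamiliar entity:

```python
# Hypothetical cues and weights -- the point is the failure mode,
# not the particular numbers.
CUE_WEIGHTS = {"big_eyes": 0.4, "head_tilt": 0.3, "facial_expressions": 0.3}

def seems_conscious(cues: set, threshold: float = 0.5) -> bool:
    score = sum(w for cue, w in CUE_WEIGHTS.items() if cue in cues)
    return score >= threshold

puppy = {"big_eyes", "head_tilt"}
alien_ai = {"solves_quantum_mechanics"}  # none of our familiar cues

print(seems_conscious(puppy))     # True  -- passes on looks alone
print(seems_conscious(alien_ai))  # False -- dismissed despite capability
```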

The good news is that our current primary path for detecting extraterrestrial intelligent life is SETI, and since all of those communications are very “Imitation Game”-like, we won’t have the chance to mess it up with our “Uncanny Valley” presumptions.


Is the Singularity Silly?

In the SSIT LinkedIn discussion, a pointer was posted to a provocative article, “The Closing of the Scientific Mind” from Commentary Magazine. It raises many issues, including skepticism about the “Singularity” and the Cult of Kurzweil (a delightfully evocative concept as well).  One comment posted in that thread suggested that the Singularity was ‘silly’ … which is not a particularly useful observation in terms of scholarly analysis.  That author sought to dismiss the concept as not deserving real consideration — a concept that, IMHO, deserves exactly that.

First, let me provide a reference to the singularity as I envision it. The term originated with Vernor Vinge’s paper for a 1993 NASA conference. It identifies a few paths (including Kurzweil’s favorite: machine intelligence) towards a ‘next generation’ entity that will take control of its own evolution, such that our generation of intelligence can no longer “see” where it is going. Like a dog in a car, we would be along for the ride, but have no idea of where we are really going.  Vinge includes biological approaches as well as machine (and perhaps underestimates the bio-tech approaches), which establishes the concept beyond the “software/hardware” discussion in Commentary Magazine.

Why might we be able to dismiss (ignore) this concept?

  1. Because God will not allow man to create a more powerful being. This has precedent in the Tower of Babel; interestingly, in that situation God’s fear is that if man is not stopped he will become capable of anything.  (Mileage with your deities may vary.)
    I have not received any tablets or messages specific to the singularity from God, so I will not presume to state what She may or may not allow. Presumably this will become evident in time, and the interpretations of some are unlikely to dissuade others from their research and progress.
  2. It is impossible — Here we need to consider what “It” is. Clearly the creation of a conscious/intelligent being is possible (the current U.S. Congress notwithstanding), because we appear to have at least one instance of this.  And since engineering of the current species into Homo Nextus is one of Vinge’s paths, we have the advantage of starting from that given.  So for the bio-tech path(s), presumably the “impossible” part is the significant differentiation needed to move beyond Homo Sapiens understanding. Personally, I suspect a species of folks who understand quantum mechanics might qualify. There are indications that this has happened before: once upon a time there were three or more humanoid species on Earth (Neanderthals, Erectus, and Sapiens), with indications that they interacted (and interbred).  One suspects that the Neanderthals were left scratching their heads as the Sapiens started doing many strange things — just which things is the topic of diverse fiction stories.
    The AI/machine-intelligence path certainly requires a larger leap of faith to assert either its certainty or its impossibility. Note that “faith” is covered under point 1.
  3. It is too complex — This certainly has merit for both the bio and AI approaches.  It would seem axiomatic that a being at stage two cannot design and build a being at stage three.  However, evolution is the counterpoint to this (Creationists, please see point one), wherein more complex and in some cases more intelligent beings have emerged from ‘lower’ forms.  Interestingly, it is via genetic algorithms (a minimal sketch of the technique follows this list) that John Koza has created patentable devices that are outside his expertise to create, and in some cases with capabilities he (and others) cannot explain — an apparent precursor to the singularity, since one would expect similar observations when (if) it arises.
    Often this argument devolves to “since I (often expressed as ‘we’) cannot imagine how to do it, then neither can you”.
    Technology advances both surprise and dismay me — we should have had flying cars already, and where did 3-D printing come from? Vinge anticipates machine intelligence by 2023, Kurzweil in the 2018–43 range; and of course, like the Turing Test, we won’t necessarily agree on when it “happened.”
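For readers unfamiliar with the genetic-algorithm technique mentioned in point 3, here is a deliberately trivial sketch (evolving a bit string toward all 1s — nothing like Koza’s circuit-synthesis work) showing the selection/crossover/mutation loop whose products can exceed their author’s own design insight:

```python
import random

GENOME_LEN = 20

def fitness(genome):
    # Trivial objective: count of 1-bits. Koza's work evolves far
    # richer structures (circuits, programs), but the loop is the same.
    return sum(genome)

def evolve(pop_size=50, generations=100, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]                  # crossover
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]                 # mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Note that the loop never “understands” its solutions; it simply keeps what scores well — which is exactly why its outputs can surprise their creator.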

I can envision one bio-tech path towards a new species; it is outlined in Greg Stock’s TED talk on upgrading to humanity 2.0: we add a couple of chromosomes for our kids. I’m thinking two — one has “patches” for the flaws in our current genome (you know, pesky things like susceptibility to diabetes, breast cancer, and Alzheimer’s); the second has the mods and apps that define “The Best that We Can Be”, at least for that month of conception.  Both of these will warrant upgrades over time, but by inserting both into the genome as pairs (going from 46 chromosomes to 50 — two new chromosomes, each present as a pair, so 46 + 4 = 50), the “Haves” will be able to reproduce without intervention if they wish (though not as successfully with the have-nots, due to the mismatch in chromosome counts).  Will such a Homo Nextus species actually yield a singularity?  Perhaps.  Is this a good idea?  Well, that is debatable — your comments are welcomed.