Predictive Analytics – Rhinos, Elephants, Donkeys and Minority Report

The IEEE Computer Society published "Saving Rhinos with Predictive Analytics" in both IEEE Intelligent Systems and in the more widely distributed Computing Edge (a compendium of interesting papers taken from 13 of the CS publications and provided to members and technologists at no cost). The article describes how data-based analysis of both rhino and poacher activity, in concert with AI algorithms, can focus enforcement activities in terms of timing and location, and hopefully save rhinos.

For those outside the U.S., the largest populations of elephants (Republicans) and donkeys (Democrats) are in the U.S., these animals being symbols of the respective political parties. Now, on the brink of the 2016 presidential primaries, these critters are being aggressively hunted; ok, actually sought after for their votes. Not surprisingly, the same tools are used to locate, identify, and predict the behavior of these persons. When I was young (1964) I read a book called The 480, which described the capabilities of that timeframe for computer-based political analysis and targeting of the "groups" required to win an election. (480 was the number of groupings of the 68 million voters in 1960, used to identify which groups a candidate needed to attract to win.) 21st-century analytics are a bit more sophisticated, with as many as 235 million groups, or one per potential voter (and over 130 million voters likely to vote). A recent kerfuffle between the Sanders and Clinton campaigns over "ownership/access" to voter records stored on a computer system operated by the Democratic National Committee reflects the importance of this data. By cross-connecting (data mining) registered voter information with external sources such as web searches and credit card purchases, the candidates can mine this data for cash (donations) and, later, votes. A change of a few percentage points in delivering voters to the polls (both figuratively, and by providing rides where needed) in key states can affect the outcome, so knowing each individual is a significant benefit.
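
To make the per-voter scoring idea concrete, here is a minimal, hypothetical sketch of the technique: voter-file records joined with outside data are used to fit a simple model that assigns each individual a turnout probability. All field names and figures below are invented for illustration; real campaign systems join far richer commercial and web data.

```python
# Hypothetical per-voter scoring sketch. All fields and data are invented;
# real campaigns join voter files with commercial and web-activity data.
from sklearn.linear_model import LogisticRegression

# Toy "joined" records: [age, past_elections_voted, donated_before (0/1)]
X = [
    [62, 5, 1],
    [23, 0, 0],
    [45, 3, 0],
    [34, 1, 1],
    [71, 6, 1],
    [19, 0, 0],
]
y = [1, 0, 1, 1, 1, 0]  # did the person vote last cycle?

model = LogisticRegression().fit(X, y)

# Score a "new" individual: estimated probability they turn out to vote.
prob = model.predict_proba([[40, 2, 1]])[0][1]
print(f"Estimated turnout probability: {prob:.2f}")
```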

Predictive analytics is saving rhinos and affecting the leadership of superpowers. But wait, there's more. Remember the movie "Minority Report" (2002)? On the surface, the movie presents computer technology apparently able to predict future crimes by specific individuals, who are then arrested to prevent the crimes. (Spoiler alert) the movie actually reveals that a group of psychics is the real source of insight. This is consistent with the original Philip K. Dick story (1956), which predates The 480 and the emergence of the computer as a key predictive device. Here's the catch: we don't need the psychics, just the data and the computers. Just as a specific probability can be assigned to a specific individual voting for a specific candidate, or to a specific rhino being poached in a specific territory, we are reaching the point where aspects of the 'Minority Report' predictions can be realized.

Oddly, in the U.S., governmental collection and use of this level of Big Data is difficult due to privacy illusions, and probably bureaucratic stovepipes and fiefdoms. These problems do not exist in the private sector. Widespread data collection on everybody at every opportunity is the norm, and the only limitation on sharing is determining the price. The result is that your bank or insurance company is more likely than the government to be able to predict your likelihood of being a criminal, a terrorist, or even a victim of a crime. Big Data superpowers like Google, Amazon, Facebook, and Acxiom have even more at their virtual fingertips.

Let's assume that sufficient data can be obtained, and robust AI techniques applied, to identify a specific individual with a high probability of a problematic event: initiating, or being the victim of, a crime in the next week. And assume this data is implicitly, or even explicitly, in the hands of some corporate entity. Now what? What actions should said corporation take? What probability is needed to trigger such actions? What liability exists (or should exist) for failure to take such actions?
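
One way to make the "what probability should trigger action?" question precise is an expected-cost comparison: act when the expected cost of inaction exceeds the expected cost of acting, including the harm of a false positive. The sketch below uses invented cost figures purely for illustration.

```python
# Hypothetical expected-cost framing of "when should the company act?"
# All cost figures are invented placeholders for illustration.

COST_OF_EVENT = 1_000_000     # harm if the predicted crime occurs
COST_OF_ACTION = 5_000        # cost of intervening (warning, investigation)
COST_FALSE_POSITIVE = 50_000  # harm of acting against an innocent person

def should_act(p_event: float) -> bool:
    """Act iff the expected cost of acting is below that of not acting."""
    expected_cost_no_action = p_event * COST_OF_EVENT
    expected_cost_action = COST_OF_ACTION + (1 - p_event) * COST_FALSE_POSITIVE
    return expected_cost_action < expected_cost_no_action

for p in (0.01, 0.05, 0.2):
    print(p, should_act(p))  # False, False, True with these numbers
```

With these made-up numbers the trigger threshold lands somewhere between a 5% and 20% event probability; changing any cost assumption moves it, which is exactly why this is a policy question and not merely an engineering one.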

These are issues that the elephants and donkeys will need to consider over the next few years; we can't expect the rhinos to do the work for us. We technologists may also have a significant part to play.

Self-Driving Car Ethical Question

There is a classical ethical question, "The Trolley Problem," which has an interesting parallel in the emerging world of self-driving vehicles. The original problem posits a situation where 5 persons will be killed if you do not take action, but the action you take will directly kill one person. There are interesting variations on this outlined on the Wikipedia page linked above.

So, we now have a situation where there are 5 passengers in a self-driving car. An oncoming vehicle swerves into the lane and will kill the passengers in the car. The car can divert onto the sidewalk, but a person there will be killed if it does. Note that the question here becomes "how do you program the car's software for these decisions?" Which is to say, the programmer is making the decision well in advance of any actual situation.

Let's up the ante a bit. There is only one person in the car, but 5 on the sidewalk. If the car diverts, 5 will die; if not, just the one passenger will die. Do you want your car to kill you to save those five persons? What if it is you and your only child in the car? (Now 2 vs. 5 deaths.) Again, the software developer will be making the decision, either consciously or by default.
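
To underline the point that the programmer decides in advance, here is a deliberately stark, hypothetical sketch of how such a rule might look in code. The function name and the "minimize total deaths" rule are illustrative assumptions, not any vendor's actual algorithm.

```python
# Hypothetical sketch: how an ethical rule ends up hard-coded in a
# vehicle's collision-avoidance logic. Names and the "minimize total
# fatalities" rule are invented for illustration.

def choose_maneuver(occupants: int, bystanders: int) -> str:
    """Return 'stay' (occupants die) or 'swerve' (bystanders die)."""
    # This single comparison IS the ethical decision, made by a
    # programmer long before any real emergency occurs.
    if bystanders < occupants:
        return "swerve"  # fewer total deaths by diverting
    return "stay"        # fewer (or equal) deaths by not diverting

print(choose_maneuver(occupants=5, bystanders=1))  # -> 'swerve'
print(choose_maneuver(occupants=1, bystanders=5))  # -> 'stay': the car sacrifices its passenger
```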

What guidelines do we propose for software developers in this situation?

Killer Robots (again?)

The International Joint Conference on Artificial Intelligence in July announced an open letter on autonomous weapons, Autonomous Weapons: an Open Letter from AI & Robotics Researchers, which has probably broken the 20,000-signature mark by now. (Wouldn't you like your name on a letter signed by Stephen Hawking and Elon Musk, among other impressive figures?) This touches on the cover topic of SSIT's Technology and Society Magazine for Spring 2009, whose cover image just about says it all.

The topic of that issue was lethal robots. The letter suggests that letting AI software decide when to initiate fatal actions is not a good idea. Specifically: "Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity."

Unfortunately, I can't think of any way to actually prevent the development of such systems by organizations that would like to pursue the activities listed above, for which killer robots are ideally suited. Perhaps you have some thoughts? How can we make these not just "not beneficial" but actually discourage their development? Or is that possible?

SSIT is a sponsor of a new IEEE Collabratec community on CyberEthics and CyberPeace. I encourage you to join this community (which is not limited to IEEE members) and contribute to a discussion there.

Humans in a Post Employment World?

There are many sources suggesting that productivity (including robotics and A.I. interfaces) will increase enough to have a significant impact on future employment worldwide. These include:

Geoff Colvin, in his new book Humans Are Underrated, suggests that even in a world where most if not all jobs can be done by robots, humans are social animals and will prefer human interactions in some situations. The Atlantic focuses on what the future may hold for jobless persons when joblessness is the norm: "The Jobless don't spend their time socializing or taking up new hobbies. Instead they watch TV or sleep." This is a disturbing vision of a world which currently includes, according to the article, 16% of American men ages 25-54. The article did not discuss the potential for younger men who see limited future opportunity to turn to socially problematic activities, from crime and drugs to radicalization and revolution.

As with any challenge, the first step is recognizing there is a problem. This may be more difficult in the U.S., where work is equated with status, personal identity ("I am a <job title here>"), and social responsibility. One suggestion is the creation of civic centers where folks can get together and "meet, learn skills, bond around sports or crafts, and socialize." These might be combined with maker-spaces and start-up incubators that become a catalyst for creator-consumer-funder collaborations.

So, what's your future "job"? Will you be in the "on-demand" economy? Perhaps engaging in the maker-world? How might this future differ in various countries? Will Europe or India or ?? yield different responses to a situation that is expected to affect global economies over this century?

Apocalypse Deterrence

The Centre for the Study of Existential Risk (CSER.org) at Cambridge (U.K.) is focusing on how to protect humanity from the downside(s) of technology. By "existential" they are not referring to Camus, but to the elimination of Homo sapiens, i.e., our existence.

Their concerns include the question of AIs that might have both sufficient power and motivation to disrupt humanity, and genetic engineering that could either make us obsolete or get out of hand and make us extinct.

Who cares? Well, some fairly knowledgeable folks are involved, including:

  • Stephen Hawking
  • Jaan Tallinn
  • Elon Musk
  • George Church

I suspect that some SSIT folks may find it useful to monitor CSER's newsletter and consider how their concerns and issues relate to SSIT's activities. Grist for the mill, as it were.


Who’s flying this drone?

Looking out for humankind as intelligent technologies take charge

Guest Blog post by Jeanne Dietsch, Founder, Sapiens Plurum.

Until recently, I was an unmitigated technophile. Back in 1980, just after the first Apple PC was introduced, I tried to write a thesis on “The Future of the Computer as a Mass Medium.” My Master’s committee unanimously declared that such a thing would never occur. Two tech start-ups later, I was jeered onstage by London telecom executives because I predicted that Internet commerce would grow by 1000% over the next 5 years. And indeed I was wrong… on the low side.

More and more futurists now find themselves on the conservative side of reality. Consider these startling examples of actuality exceeding expectation:

  1. Researchers stunned their compatriots by "solving" Texas Hold'em Poker, including probabilistic reasoning and bluffing strategies. Software cued with nothing but the rules of the game and monetary loss aversion became unbeatable by playing more hands of poker during two years than all humankind throughout the history of the game. Michael Bowling and colleagues now expect to optimize any constrained process with a clear outcome in the same massively parallel manner. (A toy sketch of the self-play idea behind such solvers appears after this list.)
  2. Researchers at the University of Washington can now play videogames telepathically. Using off-the-shelf tech, a videogame viewer controls a viewless player's hand just by thinking about it. And Washington insiders hint that the US DoD has been performing similar studies for some time.
  3. Nanobots will actually be tested this year to find and destroy cancerous cells and repair damaged ones. Projections that such minute machines would collaborate to keep us alive "forever" previously lay in the distant mists.
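
For readers curious how "nothing but the rules plus loss aversion" can converge on unbeatable play, here is a toy sketch of regret matching, the self-play update at the heart of the counterfactual-regret methods used by poker solvers. It is demonstrated on rock-paper-scissors rather than poker, and it is in no way the Bowling group's actual code.

```python
# Toy sketch of regret matching via self-play, the core update inside
# CFR-style poker solvers, demonstrated on rock-paper-scissors.
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player vs. column player

def strategy(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

regrets = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS
for _ in range(100_000):
    strat = strategy(regrets)
    my = random.choices(range(ACTIONS), weights=strat)[0]
    opp = random.choices(range(ACTIONS), weights=strat)[0]  # self-play opponent
    for a in range(ACTIONS):
        # Regret: how much better action a would have done than what we played.
        regrets[a] += PAYOFF[a][opp] - PAYOFF[my][opp]
    strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]

avg = [s / sum(strategy_sum) for s in strategy_sum]
print(avg)  # converges toward the 1/3, 1/3, 1/3 equilibrium
```

After enough self-play iterations the average strategy approaches the game's equilibrium; scaled up massively, the same idea underlies the poker result.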

Since I sold our intelligent robotics start-up in 2010, I have been studying the accelerating evolution of technology. The tectonic tremors building up alarm not only me, but physicist Stephen Hawking, inventor Elon Musk, MIT professor Max Tegmark and thousands of others who signed Max’s open letter Research Priorities for Robust and Beneficial Artificial Intelligence. And it is not just AI, but its combination with other radical advances that portends vast, almost unimaginable change… and benefits. The question, as always, is: benefits for whom? Who’s driving this drone and where is it headed?

We know that military, political and economic gain will set its course unless someone creates a vision with loftier goals. Our challenge, then, is how to intervene to keep humankind, and human kindness, in the pilot’s seat, and piloting software. This is the reason I started Sapiens Plurum (the wisdom of many). Sapiens Plurum advocates for the interests of humankind in a world of increasingly powerful technology. The strategy behind Sapiens Plurum and Sapiens Plurum News assumes that any top-down policy consensus will be too little, too late, and largely unenforceable. By instead working to educate the general public, in particular, the young, we hope to create a demand-side force that will create bottom-up norms for humane and human-enhancing technologies. Hence, our priorities are to:

  1. Help people understand the potential impact of rising technologies on their lives
  2. Encourage people to choose technologies that put them in control to improve their lives
  3. Advocate for technologies that benefit humankind rather than exploit it

Can you help us by disseminating awareness at your organization or joining ours? We are seeking volunteer regional leaders and board members at SapiensPlurum.org.

About the Author: Jeanne Dietsch was founder and CEO of MobileRobots Inc. She served on the IEEE Industrial Activities Board of RAS 2007-2011 and wrote a column for IEEE Robotics & Automation magazine 2009-2012. She is a Harvard graduate in sci-tech policy, a group-thinking facilitator and founder of Sapiens Plurum, an advocacy organization looking out for the interests of humankind.

Computer Consciousness

Christof Koch in an interview with MIT’s Technology Review suggests that computer consciousness is a matter of complexity, and perhaps the way that complexity is implemented.

With the recently released movie on Alan Turing (The Imitation Game), the public is, once again, exposed to the basic concept … and Turing's insight that "if it interacts like an intelligent, conscious being, then maybe it is one." This is all the more ironic since the movie pushes Turing a bit further along the autistic spectrum than is likely, and causes the attentive audience to ask whether Turing is conscious (he clearly is intelligent).

This concept is often confused with the question of "what makes us human?" or "how do we know that other entity is human?", which is not the same as "is it conscious?" or "is it intelligent?". A WSJ column, "Why Digital Gurus Get Lost in the 'Uncanny Valley'", touches on this, pointing out that we use a number of unconscious clues to make this decision. (This is also why Pixar hires folks with acting backgrounds.)

There is a danger here. If we judge these characteristics by certain clues, like the angle of a dog's head or big eyes (eyes are significant here), and so forth, we may dismiss intelligent/conscious entities who fail our (unconscious?) tests. Of course, they may fail to recognize us as having these characteristics for parallel reasons.

The good news is that our current primary path for detecting intelligent life is SETI, and since all of those communications are very "Imitation Game"-like, we won't have the chance to mess things up with our "Uncanny Valley" presumptions.


Robotics Commission

Ryan Calo, UW School of Law, published a recent Brookings Institution report, "The Case for a National Robotics Commission." He argues that robotics is sufficiently complex that policy makers (legislative and/or federal commissions) cannot be expected to have the expertise to make informed policy recommendations, laws, and determinations. He cites various examples, from driverless cars to Stephen Colbert's Twitter bot @RealHumanPraise.

While I agree with Ryan's observation about the challenge governments face in trying to make informed decisions on technology issues, I fear "robotics" is too narrow in scope. Similar issues emerge with medical devices, baggage-sorting systems, and automated phone systems.

The field of software engineering is currently moving towards licensed (Professional Engineer) status in various US states, and that distinction will help establish criteria for some of the related applications. Essentially, any health- or safety-related application (cars, medical devices, etc.) should have review/endorsement by a licensed software engineer, as is already the case in civil, mechanical, and electrical engineering. That folks might be writing software for critical systems without being familiar with the concepts surfaced in the Software Engineering Body of Knowledge (which is the basis for state licensing exams, the IEEE CS certification program, and a number of IEEE/ISO standards) is a disturbing reality.

Similar considerations exist in the critically related areas of robotics, sensors, and intelligent vehicles, and no doubt will emerge with artificial intelligence over time. Technology domains are moving rapidly on all fronts. Processes to track best practices, standards, university curricula, vendor-independent certifications, licensing, etc. at best lag behind and often get little or no industry/academic support. Identifying knowledge experts is difficult even in more static fields, and many of the issues facing policy makers span fields: is it software, hardware, mechanical, etc.?

So while the concept of a robotics commission may help get the discussion going, in reality we need a rich community of experts spanning a range of technology fields who are prepared to join in the discussion/analysis as complex issues arise. Drawing these from pools of corporate lobbyists, or other agenda-laden sources, is problematic. Partnerships between agencies and professional societies may provide such a pool of experts. Even here the agenda risks are real, but at least there can be some balance between deep-pocketed established interests and emerging small companies, so that disruptive-innovation considerations can be taken into account.

What forums exist in your country/culture/environment to help inform policy and regulatory action in technology areas?  How can government draw on informed experts to help?

Enslaved by Technology?

A recent "formal" debate in Australia, We are Becoming Enslaved by our Technology, addresses this question (90 min): a look at the upside and downside of technological advances, with three experts addressing each side of the question.

One key point made by some of the speakers is the lopsided impact that technology may have towards government abuse. One example is captured in the quote "a cell phone is a surveillance device that also provides communications" (quoted by Bernard Keane); the beneficiary in this case is whoever gains from continuous location, connectivity, app, and search presence.

Much of the discussion focuses on the term "enslave" as opposed to "control", and also on the question of choice: to what degree do we have "choice", or are we perhaps trying to absolve ourselves of responsibility by putting the blame on technology?

Perhaps the key issue is the catchall "technology". There are examples of technology, vaccines for example, where the objectives and 'obvious' uses are beneficial (though one can envision abuse by corporations/countries creating vaccines). And then there are the variations in weapons, eavesdropping, big-data analysis vs. privacy, etc. Much of technology is double-edged, with impacts both "pro and con" (and of course individuals have different views of what counts as a good impact).

A few things are not debatable (IMHO):
1. The technology is advancing rapidly on all fronts.
2. The driving interests tend to be corporate profit, government agendas, and in some cases inventor curiosity, and perhaps at times altruistic benefit to humanity.
3. There exists no coherent way to anticipate the unintended consequences, much less predict the abuses or discuss them in advance.
So, are we enslaved? …. YOU WILL RESPOND TO THIS QUESTION! (Oh, excuse me…)


Turing Test – 2014

Chatbot "Eugene Goostman", created by a team of Russian and Ukrainian software developers (Vladimir Veselov, Eugene Demchenko, and Sergey Ulasen), has "passed" the Turing Test (as of June 6, 2014). I'd like to report my own interactions with Eugene, but the links to a current instantiation seem to be 'temporarily' disrupted; I suspect for purposes of monetizing the recent notoriety.

Ray Kurzweil, in his 1999 book "The Age of Spiritual Machines", predicts that by 2019 there would be "widespread reports of computers passing the Turing Test, although these tests do not meet the criteria established by knowledgeable observers." It appears that Ray is right so far, with claims of earlier successes and with knowledgeable observers deprecating this event as not sufficient.

I won't duplicate notes from prior posts on AI, but will point out that "practical" applications of chatbots and other AI-type software exist and will have 'social impact'. One impact will be the expansion of online interactions that can provide useful responses to consumers, students, etc. Fooling the public is not needed (and may be unwise); at times, having a clearly 'computer-generated voice' (audio or otherwise) helps set the expectations of the humans interacting with the system. However, we can expect increasingly sophisticated capabilities along these lines.
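
As a simple illustration of a chatbot that sets expectations rather than trying to fool anyone, here is a minimal rule-based sketch. The keywords and canned answers are invented for illustration; real systems layer far more sophisticated language processing on the same routing idea.

```python
# Minimal rule-based chatbot sketch: no attempt to pass a Turing Test.
# It announces itself as software and routes a few keyword patterns.
# All keywords and responses are invented for illustration.
RULES = {
    "hours":  "Our support line is staffed 9am-5pm weekdays.",
    "refund": "Refund requests can be filed at our returns page.",
    "human":  "Connecting you to a human agent now.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "I'm an automated assistant; could you rephrase that?"

print(reply("What are your hours?"))   # matched response
print(reply("I demand a refund!"))     # matched response
print(reply("Tell me a story."))       # honest fallback
```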

What uses/services would you suggest as priority applications for a fairly robust ‘chat-bot’?