Teaching Computers to Lie

A recent article on the limitations of computer “players” in online games points out that they don’t know about lying. No doubt this is true. Both the detection of lies (which means anticipating them, and in some sense understanding the value of misrepresentation to the other party) and the ability to lie oneself are factors in ‘gaming’. This applies both to entertainment games and to ‘gaming the system’ — in sales, tax evasion, excusing failures, whatever.

So here is a simple question: Should we teach computers to lie?
(Unfortunately, I don’t expect responses to this question will alter the likely path of game creators, or of others who might see value in computers that can lie.) I will also differentiate this from using computers to lie. I can program a computer so that it overstates sales, understates losses, and commits many other forms of fraud. But in that case it is my ethical/legal lapse, not a “decision” on the part of the computer.

If the Computer Said it, it must be True!

Well, maybe not.  “What Happens When GPS Can’t Find You?” is a commercial concern raised by a Wall St. Journal article.  Needless to say, a business in today’s world is at risk if the GPS location associated with it is wrong, or if the route required to get there is incorrect.  Consumers at best are frustrated, and may simply write off that operation.  In this case it is often not the business’s fault, but a flaw in the GPS location service or route mapping.

Behind this is a more pervasive and serious problem.  Often there is no way to “fix” these problems from the perspective of the consumer or an affected business.  You may know the data is wrong and the route doesn’t work, but correcting the error(s) is not a straightforward path, and certainly not easy enough for a “crowd-source” solution to work. That is, many people might find the error, and if there were a simple way to “report” the problem, then after the “nth” report an automated fix (or review) could be triggered.
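The “nth report” idea can be sketched in a few lines. This is a minimal illustration only; the names (`report_error`, `REVIEW_THRESHOLD`) are hypothetical, and a real mapping service would need abuse protection, deduplication, and an actual review workflow behind the flag:

```python
# Hypothetical sketch of crowd-sourced error reporting: after the
# "nth" report on the same record, a review is triggered automatically.
from collections import defaultdict

REVIEW_THRESHOLD = 5  # an assumed value of "n"; a real service would tune this

report_counts = defaultdict(int)   # record id -> number of user reports
flagged_for_review = set()         # records awaiting automated fix or human review

def report_error(record_id: str) -> bool:
    """Register one user report; return True if this report triggered a review."""
    report_counts[record_id] += 1
    if report_counts[record_id] >= REVIEW_THRESHOLD and record_id not in flagged_for_review:
        flagged_for_review.add(record_id)
        return True
    return False
```

The point is less the code than the design choice: the cost of collecting reports is trivial, so the absence of such a button is a product decision, not a technical limitation.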

This is not just a GPS problem. I’ve found many web sites validating addresses against equally flawed sources (perhaps even the USPS).  I can send mail to my daughter (and she gets it); I’ve even seen the mailbox on the side of her street. But one of the web sites I used to deliver items to her location rejects the address as “not known” … and of course there is no way to report the error. A related problem is entering an address in “just the right way” — am I in “Unit A101” or “Apt. A 101” or maybe “Apt A101”? Note that the delivery folks can handle all of these, but the online ordering system can’t.  Technology design consideration: track such ‘failures’, and after some number, check the validation process; or better, have a button such as “I know this is right, so please update the database”.
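Both halves of that design consideration are cheap to build. Here is a rough sketch under stated assumptions: the function names and the tiny in-memory “database” are invented for illustration, and real address normalization (e.g., against USPS data) is far messier than one regular expression:

```python
# Hypothetical sketch: normalize unit/apartment variants before validating,
# and let the user override a failed lookup ("I know this is right").
import re

known_addresses = {"unit a101"}  # stand-in for the validation database

def normalize_unit(text: str) -> str:
    """Map 'Unit A101', 'Apt. A 101', 'Apt A101' to one canonical form."""
    m = re.match(r"(?i)\s*(unit|apt\.?)\s*([a-z])\s*(\d+)\s*$", text)
    if m is None:
        return text.strip().lower()
    return f"unit {m.group(2).lower()}{m.group(3)}"

def validate(text: str, user_confirms: bool = False) -> bool:
    """Accept known addresses; on failure, honor an explicit user override."""
    canonical = normalize_unit(text)
    if canonical in known_addresses:
        return True
    if user_confirms:                # the "please update the database" button
        known_addresses.add(canonical)
        return True
    return False
```

With this shape, “Apt. A 101” and “Apt A101” both validate, and an address the database has never seen can still get through once a human vouches for it.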

Online operations, as well as brick-and-mortar businesses, are losing customers due to online “presumptions” of correctness, with no corrective process available.  It’s one thing when the word processor marks your spelling as “wrong” but lets you keep it anyway.  It is another when medications or essential services can’t reach your location because the GPS coordinates or delivery address are missing from the database, or are listed incorrectly.

Information and media authentication for a dependable web

Guest author: Prof. Alessandro Piva (Bio Below)

The wide diffusion of the web and its accessibility through mobile devices have radically changed the way we communicate and the way we collect information about the world we live in. The social impact of these changes is enormous and touches all aspects of our lives, including the shape of social relationships and the process whereby we form our opinions and share them with the rest of the world. At the same time, web surfers and citizens are no longer passive recipients of services and information. On the contrary, the Internet is increasingly populated with content generated directly by users, who routinely share information with each other according to a typical peer-to-peer communication paradigm.

The above changes offer a unique opportunity for a radical improvement of the level of democracy of our society, since, at least in principle, every citizen has the ability to produce globally-accessible, first-hand information about any fact or event and to contribute with his/her ideas to general discussions while backing them up with evidence and proofs retrieved from the Internet.

The lack of centralized control helps increase the democratic nature of the Internet; at the same time, however, it makes the Internet a very fragile ecosystem that can easily be spoiled. The ease with which false information can be diffused on the web, and the possibility of manipulating digital content through easy-to-use and widely available processing tools, cast increasing doubt on the validity of the information gathered “on-line” as an accurate and trustworthy representation of reality.

The need to restore and maintain trust in the web as one of our primary sources of information is evident.

Within the IEEE Signal Processing Society, the Information Forensics and Security (IFS) Technical Committee is involved in the signal processing aspects of this issue, with particular attention to multimedia data (see the IEEE Signal Processing Magazine special issue on Digital Forensics, Vol 26, Issue 2, March 2009). It is a fact that multimedia data play a very special role in the communication of facts, ideas and opinions: images, videos and sounds are often the preferred means of access to information, because of their immediacy and supposed objectivity. Even today, it is still common for people to trust what they see rather than what they read. Multimedia Forensics (MF) deals with the recovery of information that can be directly used to measure the trustworthiness of digital multimedia content. The IFS Technical Committee organized the First Image Forensics Challenge, which took place in 2013, to provide the research community with an open data set and protocol for evaluating the latest image forensic techniques.

However, MF tools alone are not the solution to the authentication issue: several key actions must be undertaken involving technological, legal and societal aspects.

What are your opinions about this topic?

Are we irremediably condemned to base our opinions, beliefs and social activity on information whose reliability cannot be determined?

Do you think that the involvement of a critical mass of researchers with different backgrounds – technological, legal and social – could find a solution?

Are you interested in working on this topic?

===================

Author: Prof. Alessandro Piva

IEEE Signal Processing Society Delegate on the SSIT Board of Governors

Associate Professor @ Department of Information Engineering – University of Florence (Italy)

Alessandro Piva is Associate Professor at the Department of Information Engineering of the University of Florence. He is also head of FORLAB – Forensic Science Laboratory – of the University of Florence. His research interests lie in the areas of Information Forensics and Security, and of Image and Video Processing. In the above research topics he has been co-author of more than 40 papers published in international journals and 100 papers published in international conference proceedings. He is IEEE Senior Member, and he is IEEE Information Forensics and Security Technical Committee Associate Member; he has served on many conference PCs, and as associate editor of the IEEE Trans. on Multimedia, IEEE Trans. on Information Forensics and Security, and of the IEEE Trans. on Circuits and Systems for Video Technology. Other professional details appear at: http://lesc.det.unifi.it/en/node/177

Self Driving Car Ethical Question

There is a classical ethical question, “The Trolley Problem”, which has an interesting parallel in the emerging world of self-driving vehicles.  The original problem posits a situation where five persons will be killed if you do not take action, but the action you take will directly kill one person. There are interesting variations on this outlined on the problem’s Wikipedia page.

So, we now have a situation where there are five passengers in a self-driving car.  An oncoming vehicle swerves into the lane and will kill the passengers in the car. The car can divert to the sidewalk, but a person there will be killed if it does.  Note that the question here becomes “how do you program the car software for these decisions?”  Which is to say, the programmer is making the decision well in advance of any actual situation.

Let’s up the ante a bit.  There is only one person in the car, but five on the sidewalk. If the car diverts, five will die; if not, just the one passenger will die. Do you want your car to kill you to save those five persons?  What if it is you and your only child in the car? (Now 2 vs. 5 deaths.) Again, the software developer will be making the decision, either consciously or by default.
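To make the point concrete: whatever rule is chosen, it is just code written long before any crash. A deliberately oversimplified sketch (the function and its bare utilitarian rule are hypothetical, not a proposal for real vehicle software):

```python
# Deliberately oversimplified: whatever rule appears here, the developer
# chose it long before any real emergency. All names are hypothetical.

def choose_action(occupants: int, bystanders: int) -> str:
    """A bare utilitarian policy: minimize expected deaths.

    'stay' risks the occupants; 'swerve' risks the bystanders.
    Ties default to staying in the lane -- itself an ethical choice.
    """
    if bystanders < occupants:
        return "swerve"
    return "stay"
```

Under this rule, the car with five passengers swerves and kills the one bystander; the car with one passenger (or you and your child) stays in the lane and sacrifices its occupants. Every branch of that `if` is an ethical commitment made at the keyboard.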

What guidelines do we propose for software developers in this situation?

Apocalypse Deterrence

The Center for the Study of Existential Risk (CSER.org) at Cambridge (U.K.) is focusing on how to protect humanity from the downside(s) of technology.  By “Existential” they are not referring to Camus, but to the elimination of Homo sapiens — i.e., our existence.

Their concerns include the question of AIs that might have both sufficient power and motivation to disrupt humanity, and genetic engineering that could either make us obsolete, or get out of hand and make us extinct.

Who Cares? … well some fairly knowledgeable folks are involved, including:

  • Stephen Hawking
  • Jaan Tallinn
  • Elon Musk
  • George Church

I suspect that some SSIT folks may find it useful to monitor CSER’s newsletter and consider how their concerns and issues relate to SSIT’s activities. — Grist for the Mill as it were.


Technology In the Classroom?

The Wall Street Journal has a Pros/cons article on this question … which is at the core of Social Impact of Technology in Education.

My son-in-law teaches a university class where students get the “lecture” portion online, and come into class to work on projects/homework. My granddaughter has online assignments regularly, many key tests are done online, and they don’t get ‘snow days’ — in case of inclement weather they stay home and log in. Programs like the Khan Academy, and a number of universities, offer courses free to “audit”.

At the same time, kids need real-world collaboration and social experience — ideally without bullying, and ideally with sufficiently strong (positive) peer groups that help them develop a range of skills grounded in the real world.

What are the key references you find informative on the question of how we educate the next generation?

Police Cameras

My daughter is attending a citizen police academy. They discussed the challenges that police cameras (body, squad car, interview rooms, traffic monitoring, etc.) present — and these related, in part, to the objectives of having such cameras.

1) When an officer is apprehending a suspect, a video of the sequence covers a topic that is very likely to be raised in court (in the U.S., fairly specific procedures need to be followed during an arrest).  Evidence related to this has to follow very specific rules to be admissible.  An example of this concept is in the Fort Collins, Colorado police FAQ, where they provide some specifics. This process requires managed documentation trails maintained by qualified experts to assure the evidence can be used.  There are real expenses here beyond just having a camera and streaming or transferring the sequences to the web. Web storage services have been created that are designed to facilitate this management challenge. Note that even if the prosecution does not wish to use this material, the defense may do so — and if it has not been managed correctly, may seek to have the charges dismissed. (For cultures where defendants are not innocent until proven guilty, and/or where there is no body of case law or statutory defendants’ rights, this may sound odd, but in the U.S. it is possible for a blatantly guilty perpetrator to have charges against him dropped due to a failure to respect his rights.)

2) There are situations where a police officer is suspected of criminal actions. For real-time situations (like those in the news recently), the same defendants’ rights need to be respected for the officer(s) involved. Again, close management is needed.

Note that in these cases, there are clear criminal activities that the police suspect at the time when the video is captured, and managing the ‘trail of evidence’ is a well defined activity with a cost and benefit that is not present without the cameras.

The vast majority of recorded data does not require the chain-of-evidence treatment. If a proper request for specific data not associated with an arrest results in data that is used in court, it is most likely to be used by a defendant, and the prosecutor is unlikely to challenge the validity of the data, since doing so would call their own system into question.

Of course there are other potential uses of the data.  It might contain information relevant to a divorce action (the couple in the car stopped for the ticket – one spouse wants to know why the other person was in the car); or the images of bystanders at a site might impact the apparent privacy of such persons. (Although in general no right of privacy is recognized in the U.S. for persons in public.)

The Seattle police are putting some video on YouTube, after applying automated redaction software to protect the privacy of individuals captured in the frame. Just the presence of the video cameras can reduce both use of force and citizen complaints.

There are clearly situations where the police, or the citizens involved, or both would find a video recording to be of value, even if it did not meet evidentiary rules.  Of course the concern related to such rules is the potential for inappropriate editing of the video, transforming it from an “objective” witness into one biased in one direction or another.

We have the technology — should we use it?  An opinion piece by Jay Stanley in SSIT’s Technology and Society journal outlines some of these issues in more detail.

ISTAS 2015 – Nov 11, 12; Dublin Ireland

The International Symposium on Technology and Society (ISTAS) is held annually.

Papers (5,000–6,000 words) using the ISTAS2015 template must be registered on the conference portal by the deadline of 31 May 2015.  Workshop proposals have an 8 June 2015 deadline (see the site for details).


Diversity– the key to the 21st century

Recently Intel announced that it was not just investing in expanding the diversity of its workforce, but also that executive compensation would be tied to success here.  My own research, based on social capital development (see the concepts of Robert Putnam), indicates that diversity is a key to innovation, so Intel’s emphasis makes sense.

But diversity is a two-way street. Each individual can expand their personal “diversity index” (a term I just created for this discussion) by expanding their range of contacts, classes, readings, etc.  The 21st century will be dominated by multidisciplinary requirements — and technology fields will often be a key component here.  There are very few aspects of society that are not influenced (if not dominated) by computer technology.  Another entire area of interactions is emerging in health care, biology, genomics and evolution.

Productive employees, citizens and innovators will cultivate their awareness in these diverse areas so they can be effective at contributing to, or critiquing the challenges we will face.  I anticipate employers (at least enlightened ones) will recognize and seek individuals with this diversity.

The Wall St. Journal, 11 Nov 2013, argues that focusing too narrowly in college can backfire — students (and parents, advisers, even committees creating new majors or certificates) can be lured by last year’s job market and end up with limited (or non-existent) opportunities. Part of this is the inability to predict the future job market, but another key aspect is the reality that the exciting growth careers of ten years from now do not exist today.

This is an opportunity for every professional (technologist or not).  IEEE has a key strength here with the diversity of fields it addresses.  You can participate in (some) IEEE meetings where the folks in the room are experts in intelligent vehicles, solar power, medical technology, software engineering, sensors, and more. If you take the time to build your network to include these folks, your potential diversity expands dramatically.  Other IEEE meetings span every part of the globe, and some span both dimensions (Sections Congress, for example).

There are paths outside of IEEE as well, and folks who take the time to develop these experiences, contacts, and understandings will bring critically needed insight to the table wherever they work.


Hard Hitting Robots (Not) — and Standards

ISO is developing standards for the contact between collaborative robots and humans working in close proximity.  Which raises a question of how hard a robot can hit you, legally. Of course this also raises concerns in industry about liability, work-place-safety legislation etc.

There is nothing new here, in reality.  Humans have long worked in collaborative environments with machines, animals, and even other humans. In the U.S., some of the first workplace limitations were actually triggered by animal cruelty legislation applied to child labor. And of course, as our experience with different types of equipment has increased, so has the sophistication of workplace protections.  Industry leaders working in these areas should be proactive in helping to set standards, both to have a voice in the process and to protect workers.  Well-considered standards provide protection for employers as well as workers.  Of course, when an insufficient diversity of perspectives establishes the standards, the result can be imbalanced.

In my own experience, which involved significant work with standards (POSIX, at both the IEEE and ISO levels), industry is wise to invest in getting a good set of standards in place, and users/consumers are often under-represented.  IEEE and IETF are two places where technologists can participate as individuals, which in my experience provides better diversity. ISO operates by “countries”, with some national delegations led by corporate interests, others by academics, and some by government agencies. In general I suspect we get better standards out of the diversity possible in forums like IEEE — but with industry hesitant to fund participation by individual technologists, these forums may lack sufficient resources.

One of these days, you may get a pat on the back from a robot near you. Perhaps even for work well done in the standards process.