Robot Friends

The Wall St. Journal has a piece, “Your Next Friend Could Be a Robot”, about a device in your home, not a disembodied associate on Facebook. The initial example is “embodied” as a speaker/microphone in Amazon’s Echo Dot, but the article also covers similar devices from Google, cell phones, and even Toyota. So what?

Folks, the article focuses on a 69-year-old woman living alone who has a relationship with these devices. They are connected to the Internet 24/7, with a back-end AI voice recognition/response system. (The article asserts it’s not AI because it’s not conscious, which is a different consideration.) Apparently “double digit” percentages of interactions with Alexa (Amazon’s non-AI personality) are “non-utilitarian”, presumably not triggering orders for Amazon products.

The good news: folks feel less lonely, more connected, and have “someone” there 24/7 responding to queries (with pre-programmed answers) such as “what are the laws of robotics” — see Reddit’s list of fun questions. … but

The bad news — it’s not clear what happens when you tell Alexa to call 911, or that you have fallen down and can’t get up. While there are “wake-up” and “sleep” words you can use, the very fact that a wake-up word can be recognized indicates that a level of 24/7 monitoring is in place. No doubt this can be hacked, tapped, and otherwise abused.
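To see why, consider a minimal sketch of an always-listening loop. This is hypothetical, not Amazon's implementation; the wake and sleep words and the buffering are illustrative assumptions. The point is that the device must examine every utterance just to decide whether the wake word occurred, so "asleep" only means "not responding", not "not listening".

```python
# Hypothetical wake-word loop; names and behavior are illustrative only.
from collections import deque

WAKE_WORD = "alexa"
SLEEP_WORD = "stop"

def wake_word_loop(transcribed_words):
    """Consume a continuous stream of locally recognized words.

    Every word is processed (and here, buffered) whether or not the
    device is "awake" -- that is the 24/7 monitoring concern.
    """
    awake = False
    recent = deque(maxlen=100)      # rolling record of everything heard
    for word in transcribed_words:
        recent.append(word)         # monitoring happens regardless of state
        if not awake and word == WAKE_WORD:
            awake = True
            print("[device] waking up")
        elif awake and word == SLEEP_WORD:
            awake = False
            print("[device] going to sleep")
        elif awake:
            print(f"[device] responding to: {word}")

# Simulated stream: note the private chatter is seen by the loop too.
wake_word_loop("private chatter alexa call 911 stop".split())
```

Anything that sits in that loop, or upstream of it, is a candidate for hacking and tapping.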

What is Amazon’s liability if you tell Alexa you need help and no effective response occurs?  No doubt time and lawsuits will tell.

Internet Resilience

The Internet is a widespread tool reflecting, to some degree, free speech and freedom of the ‘press’. As such, it is a threat to entities that wish to suppress these, or make them subservient to other priorities. A recent report on DefenseOne.com outlines the ways in which some countries have been able to put an “on-off” switch in place, and use it. The trick is having all or most of the traffic go through a small number of (authorized) intermediate nodes where the plug can be pulled.

Countries like Egypt and China have such bottlenecks. Countries with large numbers of intermediate nodes connected outside the country include Canada, Germany, and the Netherlands. Surprisingly, Russia has a very large number of such connections — explained by the article as a complexity designed to make tracking cyber-crime nearly impossible.
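The underlying idea is simple graph connectivity. A toy sketch (with invented topology, not real routing data) shows why a country with a handful of authorized gateways is easy to switch off, while one with many independent cross-border links is not:

```python
# Toy model: country -> {domestic gateway node -> foreign peers it links to}.
# All names and counts are invented for illustration.
topologies = {
    "bottleneck-land": {"state-gw": {"ext-a", "ext-b"}},
    "resilient-land": {
        "isp1": {"ext-a"}, "isp2": {"ext-b"},
        "isp3": {"ext-c"}, "ixp4": {"ext-a", "ext-d"},
    },
}

for country, gateways in topologies.items():
    nodes_to_sever = len(gateways)   # gateways a censor must control
    external_links = sum(len(peers) for peers in gateways.values())
    print(f"{country}: {external_links} external links via "
          f"{nodes_to_sever} gateway(s); a kill switch needs "
          f"{nodes_to_sever} cut(s)")
```

Counting the nodes that must be severed is a crude stand-in for the kind of analysis the report describes: the more cuts required, the less practical the switch.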

Hacking Medical Devices

Johnson & Johnson recently disclosed that one of its insulin pumps might be subject to hacking. This follows assertions that pacemakers and implanted defibrillators might also be subject to attack. No doubt some wireless medical devices will have security vulnerabilities, with at least software if not hardware attack vectors.

The motives for attack are perhaps equally important. Hacking a fleet of cars has widespread visibility and will be associated with a different set of motives than a personal attack via a medical device. However, murder and assassination are potential uses for these types of flaws.

“No instances of medical-device hacking have been disclosed,” according to the related WSJ article. Of course, when a diabetic dies of an insulin excess or deficit, murder by hacking might not be on the post-mortem evaluation list. The abuses here are (hopefully) rare, but the lack of disclosure does not imply the lack of a successful attack.

Killing Mosquitoes

The elimination of malaria (438,000 deaths per year) and a number of other deadly or debilitating diseases (Zika, dengue fever, yellow fever, etc.) is often a war against the mosquitoes that carry them. Bill Gates has designated the mosquito “the deadliest animal in the world”, and fighting these diseases is a top priority for the Gates Foundation. Another wealthy ex-Microsoft wizard, Nathan Myhrvold, has developed a prototype laser to zap the bugs selectively. And a recent Wall St. Journal article describes a variety of genetic engineering attacks in development. As these diseases spread beyond their traditional “range”, their impact will increase, as will the number of countries needing solutions.

There are a number of Technology/Society impacts of interest here. First, any objective pursued by multiple, diverse, promising approaches is likely to be accomplished — don’t bet on the bugs here. (I know, “Jurassic Park” seeks to make the point that “Nature will Find a Way”, and that is often true; but humans have been so effective at driving species to extinction that geologists have declared a new age, the Anthropocene.)

Second, if anyone wonders how to change the world, the answer clearly is technology — from insecticide-impregnated sleeping nets to lasers and genetic engineering, we are talking tech and “engineering thinking”. (My granddaughter has a T-shirt: front side “A-Stounding”, back side “Stoundings: persons who like to solve problems rather than cause them.”) I call those folks Technologists. Bugs beware — you have a whole generation of Robotics Competition and Minecraft modders headed your way.

Third — is this a good idea? Note that there are significant variations: some approaches target just one species (Aedes aegypti, at least outside of its forest habitat of origin), others a wider range of species, and others focus on specific areas. One recurrent human failure is not anticipating the consequences of our actions. What animals depend on these critters for dinner, and so on up the food chain? What plants depend on them for pollination? We abound in ignorance on such matters, and we find it easier to fund the research for eradication than for understanding.

So … should we eliminate the deadliest animal on earth? (Let me qualify: other than Homo sapiens.)


To GO or Not to GO?

Pokemon Go has become a delightful and disturbing experiment in the social impact of technology. This new “free” software for smart phones implements an augmented reality, overlaying the popular game on the real world. Fans wander the streets, byways, and public — and in some cases private — spaces, following the elusive characters on their smart phones to capture them “in world”, or to collect virtual items. The uptake has been amazing, approaching Twitter in terms of user-hours just days after introduction. It has also added $12 billion to Nintendo’s stock value (almost doubling it).

Let’s start with “free”, and $12 billion. The trick is a no-holds-barred privacy policy. Not surprisingly, the game knows who you are and where you are. It can also access/use your camera, storage, email/phone contacts, and potentially your full Google account (email contents, Drive contents, etc.). The money comes because all of this is for sale, in real time. (“While you track Pokemon, Pokemon Go tracks you”, USA Today, 12 July 16.) Minimally you can expect to see “Lure Modules” (a game component) used to bring well-vetted (via browser history, email, call history, disk content, etc.) customers into stores that then combine ad-promotions with in-store characters. Perhaps offering your favorite flavor of ice cream, or drawing you into a lawyer’s office that specializes in the issues you have been discussing on email, or a medical office that … well, you get the picture, and those are just the legitimate businesses. Your emails from your bank may encourage less honest folks to lure you into a back alley near an ATM … a genre of crime that has only been rumored so far.

The July 13th issue of USA Today outlines an additional set of considerations. Users are being warned by police, property owners, and various web sites, for various reasons. The potential for wandering into traffic, into a sidewalk obstruction, or over the edge of a cliff while pursuing an elusive virtual target is non-trivial (is there a murder plot hiding in here?). Needless to say, playing while driving creates a desperate need for self-driving cars. Since the targets change with time of day, folks are out at all hours, in all places, doing suspicious things. This triggers calls to police. Some memorial sites, such as Auschwitz and the Washington DC Holocaust Memorial Museum, have asked to be excluded from the play-map. There are clearly educational opportunities that could be built into the game — tracing Boston’s Freedom Trail and requiring player engagement with related topics is one example. However, lacking explicit consideration of the educational context, there are areas where gaming is inappropriate. Also, some public areas are closed after dark, and the game may result in players trespassing in ways not envisioned by the creators, which may create unhealthy interactions with the owners, residents, etc. of the area.

One USA Today article surfaces a concern that very likely was missed by Nintendo, and is exacerbated by the recent deaths of black men in US cities and the shooting of police in Dallas. “For the most part, Pokemon is all fun and games. Yet for many African Americans, especially men, their enjoyment is undercut by fears they may raise suspicion with potentially lethal consequences.” Change the countries and communities involved, and similar concerns may emerge elsewhere. This particular piece ends with an instance of a black youth approaching a policeman who was also playing the game, a positive moment of interaction as they helped each other pursue in-game objectives.

It is said every technology cuts both ways. We can hope that experience and consideration will lead both players and Nintendo to evolve the positive potential of augmented reality, perhaps with a bit more respect for user privacy.

Ethics of Killing with Robots

The recent murder of police officers in Dallas, finally ended by the lethal use of a robot against the shooter, has triggered an awareness of related ethical issues. First, it must be understood that the robot was under human control during its entire mission, so in this particular case it reflects a sophisticated “projection of power” with no autonomous capability. The device might as well have been a drone controlled remotely, simply paralleling the current use of drones (and no doubt other devices) as weapons.

We already have examples of “autonomous” devices as well. Mines, both land and ocean (and eventually space) all reflect devices “programmed to kill” that operate with no human at the trigger. If anything, a lethal frustration with these devices is that they are too dumb, killing long after their intended use.

I saw a comment in one online discussion implying that robots are, or would be, programmed with Asimov’s first law: “Do not harm humans”. But of course this is neither viable at this time (it takes an AI to evaluate that concept) nor a directive likely to be implemented in actual systems. Military and police applications are among the most likely for robotic systems of this kind, and harming humans may be a key objective.

Projecting lethal force at a distance may be one of the few remaining distinctly human characteristics (since we have found animals innovating tools, using language, and so forth). Ever since Homo Whomever (pre-Sapiens, as I understand it) tossed a rock to get dinner, we have been on this slippery slope. The ability to kill a target from a ‘position of safety’ is essentially the basic design criterion for many weapon systems. Homo Whomever may have also crossed the autonomous Rubicon with the first snare or pit-fall trap.

Our challenge is to make sure our systems designers, and those acquiring the systems, have some serious ethical training with practical application. Building in safeguards, expiration dates, decision criteria, etc. should be an essential aspect of lethal autonomous systems design. “Should” is unfortunately the operative word; it is unlikely in many scenarios.
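As a thought experiment, here is a minimal sketch (entirely hypothetical, not any fielded system) of what "expiration dates" and explicit decision criteria could look like as design safeguards; the contrast is a mine that stays armed for decades:

```python
# Hypothetical safeguard pattern for an autonomous device; all names,
# criteria, and numbers are invented for illustration.
from datetime import datetime, timedelta, timezone

class ArmedDevice:
    def __init__(self, lifetime_days, criteria):
        # Expiration date built in at deployment time.
        self.expires = datetime.now(timezone.utc) + timedelta(days=lifetime_days)
        self.criteria = criteria            # explicit predicate functions

    def may_engage(self, target):
        if datetime.now(timezone.utc) >= self.expires:
            return False                    # self-disarm: no lethality after expiry
        return all(check(target) for check in self.criteria)

# The designer (and acquirer) must state the decision criteria explicitly.
device = ArmedDevice(
    lifetime_days=30,
    criteria=[lambda t: t.get("authorized_zone"),
              lambda t: not t.get("civilian")],
)
print(device.may_engage({"authorized_zone": True, "civilian": False}))  # True
print(device.may_engage({"authorized_zone": True, "civilian": True}))   # False
```

Trivial as it is, the sketch shows that the safeguards are design decisions someone must consciously make, fund, and test; they do not appear by default.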

Teaching Computers to Lie

A recent article on the limitations of computer “players” in online games observes that they don’t know about lying. No doubt this is true. Both the detection of lies (which means anticipating them, and in some sense understanding the value of misrepresentation to the other party) and the ability to lie are factors in ‘gaming’. This covers both entertainment games and ‘gaming the system’ — in sales, tax evasion, excusing failures, whatever.

So here is a simple question: Should we teach computers to lie?
(Unfortunately, I don’t expect responses to this question will alter the likely path of game creators, or others who might see value in computers that can lie.) I will also differentiate this from using computers to lie. I can program a computer so that it overstates sales, understates losses, and commits many other forms of fraud. But in that case it is my ethical/legal lapse, not a “decision” on the part of the computer.
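For concreteness, here is a toy sketch of what "teaching a computer to lie" can mean in a game setting: the agent bluffs when the expected value of the lie beats honest play, given an estimate of how often the opponent calls. All the numbers are illustrative assumptions, not from the article.

```python
# Toy bluffing decision for a simplified betting game; values invented.
def bluff_ev(pot, bet, p_call, p_win_if_called):
    """Expected value of betting with a weak hand (a bluff)."""
    fold_ev = (1 - p_call) * pot          # opponent folds, we take the pot
    call_ev = p_call * (p_win_if_called * (pot + bet)
                        - (1 - p_win_if_called) * bet)
    return fold_ev + call_ev

honest_ev = 0.0   # checking the weak hand and losing the showdown
for p_call in (0.2, 0.5, 0.8):
    ev = bluff_ev(pot=10, bet=5, p_call=p_call, p_win_if_called=0.1)
    action = "bluff" if ev > honest_ev else "play honestly"
    print(f"opponent calls {p_call:.0%} of the time -> EV {ev:+.2f}: {action}")
```

Note that the lying falls out of plain optimization once the model includes the other party's beliefs; no "malice module" is required.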

Ethics and Entrepreneurs

The Wall St. Journal outlined a series of the ethical issues facing start-ups, and even larger tech companies: “The Ethical Challenges Facing Entrepreneurs”. Having done time in a few similar situations, I can attest to the temptations that exist. Here are a few of the key issues:

  • The time implications of a startup – many high-tech firms expect employees to be “there” far more than 40 hours per week. Start-ups are even more demanding, with the founders likely to have a period of their lives dominated by these necessities – families, relationships and even individual health can suffer.  What do you owe your relationships, or even yourself?
  • Not in the article, but in the news: in the U.S. many professional employees are “exempt” from overtime pay. This means they can be expected to work “when needed”, but often it seems to be needed every day and every week, yielding 60-hour work weeks (and 50% fewer employees needed to accomplish the work). I did this for most of my life, but also got stock options and bonus pay that allowed me to retire early. I see others in low-paying jobs penalized for not being “part of the team” as exempt employees, even when they have no work to actually perform. Start-ups can project the “founder’s passion” onto others who may not have anywhere near the same share of potential benefit from the outcome. This parallels a point in the article on “Who is really on the team?” — how do you share the pie when things take off? Do you ‘stiff’ the bulk of the early employees and keep it to yourself? Or do you have some millionaire administrative assistants? It sets the personality of your company; trust me, I’ve seen it both ways.
  • Who owns the “IP”? It would be easy if we were talking patents and copyrights (ok, maybe not easy; technologists often get short-changed when their inventions are the foundation of corporate growth, and find themselves looking for a new job). But there are lots of grey areas: was a spin-out idea all yours, or did it arise from the lunch-table discussion? And what do you do when the company rejects your ideas (often to maintain their own focus, which is laudable)? So is your new start-up operation really free and clear of legacy IP?
  • Misrepresentation is a non-trivial temptation. Entrepreneurs are looking to venture capital, to customers, to ongoing investors, and eventually to the business press (“xyz corporation fell short of expectations by 13% this quarter”). On one hand, if you are not optimistic and filled with hopeful expectations you can’t get off the ground. But ultimately a good story will meet the test of real data, and with it your reputation with investors, suppliers, customers, and in the worst case, the courts. There is a difference between “of course our product has ‘abc’” (when you know it doesn’t) and “if that’s what it takes, we will make it with ‘abc’”. I’ve seen both. It’s a pain to do those overtime hours to make it do ‘abc’ because the sales person promised it; it is more of a pain to deal with the lawyers when it was never going to be there. Been there, done that, got the t-shirt (but not the book, I’m glad to say).
  • What do you do with the data? A simple example: I worked for a company developing semiconductor design equipment; we often had the most secret designs from customers, to work out some bug they had discovered. While one aspect of this is clear (it’s theirs), there are more subtle factors, like some innovative component, implicit production methods, or other pieces that a competitor, or even your own operation, may find of value.
  • What is the company role in the community? Some startups are 24/7 focused on their own operation. Some assume employees, and even the corporation should engage beyond the workplace.  Again, early action in this area sets the personality of an organization.  Be aware that technologists are often motivated by purpose as much as money – so being socially conscious may be a winning investment.
  • What is the end game? — Now that you have yours, what do you do with it? — Here I will quote one of the persons mentioned in the article: “The same drive that made me an entrepreneur now drives me to try to save the world.”

I will suggest that such an entrepreneur applies the same ethical outlook at the start of the game as at the end.


It’s 10 PM. Do you know what your model is doing?

“Customers like you have also …” This concept appears, explicitly or implicitly, at many points in the web-of-our-lives, aka the Internet. Specific corporations and aggregate operations are building increasingly sophisticated models of individuals. Not just “like you”, but “you”! Prof. Pedro Domingos at UW, in his book “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World”, suggests this model of you may become a key factor in your ‘public’ interactions.

Examples include having LinkedIn add a “find me a job” button that would conduct interviews with relevant open positions and provide you a list of the best. Or perhaps locating a house, a car, a spouse … well, maybe some things are better done face-to-face.

Apparently an Asian firm, “Deep Knowledge”, has appointed a virtual director to its board. In this case it is a construct designed to detect trends that the human directors might miss. However, one suspects that Apple might want a model of Steve Jobs around for occasional consultation, if not back in control again.
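To ground the “customers like you” phrase, here is a minimal user-based similarity sketch. The data and names are invented, and real systems build far richer models (as Domingos describes); this just shows the basic mechanic of recommending what similar users chose.

```python
# Toy "customers like you" recommender; all data is made up.
from math import sqrt

purchases = {                      # user -> set of items bought
    "you":   {"router", "novel"},
    "ann":   {"router", "novel", "webcam"},
    "bob":   {"novel", "slippers"},
    "carol": {"webcam", "drone"},
}

def similarity(a, b):
    """Cosine similarity between two users' purchase sets."""
    overlap = len(purchases[a] & purchases[b])
    return overlap / sqrt(len(purchases[a]) * len(purchases[b]))

def recommend(user):
    scores = {}
    for other in purchases:
        if other == user:
            continue
        sim = similarity(user, other)
        if sim == 0:
            continue               # ignore users with nothing in common
        for item in purchases[other] - purchases[user]:
            scores[item] = scores.get(item, 0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you"))            # -> ['webcam', 'slippers']
```

The unsettling part is not the arithmetic; it is how much of “you” ends up in the purchases table.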

Privacy and Security

Guest Post from: Marvi Islam

Let me start with privacy and link it to security. Well, all of us know about the privacy settings on Facebook, and we like them because we can hide from our family members the things we do and the people we’re with. But wait, what about security? How is privacy linked to security?

Let’s leave the digital platform and move our focus to our daily lives. We need security in our banks, schools, public places, and even in our homes and parks. But have you ever wondered what price we pay for this non-existent blanket of security? Privacy. Let me reiterate: security at the price of privacy. Those cute little things we see on the ceilings of our school corridors, which we call “CCTV”, are installed for our security. But security from whom? No one bothers to ask. Maybe they (the authorities) want to tape everything in case something bad happens, so that they can go through the tapes and catch perpetrators red-handed. But they are taping every single thing, and we don’t take this as a breach of our privacy?

A number of times these tapes have been misused, causing unpleasantness, and yet it’s considered OK. There’s a famous proverb in Hindi that translates to this: “You have to sacrifice one thing to get another”. Here we sacrifice our privacy to get security. With self-driving cars grabbing all the attention, there goes more data to stay connected and, apparently, “secure”.

Similarly, some companies check what their employees are up to and what they are doing on their computers while at work. From the company’s perspective this is to avoid a plausible breach of sensitive data, but is such constant monitoring even ethical? So, does it really have to be a tradeoff? Security for privacy, and vice versa?

Marvi Islam is from Islamabad, Pakistan and studies at Capital University of Science and Technology, Islamabad. https://www.facebook.com/marvi.islam