Ethics and Entrepreneurs

The Wall St. Journal outlined a series of ethical issues facing start-ups, and even larger tech companies, in “The Ethical Challenges Facing Entrepreneurs”.  Having done time in a few similar situations, I can attest to the temptations that exist.  Here are a few of the key issues:

  • The time implications of a startup – many high-tech firms expect employees to be “there” far more than 40 hours per week. Start-ups are even more demanding, and the founders are likely to have a period of their lives dominated by the venture’s demands – families, relationships and even individual health can suffer.  What do you owe your relationships, or even yourself?
  • Not in the article, but in the news: in the U.S. many professional employees are “exempt” from overtime pay.  This means they can be expected to work “when needed”, but often it seems to be needed every day and every week, yielding 60-hour work weeks (and 50% fewer employees needed to accomplish the work).  I did this for most of my life, but also got stock options and bonus pay that allowed me to retire early … I see others in low-paying jobs, penalized for not being “part of the team” as exempt employees even when they have no work to actually perform.  Start-ups can project the “founder’s passion” onto others who may not have anywhere near the same share of potential benefit from the outcome.  This parallels a point in the article on “Who is really on the team?” — how do you share the pie when things take off?  Do you ‘stiff’ the bulk of the early employees and keep it to yourself? Or do you have some millionaire administrative assistants? It sets the personality of your company; trust me, I’ve seen it both ways.
  • Who owns the “IP”? — it would be easy if we were talking patents and copyrights (ok, maybe not easy; technologists often get short-changed when their inventions are the foundation of corporate growth and they find themselves looking for a new job). But there are lots of grey areas — was a spin-out idea all yours, or did it arise from the lunch-table discussion? And what do you do when the company rejects your ideas (often to maintain its own focus, which is laudable)?  So is your new start-up operation really free and clear of legacy IP?
  • Misrepresentation is a non-trivial temptation.  Entrepreneurs are pitching to venture capitalists, to customers, to ongoing investors, and eventually answering to the business press (“xyz corporation fell short of expectations by 13% this quarter”).  On one hand, if you are not optimistic and filled with hopeful expectations you can’t get off the ground. But ultimately, a good story will meet the test of real data, and along with it your reputation with investors, suppliers, customers, and in the worst case, the courts.  There is a difference between “of course our product has ‘abc’” (when you know it doesn’t) and “if that’s what it takes, we will make it do ‘abc’”. I’ve seen both – it’s a pain to work those overtime hours to make the product do ‘abc’ because the salesperson promised it. It is more of a pain to deal with the lawyers when it was never going to be there. Been there, done that, got the t-shirt (but not the book, I’m glad to say).
  • What do you do with the data?  A simple example – I worked for a company developing semiconductor design equipment; we often had the most secret designs from customers in hand to work out some bug they had discovered. While one aspect of this is clear (it’s theirs), there are more subtle factors, like some innovative component, implicit production methods, or other pieces that a competitor, or even your own operation, may find of value.
  • What is the company’s role in the community? Some startups are 24/7 focused on their own operation. Some assume employees, and even the corporation itself, should engage beyond the workplace.  Again, early action in this area sets the personality of an organization.  Be aware that technologists are often motivated by purpose as much as money – so being socially conscious may be a winning investment.
  • What is the end game? — Now that you have yours, what do you do with it? — Here I will quote one of the persons mentioned in the article: “The same drive that made me an entrepreneur now drives me to try to save the world.”

I will suggest that this entrepreneur applies the same ethical outlook at the start of the game as he/she does at the end.


It’s 10PM, do you know what your model is doing?

“Customers like you have also …”  This concept appears, explicitly or implicitly, at many points in the web-of-our-lives, aka the Internet. Specific corporations, and data aggregators, are building increasingly sophisticated models of individuals.  Not just “like you”, but “you”! Prof. Pedro Domingos at UW, in his book “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World”, suggests this model of you may become a key factor in your ‘public’ interactions.

Examples include having LinkedIn add a “find me a job” button that would have your model interview for relevant open positions and provide you a list of the best matches.  Or perhaps locating a house, a car, a spouse … well, maybe some things are better done face-2-face.
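For the curious, the mechanics behind a “customers like you” suggestion can be surprisingly simple. Here is a minimal collaborative-filtering sketch; the purchase data and function names are invented for illustration and are not any vendor’s actual system.

```python
import numpy as np

# Invented purchase matrix: rows are customers, columns are products,
# 1 means "bought". Real systems use vastly larger, richer profiles.
purchases = np.array([
    [1, 1, 0, 0, 1],   # "you"
    [1, 1, 1, 0, 1],   # customer A
    [0, 0, 1, 1, 0],   # customer B
    [1, 0, 0, 0, 1],   # customer C
])

def customers_like_you(user, matrix, top_k=3):
    """Score items by how often customers similar to `user` bought them."""
    you = matrix[user]
    norms = np.linalg.norm(matrix, axis=1) * np.linalg.norm(you)
    similarity = (matrix @ you) / np.where(norms == 0, 1, norms)  # cosine similarity
    similarity[user] = 0                 # ignore self-similarity
    scores = similarity @ matrix         # weight each item by how similar its buyers are to you
    scores[you > 0] = 0                  # drop items already bought
    return np.argsort(scores)[::-1][:top_k]

print(customers_like_you(0, purchases))  # e.g., product index 2 ranks first
```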

Apparently an Asian firm, “Deep Knowledge”, has appointed a virtual director to its board. In this case it is a construct designed to detect trends that the human directors might miss.  However, one suspects that Apple might want a model of Steve Jobs around for occasional consultation, if not back in control again.

If the Computer Said it, it must be True!

Well, maybe not.  “What Happens When GPS Can’t Find You?” is a commercial concern raised by a Wall St. Journal article.  Needless to say, a business in today’s world is at risk if the GPS location associated with it is wrong, or if the route required to get there is not correct.  Consumers at best are frustrated, and may simply write off that operation.  In this case it is often not the business’s fault, but a fault in the GPS location service or route mapping.

Behind this is a more pervasive and serious problem.  Often there is no way to “fix” these problems from the perspective of the consumer or an affected business.  You may know the data is wrong and the route doesn’t work, but correcting the error(s) is not a straightforward path, and certainly not easy enough for a “crowd-source” solution to work. That is, many people might find the error, and if there were a simple way to “report” the problem, then after the “nth” report an automated fix (or review) could be triggered.

This is not just a GPS problem. I’ve found many web sites validating addresses against equally flawed sources (perhaps even the USPS).  I can send mail to my daughter (and she gets it); I’ve even seen the mailbox on the side of her street. But one of the web sites I used to deliver items to her location rejects the address as “not known” … and of course there is no way to report the error. A related problem is entering an address in “just the right way” — am I in “Unit A101” or “Apt. A 101” or maybe “Apt A101”? Note that the delivery folks can handle all of these, but the online ordering system can’t.  Technology design consideration: track such ‘failures’, and after some number, check the validation process — or better, have a button such as “I know this is right, so please update the database”.
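A minimal sketch of that “please update the database” idea, assuming a simple report counter and a crude address canonicalizer; the names and the threshold are hypothetical, not any mapping or e-commerce vendor’s actual API.

```python
from collections import Counter

REVIEW_THRESHOLD = 5        # hypothetical: re-check after 5 independent reports
reports = Counter()
review_queue = set()

def normalize(address: str) -> str:
    """Crude canonical form so "Unit A101", "Apt. A 101", and "Apt A101"
    all collapse to the same key."""
    return address.upper().replace(".", "").replace(" ", "").replace("UNIT", "APT")

def report_valid_address(address: str) -> None:
    """Record an "I know this is right" click; queue the address for
    re-validation (or human review) once enough users agree."""
    key = normalize(address)
    reports[key] += 1
    if reports[key] >= REVIEW_THRESHOLD:
        review_queue.add(key)

for claim in ["Unit A101", "Apt. A 101", "Apt A101", "apt a101", "Apt. A101"]:
    report_valid_address(claim)
print(review_queue)   # the shared key appears once 5 matching reports arrive
```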

Online operations, as well as brick-and-mortar businesses, are losing business due to online “presumptions” of correctness … and no corrective processes being available.  It’s one thing when the word processor marks your spelling as “wrong” but lets you keep it anyway.  It is another when medications or essential services can’t reach your location because the GPS or delivery address is not in the database, or is listed incorrectly.

T&S Magazine December 2015


IEEE Technology and Society Magazine

Volume 34, Number 4, December 2015

Departments

President’s Message
3 Improving Our “Engineering-Crazed” Image
Greg Adamson

Book Reviews
4 Hackers, Geniuses, and Geeks
6 Faxed: The Rise and Fall of the Fax Machine

Editorial
9 Reflecting on the Contribution of T&S Magazine to the IEEE
Katina Michael

Open Letter
15 Technology and Change

Interview
16 On the Road with Rick Sare… and Google Glass

Viewpoint
17 Shakespeare, Social Media and Social Networks
Fernando A. Crespo, Sigifredo Laengle, Paula Baldwin Lind and Víctor Hugo Masías

Leading Edge
20 Corporate Individualism – Changing the Face of Capitalism
László G. Lovászy

23 Multimedia and Gaming Technologies for Telerehabilitation of Motor Disabilities
Andrea Proietti, Marco Paoloni, Massimo Panella, Luca Liparulo and Rosa Altilio

31 MoodTrek – A New App to Improve Mental HealthCare
Ganesh Gopalakrishna and Sriram Chellappan

33 Alternative Planning and Land Administration for Future Smart Cities
Soheil Sabri, Abbas Rajabifard, Serene Ho, Mohammad-Reza Namazi-Rad, and Christopher Pettit

Commentary
36 Pharmaco-Electronics Emerge
Joseph R. Carvalko

41 Blockchain Thinking*
Melanie Swan

63 Information Paradox*
Levent V. Orman

Fiction
54 Held Captive in the Cyberworld
Michael Eldred

Last Word
104 Digitus Secundus: The Swipe
Christine Perakslis

Features
74 The Value of Accountability in the Cloud*
Wouter M.P. Steijn and Maartje G.H. Niezen

83 Shaping Our Technological Futures*
Reihana Mohideen and Rob Evans

88 Driver Distraction from Dashboard and Wearable Interfaces*
Robert Rosenberger

100 Are Technologies Innocent?*
Michael Arnold and Christopher Pearce

*Refereed articles.

On the cover: Blockchain Thinking. English Wikipedia/The Opte Project/Creative Commons Attribution 2.5 Generic license.

Ethics of Virtual Reality

The Jan. 4, 2016 Wall St. Journal has an article, “VR Growth Sparks Questions About Effects on Body, Mind”, pointing out, as prior publications have, that 2016 is likely to be the Year of VR. The U.S. Consumer Electronics Show is starting this week in Las Vegas, where many neat, new and re-packaged concepts will be strongly promoted.

The article points to issues of physical health – nausea is one well-documented potential factor. But work has also been taking place on residual effects (how soon should you drive after VR?), how long to remain immersed before you ‘surface’, etc. Perhaps the key consideration is the degree to which our bodies/brains accept the experiences of VR as real — altering our thinking and behaviour. (Prof. Jeremy Bailenson, director of Stanford’s Virtual Human Interaction Lab, confirms this is one impact.)

All of the pundits point out that every new technology has its potential uses/abuses. But that does not excuse us from the specific considerations that might apply to VR.  A point raised in the article: “Scares in VR are borderline immoral.” There is a line of technology from “watching” to “first person” to “immersion” that should be getting our attention.  The dispute over “children impacted by what they watch on TV” is sure to be replayed as we move from first-person shooter video games to VR.  But in VR, you can be the victim as well. I first encountered consideration of the after-effects of rape in a video game environment at an SSIT conference some years ago.  Even with the third-party perspective in that case, the victim was traumatized. No doubt VR will provide a higher impact.  There are no doubt lesser acts that can be directed at a VR participant that will have greater impact in VR than they might with less immersive technology.

This is the time to start sorting out scenarios and possible considerations for vendors of technology, apps and content, and also to watch for the quite predictable unexpected effects.  Do you have any ‘predictions’ for 2016 and the Year of VR?


Predictive Analytics – Rhinos, Elephants, Donkeys and Minority Report

The IEEE Computer Society published “Saving Rhinos with Predictive Analytics” in both IEEE Intelligent Systems and the more widely distributed ‘Computing Edge’ (a compendium of interesting papers taken from 13 of the CS publications and provided to members and technologists at no cost).  The article describes how data-based analysis of both rhino and poacher activity, in concert with AI algorithms, can focus enforcement activities in terms of timing and location, and hopefully save rhinos.
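As a rough illustration of the general approach (not the actual models in the article, which are far more sophisticated), one might train a classifier on historical patrol-grid observations and rank cells by predicted risk for the next patrol window; the features and numbers below are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: distance_to_road_km, distance_to_water_km, rhino_sightings_30d, prior_incidents
X_train = np.array([[1.2, 0.5, 4, 2],
                    [8.0, 3.1, 0, 0],
                    [2.5, 1.0, 3, 1],
                    [6.4, 2.2, 1, 0]])
y_train = np.array([1, 0, 1, 0])          # 1 = a poaching incident occurred in that cell

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

tonight_cells = np.array([[1.0, 0.4, 5, 1],
                          [7.5, 2.8, 0, 0]])
risk = model.predict_proba(tonight_cells)[:, 1]
patrol_order = np.argsort(risk)[::-1]      # send rangers to the riskiest cells first
print(patrol_order, risk)
```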

For those outside of the U.S.: the largest populations of elephants (Republicans) and donkeys (Democrats) are in the U.S. — these animals being symbols for the respective political parties. Now, on the brink of the 2016 presidential primaries, these critters are being aggressively hunted — ok, actually sought after for their votes.  Not surprisingly, the same tools are used to locate, identify and predict the behaviour of these persons.   When I was young (1964) I read a book called The 480, which described the capabilities of that timeframe for computer-based political analysis and targeting. (480 was the number of groupings into which the 68 million voters of 1960 were divided, to identify which groups you needed to attract to win the election.)   21st-century analytics are a bit more sophisticated — with as many as 235 million groups, or one per potential voter (and over 130 million voters likely to vote).  A recent kerfuffle between the Sanders and Clinton campaigns over “ownership/access” to voter records stored on a computer system operated by the Democratic National Committee reflects the importance of this data.  By cross-connecting (data mining) registered voter information with external sources such as web searches, credit card purchases, etc., the candidates can mine this data for cash (donations) and later votes.  A few percentage points’ change in delivering voters to the polls (both figuratively, and by providing rides where needed) in key states can affect the outcome. So knowing each individual is a significant benefit.

Predictive analytics is saving rhinos and affecting the leadership of superpowers. But wait, there’s more.  Remember the movie “Minority Report” (2002)? On the surface, the movie presented apparent computer technology able to predict future crimes by specific individuals — who were arrested to prevent the crimes.  (Spoiler alert) the movie actually proposes that a group of psychics was the real source of insight.  This was consistent with the original story (Philip K. Dick) in 1956, prior to The 480 and the emergence of the computer as a key predictive device.  Here’s the catch: we don’t need the psychics, just the data and the computers.  Just as a specific individual’s vote for a specific candidate, or a specific rhino’s chance of being poached in a specific territory, can be assigned a probability, we are reaching the point where aspects of the ‘Minority Report’ predictions can be realized.

Oddly, in the U.S., governmental collection and use of this level of Big Data is difficult due to privacy illusions, and probably bureaucratic stovepipes and fiefdoms.   These problems do not exist in the private sector.  Widespread data collection on everybody at every opportunity is the norm, and the only limitation on sharing is determining the price.  The result is that your bank or insurance company is more likely than the government to be able to predict your likelihood of being a criminal, a terrorist, or even a victim of a crime.  Big Data super-powers like Google, Amazon, Facebook and Acxiom have even more at their virtual fingertips.

Let’s assume that sufficient data can be obtained, and sufficiently robust AI techniques applied, to identify a specific individual with a high probability of a problematic event — initiating, or being the victim of, a crime in the next week.  And assume this prediction is implicit, or even explicit, in the hands of some corporate entity.  Now what?  What actions should said corporation take? What probability is needed to trigger such actions? What liability exists (or should exist) for failure to take such actions?
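One way to frame the “what probability should trigger action?” question is a simple expected-cost rule, sketched below. The hard part is the costs: they are ethical and legal judgments, not parameters the data can supply, and the numbers here are purely illustrative.

```python
def should_act(p_event: float, cost_of_acting: float, cost_of_missed_event: float) -> bool:
    """Act when the expected cost of doing nothing exceeds the cost of acting."""
    return p_event * cost_of_missed_event > cost_of_acting

# Example: if intervening costs 1 "unit" and a missed incident costs 50,
# any individual scored above 2% would be flagged.
print(should_act(0.03, cost_of_acting=1, cost_of_missed_event=50))   # True
print(should_act(0.01, cost_of_acting=1, cost_of_missed_event=50))   # False
```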

These are issues that the elephants and donkeys will need to consider over the next few years — we can’t expect the rhinos to do the work for us.  We technologists may also have a significant part to play.

ISTAS ’15 Dublin – Irish President Michael Higgins Delivers Opening Address

[Photo: Irish President Michael D. Higgins delivering the opening address at ISTAS ’15 in Dublin]

SSIT was honored with an opening address from Michael D. Higgins, President of Ireland, at the 2015 IEEE-SSIT International Symposium on Technology and Society (ISTAS ’15) on November 11 in Dublin, Ireland. It is the first time a head of state has addressed an ISTAS event. Full coverage of the conference will appear in the January 2016 SSIT e-newsletter and online at ieeessit.org. The President’s remarks will be published in the March 2016 issue of T&S Magazine. A Special Issue on ISTAS ’15 will appear in the September 2016 issue of T&S, and will be guest edited by ISTAS ’15 conference chair, Paul Cunningham.


T&S Magazine September 2015 Contents


Volume 34, Number 3, September 2015

4 President’s Message
Coping with Machines
Greg Adamson

Book Reviews
5 Marketing the Moon: The Selling of the Apollo Lunar Mission
7 Alan Turing: The Enigma

10 Editorial
Resistance is Not Futile, nil desperandum
MG Michael and Katina Michael

13 Letter to the Editor
Technology and Change
Kevin Hu

14 Opinion
Privacy Nightmare: When Baby Monitors Go Bad
Katherine Albrecht and Liz McIntyre

15 From the Editor’s Desk
Robots Don’t Pray
Eugenio Guglielmelli

17 Leading Edge
Unmanned Aircraft: The Rising Risk of Hostile Takeover
Donna A. Dulo

20 Opinion
Automatic Tyranny, Re-Theism, and the Rise of the Reals
Sand Sheff

23 Creating “The Norbert Wiener Media Project”
J. Mitchell Johnson

25 Interview
A Conversation with Lazar Puhalo

88 Last Word
Technological Expeditions and Cognitive Indolence
Christine Perakslis

SPECIAL ISSUE: Norbert Wiener in the 21st Century

33 Guest Editorial
Philip Hall, Heather A. Love and Shiro Uesugi

35 Norbert Wiener: Odd Man Ahead
Mary Catherine Bateson

37 The Next Macy Conference: A New Interdisciplinary Synthesis
Andrew Pickering

39 Ubiquitous Surveillance and Security
Bruce Schneier

41 Reintroducing Wiener: Channeling Norbert in the 21st Century
Flo Conway and Jim Siegelman

44 Securing the Exocortex*
Tamara Bonaci, Jeffrey Herron, Charles Matlack, and Howard Jay Chizeck

52 Wiener’s Prefiguring of a Cybernetic Design Theory*
Thomas Fischer

60 Norbert Wiener and the Counter-Tradition to the Dream of Mastery
D. Hill

64 Down the Rabbit Hole*
Laura Moorhead

Features

74 Opening Pandora’s 3D Printed Box
Phillip Olla

81 Application Areas of Additive Manufacturing
N.J.R. Venekamp and H.Th. Le Fever

*Refereed article.

Information and media authentication for a dependable web

Guest author: Prof. Alessandro Piva (Bio Below)

The wide diffusion of the web and its accessibility through mobile devices has radically changed the way we communicate and the way we collect information about the world we live in. The social impact of such changes is enormous and touches all aspects of our lives, including the shape of social relationships, the process whereby we form our opinions, and how we share them with the rest of the world. At the same time, web surfers and citizens are no longer passive recipients of services and information. On the contrary, the Internet is more and more populated with content directly generated by the users, who routinely share information with each other according to a typical peer-to-peer communication paradigm.

The above changes offer a unique opportunity for a radical improvement in the level of democracy of our society, since, at least in principle, every citizen has the ability to produce globally accessible, first-hand information about any fact or event and to contribute his/her ideas to general discussions while backing them up with evidence and proofs retrieved from the Internet.

The lack of centralized control contributes to the democratic nature of the Internet; at the same time, however, it makes the Internet a very fragile ecosystem that can be easily spoiled. The ease with which false information can be diffused on the web, and the possibility of manipulating digital content through easy-to-use and widely diffused content-processing tools, casts increasing doubt on the validity of the information gathered “on-line” as an accurate and trustworthy representation of reality.

The need to restore and maintain trust in the web as one of our primary sources of information is evident.

Within the IEEE Signal Processing Society, the Information Forensics and Security (IFS) Technical Committee is involved in the signal processing aspects of this issue, with particular attention to multimedia data (see the IEEE Signal Processing Magazine special issue on Digital Forensics, Vol. 26, Issue 2, March 2009). It is a fact that multimedia data play a very special role in the communication of facts, ideas and opinions: images, videos and sounds are often the preferred means of accessing information, because of their immediacy and supposed objectivity. Even today, it is still common for people to trust what they see rather than what they read. Multimedia Forensics (MF) deals with the recovery of information that can be directly used to measure the trustworthiness of digital multimedia content. The IFS Technical Committee organized the First Image Forensics Challenge, which took place in 2013, to provide the research community with an open data set and protocol to evaluate the latest image forensic techniques.
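To give a flavor of what such forensic tools do — this is just one classic screening technique, not the Committee’s methodology — here is a minimal error level analysis sketch using Pillow: re-save a JPEG at a known quality and amplify the difference, so regions with a different compression history tend to stand out.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, scale=15):
    """Amplified difference between an image and a re-saved JPEG copy.
    Bright regions *may* indicate a different compression history;
    this is a screening aid, not proof of tampering."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    return diff.point(lambda value: min(255, value * scale))

# error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```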

However, MF tools alone are not the solution to the authentication issue: several key actions must be undertaken involving technological, legal and societal aspects.

What are your opinions about this topic?

Are we irremediably condemned to base our opinions, beliefs and social activity on information whose reliability cannot be determined?

Do you think that the involvement of a critical mass of researchers with different backgrounds – technological, legal and social – could find a solution?

Are you interested in working on this topic?

===================

Author: Prof. Alessandro Piva

IEEE Signal Processing Society Delegate on the SSIT Board of Governors

Associate Professor @ Department of Information Engineering – University of Florence (Italy)

Alessandro Piva is Associate Professor at the Department of Information Engineering of the University of Florence. He is also head of FORLAB – Forensic Science Laboratory – of the University of Florence. His research interests lie in the areas of Information Forensics and Security, and of Image and Video Processing. On these research topics he has co-authored more than 40 papers published in international journals and 100 papers published in international conference proceedings. He is an IEEE Senior Member and an Associate Member of the IEEE Information Forensics and Security Technical Committee; he has served on many conference PCs, and as associate editor of the IEEE Trans. on Multimedia, the IEEE Trans. on Information Forensics and Security, and the IEEE Trans. on Circuits and Systems for Video Technology. Other professional details appear at: http://lesc.det.unifi.it/en/node/177

Self-Driving Car Ethical Question

There is a classic ethical question, “The Trolley Problem”, which has an interesting parallel in the emerging world of self-driving vehicles.  The original problem posits a situation where 5 persons will be killed if you do not take action, but the action you take will directly kill one person. There are interesting variations on this outlined on the above Wikipedia page.

So, we now have the situation where there are 5 passengers in a self-driving car.  An oncoming vehicle swerves into the lane and will kill the passengers in the car. The car can divert to the sidewalk, but a person there will be killed if that is done.  Note that the question here becomes “how do you program the car software for these decisions?” Which is to say that the programmer is making the decision well in advance of any actual situation.

Let’s up the ante a bit.  There is only one person in the car, but 5 on the sidewalk. If the car diverts, 5 will die; if not, just the one passenger will die. Do you want your car to kill you to save those five persons?  What if it is you and your only child in the car? (Now 2 vs. 5 deaths.) Again, the software developer will be making the decision, either consciously or by default.
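To make the point concrete, here is a deliberately naive sketch of the kind of rule a developer would have to write down long before any crash. It is illustrative only, not a recommendation, and a real system would never reduce the situation to two integers.

```python
def choose_maneuver(occupants_at_risk: int, bystanders_at_risk: int) -> str:
    """Naive "minimize expected deaths" policy, fixed at development time."""
    if bystanders_at_risk < occupants_at_risk:
        return "swerve"          # fewer people on the sidewalk than in the car
    return "stay in lane"        # protect the smaller number; ties favor inaction

print(choose_maneuver(occupants_at_risk=5, bystanders_at_risk=1))  # swerve
print(choose_maneuver(occupants_at_risk=1, bystanders_at_risk=5))  # stay in lane
print(choose_maneuver(occupants_at_risk=2, bystanders_at_risk=5))  # stay in lane
```

Whichever branch the code takes, the ethical choice was made at the keyboard, not at the curb.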

What guidelines do we propose for software developers in this situation?