What does it mean to be human?

Guest Blog from: John Benedict

“… I’d like to share a revelation that I’ve had during my time here. It came to me when I tried to classify your species, and I realized that you’re not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment; but you humans do not. You move to an area and you multiply, and multiply, until every natural resource is consumed, and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You are a plague, and we… are the cure…”

Let’s hope it doesn’t come to that.

Eighteen years separate the birth of a blind child from his graduation from high school. Eighteen years ago, there were no iPods, the USSR was a superpower, Japan looked to the United States for economic leadership and support, smoking was permitted on airplanes, no companies were doing biotechnology research, and only a handful of mobility and medical specialists taught in the nation’s public schools.

In eighteen more years, today’s blind infants will graduate into a strikingly different world. What we teach these children today will determine how well they survive in that future. We have to make educated guesses about that future (and keep guessing) to prepare them for success.

When a much earlier world changed from a hunting-and-gathering culture to an agricultural age, human relationships were redefined and concepts about space and time changed. The speed of life accelerated. Leadership shifted; old power structures were replaced by the newly empowered. Old definitions and institutions collapsed and new ones took their place.

The hunting-to-survive stage lasted a million years; the agricultural age, another six thousand; the industrial age, three hundred. Some futurists defined an information age and then declared it dead after forty years. The concept of a “job” was itself an invention of the industrial age. It pulled children off the farms and into the cities, where they had to adjust to new spatial and temporal rules. A job required an employee to be at a certain place for a set amount of time, doing repetitive tasks – to “work” at producing things that were not immediately relevant to the individual’s life. In exchange for the loss of an agricultural lifestyle, employers gave steady wages, unaffected by the weather or natural rhythms.

The industrial age saw the creation of vacations, health insurance, and sick days, all consequences of the invention of the job (a new way to work). This change was traumatic for a farm-based agricultural culture, and many resisted. Human beings were no longer “ruled” by their natural rhythms or by the seasons. Respect for the wisdom of the elders of the society declined as their power was bypassed: they no longer controlled the source of wealth, and their knowledge was irrelevant to the new age.

The rules are ever changing in this age of communication. The life cycle of a business is only seven years now. The cycle in technology is down to six months, and in the software business, if a company is to survive, it must bring new products to market within two or three months. There is hardly time to plan; certainly the present is of little help.

The amount of information in the world is doubling every eight years. One-half of everything college students learn in their freshman year is obsolete by the time they graduate. We are asking a typical high school senior to absorb more information than their grandparents absorbed in a lifetime. Our decision load is growing. We are running too fast, making too many decisions too quickly about things we know too little about. How can all these grand ideas about individual web pages, global consciousness, and the coming of massively capable workstations ever be implemented when we hardly have time to eat? This is the major social question facing the beneficiaries of the communications age.

The question remains: with advancements in technology, do we have too little time for what is important and too much for what might not be? Are we missing out on morals and courtesies and relying too much on an online presence? We may be called social beings, but are we stepping away from human interaction? The answers to all of these are terrifying to even think about! It is time we reclaim what we have lost.

I finish this essay as I began it – with a quote from the Matrix trilogy, this time from The Matrix Revolutions.

“…Illusions, Mr. Anderson. Vagaries of perception. Temporary constructs of a feeble human intellect trying desperately to justify an existence that is without meaning or purpose. And all of them as artificial as the Matrix itself, although… only a human mind could invent something as insipid as love…”

The machines may be right, but our entire purpose is built on something as insipid as love.

John Benedict is from Hyderabad, India, and works at Amazon India.

Multitasking vs Focus

Guest Blog from: Chris Fallon

“What do you want to watch on T.V.?”
“I don’t know.  Just put on whatever.”
You fire up the television. The latest episode of said show begins. No sooner does the theme music start than your phone comes out. For the next half hour you sit there looking at your phone, checking back in with the program on T.V. every once in a while.

Sound familiar?

We have become masters of multitasking.  Or, at least, we crave the constant distraction of multitasking.

The ability to juggle tasks is often esteemed in our society. “Sara is a good multitasker; she can deal with a lot on her plate.” It seems like an ideal trait for an employee: the ability to move effortlessly from task to task. And technology, as it demands and divides our attention, is seemingly training us for it.

The problem is that multitasking generally produces worse results for a given task. Psychology Today has a study summary that looks at how multitasking decreases efficiency. Additionally, there are studies showing that technology-based multitasking can harm memory retention and possibly even change brain structure. According to one study out of Wilfrid Laurier University: “[T]hose who preferred to task-switch had more distracting technologies available and were more likely to be off-task than others. Also, those who accessed Facebook had lower GPAs than those who avoided it.”

I’ve long held a theory that true masters of any field are single-minded. Famed psychologist Martin Seligman calls it being in a state of “flow,” which he describes as “being one with the music, time stopping, and the loss of self-consciousness during an absorbing activity.” Time and time again I see this state of flow mentioned, if not by name, by the best of the best in their fields.

“Concentrate; put all your eggs in one basket, and watch that basket...”
Andrew Carnegie

I never could have done what I have done without the habits of punctuality, order, and diligence, without the determination to concentrate myself on one subject at a time…
Charles Dickens

My ability to concentrate and work toward that goal has been my greatest asset.
Jack Nicklaus

Singleness of purpose is one of the chief essentials for success in life, no matter what may be one’s aim.
John D. Rockefeller

That doesn’t sound much like our iPad-to-iPhone-to-T.V.-back-to-iPad daily routine, does it?

I am of the opinion that the less we crave distraction, the less we mindlessly vacillate between our tech devices, the less we spread ourselves thin, the better.  Concentrate on one task at a time and see if you don’t reap the rewards.

Are you happier or more productive now with the multitasking that technology encourages, or do you prefer a single task focus?

Chris Fallon lives in Raleigh, North Carolina, and is the marketing director for axcontrol.com.

Health App Standards Needed

Guest Blog from: John Torous MD, Harvard

Last year, the British National Health Service (NHS) thought it was showing the world how healthcare systems can best utilize smartphone apps – but instead it provided a catastrophic example of a failure to consider the social implications of technology. The demise of the NHS ‘App Library’ now serves as a warning of the perils of neglecting the technical aspects of mobile healthcare solutions – and as a call for greater involvement of IEEE members at this evolving intersection of healthcare and technology.

The NHS App Library offered a tool where patients could look online to find safe, secure, and effective smartphone apps to assist with their medical conditions. From major depressive disorder to diabetes, app developers submitted apps that were screened, reviewed, and evaluated by the NHS before being either approved or rejected for inclusion in the App Library. Millions of patients came to trust the App Library as a source of high-quality, secure apps. Then, one day in October 2015, the App Library was gone. Researchers had uncovered serious privacy and security vulnerabilities, with approved apps actually leaving patient data unprotected and exposed. Further findings that many approved apps also lacked any clinical evidence added to the damage. Overnight the NHS quietly removed the website (http://www.nhs.uk/pages/healthappslibrary.aspx), although the national press caught on and there was a public outcry.

As an IEEE member and an MD, I see both the potential and the peril of mobile technologies like apps for healthcare. Mobile technologies like smartphone apps offer the promise of connecting millions of patients to immediate care, revolutionizing how we collect real-time symptom data, and in many cases offering on-the-go, live health monitoring and support. But mobile technologies also present serious security vulnerabilities, potentially leaving sensitive patient medical information in the public sphere. And without standards to guide development, the world of medical apps has become a chaotic and treacherous space. Simply go to the Apple or Android app store, type in ‘depression,’ and observe what that search returns: a sea of snake oil – apps with no security or data standards, and no clinical evidence, being marketed directly to those who are ill.
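
To see how low the bar is, consider the most basic safeguard that the apps faulted in the NHS audit reportedly lacked: encrypting patient data before it is stored. The sketch below is mine, not drawn from any specific app, and assumes Python’s third-party cryptography package; the record fields are hypothetical.

```python
# Minimal sketch: encrypt a health record before it touches disk.
# Assumes the third-party "cryptography" package (pip install cryptography).
# Illustrative only -- a real app also needs key management, authentication,
# and secure transport, none of which is shown here.
import json
from cryptography.fernet import Fernet

# In practice the key belongs in a platform keystore, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {"patient_id": "12345", "phq9_score": 14, "date": "2016-03-01"}

# Encrypt before writing; anyone reading the file sees only ciphertext.
with open("mood_log.enc", "wb") as f:
    f.write(cipher.encrypt(json.dumps(record).encode("utf-8")))

# Decrypt on read.
with open("mood_log.enc", "rb") as f:
    restored = json.loads(cipher.decrypt(f.read()).decode("utf-8"))
assert restored == record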

The situation is especially concerning for mental illnesses. Many mental illnesses may be thought of in part as behavioral disorders, and mobile technologies like smartphones have the potential to objectively record these behavioral symptoms. Smartphones also have the potential to offer real-time interventions via various forms of e-therapy. Thus mobile technology holds the potential to transform how we diagnose, monitor, and even treat mental illnesses. But mental health data is also some of the most sensitive healthcare data, and it can quickly ruin lives if improperly disclosed or released. And the clinical evidence for the efficacy of smartphone apps for mental illness is still nascent. Yet this has not held back a sea of commercial apps that are today directly available for download and marketed directly to those whose illness may at times impair clear thinking and optimal decision making.

If there is one area where the societal and social implications of technology are actively in motion and in need of guidance, mobile technology for mental healthcare is it. There is an immediate need for education and standards regarding consumer-facing mobile collection, transmission, and storage of healthcare data. There is also a broader need for tools to standardize healthcare apps so that data is more unified and there is greater interoperability. Apple and Android each have their own healthcare app and device standards, via Apple’s ResearchKit and Android’s ResearchStack – but more fundamental standards are needed. For mobile mental health to reach its promised potential of transforming healthcare, it first needs an internal transformation – one led in part by the IEEE Society on Social Implications of Technology, global mental health campaigns (changedirections.org), forward-thinking engineers, dedicated clinicians, and of course diverse patients.

If you are interested in tracking standards and developments in this area, please join the LinkedIn Mobile Mental Health App Standards group at: http://is.gd/MHealthAppGroup

John Torous MD is an IEEE member and currently a clinical fellow in psychiatry at Harvard Medical School. He has a BS in electrical engineering and computer sciences from UC Berkeley and a medical degree from UC San Diego. He serves as editor-in-chief of JMIR Mental Health (http://mental.jmir.org/), the leading academic journal on technology and mental health; leads the American Psychiatric Association’s task force on the evaluation of commercial smartphone apps; and co-chairs the Massachusetts Psychiatric Society’s Health Information Technology Committee.

Employee Cell Phone Tracking

An employee in California was allegedly fired for removing a tracking app from her cell phone; the app had been used to track her on-the-job and after-hours travel and locations. The app in question was Xora (now part of ClickSoftware).
Here are some relevant, interesting points.

  • Presumably the cell phone was provided by her employer. It may be that she was not required to have it turned on during off hours
    (but it is easy to envision jobs where 24-hour on-call availability is expected).
  • There are clear business uses for the tracking app, which determined time of arrival at and departure from customer sites, routes taken, etc. (a sketch of this geofence logic follows the list).
  • There are more intrusive aspects, which shade into the objectionable when off-hours use is considered: tracking locations, time spent there, routes, breaks, etc. Presumably such logs could be of value in divorce suits, legal actions, and the like.
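
The arrival/departure logging in the second point above is ordinary geofencing. Here is a minimal sketch of that logic – my own illustration, assuming simple GPS pings; Xora’s actual implementation is not public, and the site coordinates and radius are hypothetical:

```python
# Minimal sketch of geofence-based arrival/departure logging.
# Hypothetical site and radius; not XORA's actual code.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

SITE = (35.7796, -78.6382)  # hypothetical customer site
RADIUS_KM = 0.15            # geofence radius around the site

def arrival_departure_events(pings):
    """pings: iterable of (timestamp, lat, lon); yields (timestamp, event)."""
    inside = False
    for ts, lat, lon in pings:
        now_inside = haversine_km(lat, lon, *SITE) <= RADIUS_KM
        if now_inside and not inside:
            yield (ts, "ARRIVED")
        elif inside and not now_inside:
            yield (ts, "DEPARTED")
        inside = now_inside
```

The same few lines, fed by a phone that is never switched off, are what turn a dispatch tool into a round-the-clock location log.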

Consider some variations of the scenario —

  1. Employee fired for inappropriate after-hours activities.
  2. Detection of employees interviewing for other jobs
    (or of a whistleblower reporting their employer to the authorities).
  3. Possible “blackmail” using information about an employee’s off-hours activities.
  4. What responsibility does the employer have for turning over records in various legal situations?
  5. What record-retention policies are required? Do various privacy notifications, policies, or laws apply?
  6. What if the employer required the app to be on a personal phone, not one that was supplied?

When is this type of tracking appropriate, and when is it not?

I’ve marked this with the “Internet of Things” tag as well — while the example is a cell phone, similar activities occur with in-car (and in-truck) monitoring devices, medical monitoring devices, employer-provided tablets and laptops, and no doubt new devices not yet on the market.

Who is Driving My Car (revisited)

Apparently my auto insurance company was not reading my recent blog entry. They introduced a device, “In-Drive,” that will monitor my driving habits and provide a discount (or increase) in my insurance rates.

There are a few small problems. The device connects to the car’s diagnostic port, which could allow it – or a hacker (see prior blog entry) – to take control of the car (brakes, acceleration, etc.). It is connected to the mothership (ET phones home), and that channel can be used both ways, so the hacker who takes over my car can be anywhere in the world. I can think of three scenarios where this is actually feasible. (A sketch of what the diagnostic port exposes follows the list.)

  1. Someone wants to kill the driver (very focused, difficult to detect).
  2. Blackmail – bad guys crash a couple of cars, or threaten to, and demand payment to avoid mayhem. (What would the insurance company CEO say to such a demand? Don’t they have insurance for this?)
  3. Terrorism – while many cyber attacks do not yield the requisite “blood on the front page” impact that terrorists seek, this path can. Imagine ten thousand cars all accelerating and losing their brakes at the same time; it would probably get the desired coverage.
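
To make the attack surface concrete: the diagnostic (OBD-II) port the device plugs into hands live vehicle data to anything connected to it. Here is a minimal read-only sketch, assuming the open-source python-OBD library and an ELM327-style adapter; the control messages an attacker would abuse travel the same CAN bus and are deliberately not shown.

```python
# Minimal sketch: read live vehicle data from the OBD-II diagnostic port.
# Assumes the third-party "obd" package (pip install obd) and an
# ELM327-compatible adapter in the same port a dongle like In-Drive uses.
import obd

connection = obd.OBD()  # auto-detects the adapter's serial port

# A few of the many live parameters the port exposes.
for cmd in (obd.commands.SPEED, obd.commands.RPM, obd.commands.THROTTLE_POS):
    response = connection.query(cmd)
    if not response.is_null():
        print(cmd.name, response.value)
```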

As previously mentioned, proper software engineering (now a licensable profession in the U.S.) could minimize this security risk.

Then there is privacy. The insurance company’s privacy policy does not allow them to collect the data that their web page claims this device will collect – so clearly privacy is an afterthought in this case. Exactly what data is collected is unclear: there is a statement about the type of data collected and then, a few FAQs later, a contradictory indication that the location data is only accurate within a forty-square-mile area, except maybe when it is more accurate. What is stored, for what period of time, accessible to which interested parties (say, a divorce lawyer), and with what protections is unclear.

A different insurance company, Anthem, encountered a major attack that compromised identity information (at least) for a large number of persons. I’m just a bit skeptical that my auto insurance company has analyzed that situation and upgraded its systems to avoid similar breaches and losses of data. For those wondering what types of privacy policies might make sense, I encourage you to review the OECD privacy principles and examples. Organizations that are actually concerned with privacy would be covering all of these bases, at least in their privacy statements. (Of course they can do this and still have highly objectionable policies, or change their policies without notice.)

Phony Cell Towers (who, why, …)

Popular Science magazine ran an article on “Who is running the phony cell phone towers?”, along with a map of the twenty-plus that had been located. These “towers” look like a local service tower to all cell phones in range and can capture some “metadata” (phone number, ID, location info) without any need to decrypt actual calls – though they could do that too, with some additional effort.

Variations of this technology, “Stingray” and “Triggerfish,” are available for sale, perhaps with some limitations on buyers — at least from major manufacturers like Harris. How these are being used in the U.S. is being carefully protected, according to a 2011 Wall Street Journal article. Popular Science indicates that a unit could be constructed for as little as $2000 by a knowledgeable hacker (at a maker-space near you, no doubt), but did not point to any kits, plans, or software currently available on the net.

While the question posed by Popular Science and some other publications covering this recent survey of phony towers is “who is doing it?”, a more relevant observation is that any entity with sufficient resources and interest can do it, in any country. It is probably illegal in most if not all countries, at least for non-governmental agencies, but given its low cost, low profile, and difficult-to-detect characteristics, you can bet it is being done. There are phones that can detect and reject these tower connections, which is what the really bad guys might use (or disposable phones that they trash after every use, which might be cheaper).
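
The detection side is conceptually simple: compare the tower identifiers your phone reports against a database of previously surveyed, legitimate towers and flag anything unfamiliar. A minimal sketch follows – the data file and observed tower are hypothetical, and real detectors built on crowdsourced databases such as OpenCellID also look for signals like encryption downgrades:

```python
# Minimal sketch: flag a cell tower whose (MCC, MNC, LAC, cell ID) tuple
# is absent from a survey of known-legitimate towers. The CSV file and
# the observed tower below are hypothetical.
import csv

def load_known_towers(path):
    """Load (mcc, mnc, lac, cid) tuples from a CSV with those column names."""
    with open(path, newline="") as f:
        return {(r["mcc"], r["mnc"], r["lac"], r["cid"]) for r in csv.DictReader(f)}

def check_tower(observed, known):
    """observed: (mcc, mnc, lac, cid) as reported by the phone's modem."""
    return "OK" if observed in known else "SUSPECT: not in survey (possible phony tower)"

known = load_known_towers("towers.csv")  # hypothetical survey file
print(check_tower(("310", "410", "2182", "11023"), known))
```

The obvious weakness: a genuinely new, legitimate tower trips the same alarm, which is why a whitelist check is only one signal among several in practice.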

While the “NSA” data collection revelations have sparked a lot of interest, and apparent “surprise” from foreign officials, this potentially more “democratic” capability (everyone can do it) has not gotten the same press. Of course the opportunity for abuse is much greater with a comprehensive program managed by government entities, but the opportunity is there for unscrupulous actors to monitor our cellular presence. (Note that just having your phone on enables this tracking; no calls are required.)

Technology has addressed the “how, what, when, and where” issues; the “who and why” answers will vary from country to country — and may spawn a new form of paparazzi as well.

Enslaved by Technology?

A recent formal debate in Australia, “We Are Becoming Enslaved by Our Technology,” addresses this question (90 min): a look at the upside and the downside of technological advances, with three experts addressing each side of the question.

One key point made by some of the speakers is the lopsided advantage that technology may give to government abuse. One example is captured in the quote “a cell phone is a surveillance device that also provides communications” (Bernard Keane) — a device that delivers continuous location, connectivity, app, and search presence to whoever is watching.

Much of the discussion focuses on the term “enslave,” as opposed to “control,” and on the question of choice: to what degree do we have “choice,” or are we perhaps trying to absolve ourselves of responsibility by putting the blame on technology?

Perhaps the key issue is the catchall “technology.” There are examples of technology — vaccines, for instance — where the objectives and ‘obvious’ uses are beneficial (though one can envision abuse by the corporations or countries creating them). Then there are the variations in weapons, eavesdropping, big-data analysis vs. privacy, and so on. Much of technology is double-edged, with impacts both pro and con (and of course individuals have different views of what counts as a good impact).

A few things are not debatable (IMHO):
1. Technology is advancing rapidly on all fronts.
2. The driving interests tend to be corporate profit and government agendas, with inventor curiosity and perhaps at times altruistic benefit to humanity in some cases.
3. There exists no coherent way to anticipate the unintended consequences, much less to predict the abuses or discuss them in advance.

So, are we enslaved? … YOU WILL RESPOND TO THIS QUESTION! (Oh, excuse me…)

Comparing British and Japanese Perceptions of a Wearable Ubiquitous Monitoring Device

T&S Paper by Stuart Moran, Toyoaki Nishida, and Keiichi Nakata, Winter 2013
Mixed Reality Lab., Univ. Nottingham, Nottingham, UK

Abstract: Ubiquitous Monitoring (UM) describes the continuous collection of data on a large scale, enabled by embedded, mobile, wireless, and sensory technologies. This data will enable the envisioned applications of Ubiquitous Computing. Research has shown that monitoring can affect user behavior, which is problematic for ubiquitous computing because the data collected may not fully reflect reality; hence, any services provided may not fully align with user expectations or needs. One proposed solution is the use of deterministic models to predict the behaviors of users prior to deployment, reducing the undesirable effects of monitoring. The Perceptions of System Attributes–Behavioral Intention (PSA-BI) model was specifically designed for this purpose [1]. While the model has been validated, the moderating effect of culture has not yet been explored. As such, we present here results from a study carried out in the U.K. exploring the relationships in the PSA-BI model. This is then compared with a structural model from a previous study in Japan, allowing us to explore any potential differences and similarities.

Asynchronous Adaptations to Complex Social Interactions

T&S Paper by Sally Applin and Michael Fischer, Winter 2013
Centre for Social Anthropology & Comput., Univ. of Kent, Canterbury, UK

Abstract: The permeation of the mobile platform is creating a shift in community behavior. What began with a few individuals, has now quickly replicated as many people communicate not only through mobile phones, but through smartphones that are multi-functioning communications computers. Mobile devices have broadened people’s capability and reach, and within that context, people have adapted their behavior to adjust to communications “on the go.” In this article we explore how multiplexed networked individuated communications are creating new contexts for human behavior within communities, particularly noting the shift from synchronous to asynchronous communication as an adaptation.