Can We Trust For-Profit Corporations to Protect Our Privacy?

By Wilhelm E. J. Klein, June 29th, 2017, in Editorial & Opinion, Magazine Articles, Societal Impact

Can we trust for-profit corporations to protect our privacy? Are these compatible concepts and institutions? These are difficult questions to answer. Corporations can be complex entities, and the ones that provide the technologies, social networks, and electronic gadgets our privacy depends on are no exception. Nevertheless, despite their complexity, all for-profit corporations possess a couple of significant properties that may be sufficient to raise some doubts about their compatibility with privacy in any strong sense. Let us turn to Milton Friedman, one of the fathers and architects of modern capitalism and its legal and economic structures. He described the fundamental characteristics of a modern for-profit company in an interview with Joel Bakan for his book, aptly titled The Corporation:

  A corporation is the property of its stockholders. Its interests are the interests of its stockholders. Now, beyond that, should it spend the stockholders’ money for purposes which it regards as socially responsible but which it cannot connect to its bottom line? The answer I would say is no [1, p. 34].

There is one (moral) imperative, Friedman continues, which is to increase shareholder profit.

Corporate social responsibility, in Friedman’s words, is only justifiable insofar as it takes the form of hypocritical window dressing, as a calculated cost to maximize profits. Genuine care, in comparison, is costly. In this view of corporate social responsibility, according to Friedman, hypocrisy is rendered virtuous and genuine care immoral [1, p. 34]. Because of a corporation’s legal and institutional boundaries, it has to do whatever is necessary to abide by the profit-maximization imperative. As a result, as Joel Bakan concludes, a corporation has to be understood as a pathological entity, as an

“externalising machine, in the same way that a shark is a killing machine (…) There isn’t any question of malevolence of will; the enterprise has within it, and the shark has within it, those characteristics that enable it to do that for which it is designed” [1, p. 70].

(See also [2] for further discussion of the pathological character of corporations.)



While Friedman was asked about corporate social responsibility (CSR) in this particular interview, everything he said also holds true if you replace “CSR” with “privacy.” Like CSR, privacy can only ever be pursued by a corporation as a form of window dressing, in an effort to increase and/or protect profits. It can never be an intrinsic, genuine goal in itself. This was evidenced on a rather large scale when Edward Snowden blew the whistle on the current global surveillance scheme involving numerous governmental agencies and corporations [3]. As Snowden’s revelations suggest, whenever corporations were faced with the decision either to protect the privacy and interests of their clients, and thereby endanger their continued operation and profits, or to give in to governmental pressure and continue business as usual, virtually none chose to protect their clients. Only a very few decided to shut down operations rather than jeopardize their users’ privacy. In fact, the only one I am aware of is Lavabit, a small-scale secure email provider used by Edward Snowden at the time [4]. All of the major technology players – Facebook, Microsoft, Apple, Google, and so on – verifiably chose profit over protection of privacy [5], [6]. Moreover, the reality is that these corporations were, and continue to be, the prime enablers that made the current state of global surveillance possible in the first place [7]. This is because of their use of profit-ensuring but inherently insecure proprietary software (which always leaves the corporation in control) [8], as well as what Shoshana Zuboff calls “surveillance capitalism” business practices [9], which are based on the corporations’ commodification of people’s lives and data.
This casts some doubt on Apple’s recent “strong stance against the FBI,” when it refused to help “break their (i.e., Apple’s) encryption.” One could suspect, for example, that had the FBI not dragged the matter into the public sphere, Apple might have complied in much the same way it already did, and continues to do [10], [11]. One could speculate further that Apple, being a rather clever company, pulled a marketing judo move and turned an unfortunate situation into a very PR-effective opportunity to display its deep concern for the privacy of consumers, coincidentally just before the launch of its newest iPhone [12], [13].



What has been discussed so far might already suffice to prove, if not an outright incompatibility, then at least a certain potential conflict between for-profit corporations and notions of privacy in a strong sense. Yet there is another dimension of corporations’ inherent characteristics worth illuminating: their glaring autocracy and authoritarianism. While in the physical world constitutional law and principles of democracy serve to protect citizens’ rights (such as the protection against unwarranted search and seizure), in the digital space such mechanisms are currently largely absent. There, in places like “Facebookistan,” as Rebecca MacKinnon renamed Facebook, “sovereignty and power are ill-defined and highly contested. The reality is that the corporations and governments that build, operate and govern cyberspace are not being held sufficiently accountable for their exercise of power over the (digital) lives and identities of people who use digital networks. They are sovereign entities operating without the consent of the networked” [13, p. 14]. Furthermore, these sovereign entities are not Plato’s quasi-dictatorial but benevolent philosopher-kings. They are proper, authoritarian dictators with selfish motives and very little to no democratic legitimacy or oversight [14]. This, as I see it, means that within the status quo, where pathological, authoritarian for-profit corporations design, produce, and govern the technologies our privacy depends on, there is very little prospect for the protection of anyone’s privacy.


How, then, are we to deal with all of this? How can we amend the status quo? Is there really no way to coerce for-profit corporations to take account of the interests of private citizens? Could they not, through sufficient consumer pressure, be forced to make it a rational goal within their utility functions to “care,” or at least to protect certain aspects of users’ privacy? Maybe, and some would say this pressure is exactly the reason for Apple’s earlier-mentioned “strong stance” against the FBI.

Unfortunately, such a “solution” would a) not change the fact that a corporation would simply be forced to accept privacy as an unfortunate but necessary expense of continuing business, not as a true goal in itself, and b) even if the argument is valid in principle, in practice it would probably play out mostly in the form of the aforementioned window dressing. The recent emissions scandal that erupted around VW, but has since spread to many other car manufacturers (demonstrating the systemic, not individual, nature of this problem) [15], [16], can be considered evidence for this claim. There, too, one could have argued that consumer pressure would force VW to produce more environmentally friendly cars; instead, the company merely faked compliance. In other words, that option is out the window. What about moving to free and open source software (FOSS) alternatives? Again, in principle, this would be possible. But anyone who has, like me, tried to get friends and family to switch from Facebook, Skype, WhatsApp, and so on to Diaspora, Tox, and Signal knows how incredibly difficult, if not completely impossible, a task that is. The corporate solutions are well polished, easy to use, and already widely established. Unfortunately, most FOSS alternatives possess few if any of these properties and remain unattractive by comparison. How about quitting these technologies altogether, you may ask? Well, while members of older generations might actually consider this an option, no teenager could afford even to dream of boycotting any of these technologies [17, ch. 9]. The very minute they quit, they would be rendered social outcasts. Consequently, to point to consumer choices and the power of the individual user is illusory. There simply is no choice in the matter.

The only solution that appears workable is to tackle the problem from an institutional angle. From that perspective, why not think of the various online social networks the same way we think about our roads, bridges, and water supply? These infrastructures are publicly financed and governed, mostly because they are considered basic and integral to human life, and it therefore makes sense to ensure their provision through public funding. In the 21st century, one could argue that the same basic necessity also applies to certain digital technologies. On the technical level of the Internet, this has already been recognized to a certain degree, as evidenced by the most recent Internet Corporation for Assigned Names and Numbers (ICANN) meeting in Marrakesh. During this meeting, almost two thousand participants from around the world decided to further strengthen ICANN’s (already nonprofit) independence by moving away from U.S. oversight and towards a fully international, community-based, democratic model of governance [18], [19]. In other words, towards a model that is exactly the opposite of the window-dressing, autocratic way in which for-profit corporations provide their technologies.

If this makes sense on the technical, cable, and server level, maybe it is time to think about independent, international, publicly funded, and democratically legitimized institutions to either run and provide, or at least oversee and finance, the application-level digital infrastructures we rely on as well: the social networks, the messaging apps, and so on. Many of the privacy-respecting and privacy-protecting FOSS projects suffer from a critical lack of funding to improve their usability and attractiveness. Why not initiate a broad-scale subsidy scheme to finance these projects collectively? Currently we are, quite literally, paying for these services with the very thing we would like to see protected: our information and data. Would it not be worth considering paying with our taxes instead, and keeping our information and data? What do we have to lose? Well, yes, a lot of individual, private interest groups would probably lose quite a bit of money. But this, I argue, would be massively outweighed by what we as a society could gain as we reclaim our digital lives, our data, and our right to privacy. And yes, any such endeavor would surely constitute a rather large enterprise. But considering that, for example, the United States alone already appears to have sufficient tax funds to build a massive data center capable of spying on and processing traffic from all of the technologies we are talking about [20], surely, at least in principle, we could collectively build something of similar scale that actually benefits people around the world.


Of course, this is only one possible idea out of many, and it may not even be a good one. Yet it would be fallacious to conclude that, simply because an ideal solution has yet to be put forward, the status quo is vindicated or should be considered without alternatives. No matter what the “best” solutions for these technologies’ design, production, and governance turn out to be, it may at least be a good idea now, at the outset of a prospective, much more technologically defined future of humanity, to stop and think about whether for-profit corporations are the best choice to shape this future in general, and the future of privacy in particular.


Wilhelm E. J. Klein is a Ph.D. candidate at the School of Creative Media, City University of Hong Kong, Kowloon Tong, Hong Kong. Email:

