Whose ethics? False dichotomies of business & government in the machine learning debate

In their book Big Data, Viktor Mayer-Schönberger and Kenneth Cukier use the film Minority Report as a speculative lens to envision a society where decisions are driven by predictive algorithms drawing from a database of personal information. In a chapter titled “Risks”, they warn that “as troubling as the ability of business and government to know our personal information may be, a newer problem emerges with big data: the use of predictions to judge us” (157). Mayer-Schönberger and Cukier follow in the tradition of paranoid science fiction writers of the mid-20th century and sensational journalism of the 21st, forecasting that predictions generated by algorithms drawing on databanks of past behaviour will be used to punish people for what they might do in the future. There are no second chances. No mercy from the machine.

And of course there isn’t! Not in the frustrating paradigm that insists that, where technology is, humanity cannot also be. The moment someone mentions “machine learning,” the very notion of human agency seems to go out the window, to the point where Mayer-Schönberger and Cukier propose in all seriousness to create “a new caste of big-data auditors” called “algorithmists” (184) to oversee organizations’ data-analysis algorithms and thereby ensure that human agency is respected. As Dr Quamen pointed out last class, it should be the job of the CEO (who is a bona fide human being) to ensure in the first place that human ethics remain the paramount priority in the development of algorithms and in the company’s use of data. It is a question of business ethics and, I would add, of critical thinking about what we mean when we refer to “the system”.

[Image: a Minority Report “precog”]
Shh, it’s an allegory. Just go with it.

As Johanna Drucker expounds at length, data is not, and has never been, abstract, free-floating information detached from human intervention; neither are the construction and methods of algorithms. Technophobia ignores the fact that the technological systems we fear and revile were created, structured, installed, and activated by humans in the first place. Someone agreed to this. Someone human.

This post is not an attempt to refute the fact that some deeply scary and totalitarian technologies are being developed in our world today that misuse and abuse citizens’ private data. I’m with you, believe me. Instead, I’m picking a bone with the line of thinking that many seem to default to whenever machine learning is used to collect and analyze user data: referring to the tech in use as the product of an abstract and inhuman system, often dictated by a faceless government (or the lack of one). Anxieties over A.I. are, at this moment, not only naive but complicit in shifting the ethical responsibility for data use from human individuals to the systems themselves.

Mayer-Schönberger and Cukier gravely impart that “penalties based on propensities” à la Minority Report‘s system of precrime are the inherent danger lying dormant within the phenomenon of personal data collection and its use (151). Governments, they conclude, should be held accountable for their data use; government organizations should not be in the business of gathering personal information, or of analyzing it in order to pass judgment on individual citizens – police profiling, for example (159-160). “Algorithmists,” on the other hand, would “provide [a] market-oriented approach to problems to head off more intrusive forms of regulation” (180): a business solution to a government problem. A human ethical standoff against the machinations of an unfeeling regulatory entity.

In a terrifying example of big-data use to profile citizens, Rick Falkvinge reports that China has recently introduced a universal credit score, which is generated by an app called “Sesame Credit” linked to a user’s social media network. Sesame Credit’s algorithms generate an individual’s credit score based on users’ personal information such as buying habits (buy a dishwasher, your credit goes up; buy video games, and it goes down), as well as social media behaviours. Falkvinge reports that “things that will make your score deteriorate include posting political opinions without prior permission, talking about or describing a different history than the official one, or even publishing accurate up-to-date news from the Shanghai stock market collapse (which was and is embarrassing to the Chinese regime).”

What Falkvinge most takes issue with, however, is the fact that a user’s credit score is impacted by their friends’ behaviour: if they display any “negative” behaviours, the machine adjusts the original user’s credit score lower, and can directly affect quality of life – sanctions for “bad behaviour” include limited internet access, or being barred from influential jobs such as reporter, CEO, or government official. Falkvinge states that “as a result, this will very effectively isolate and neuter anybody who posts unofficial political opinions or unofficial history facts.”1
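To make the mechanics of Falkvinge’s complaint concrete, here is a toy sketch of the scoring logic he describes. Everything in it – the weights, the categories, the function name – is hypothetical; Sesame Credit’s actual algorithm is unpublished. The point is simply that the “guilt by association” effect is a single, deliberate line of code that some human chose to write:

```python
# Toy model of the scoring dynamics Falkvinge describes.
# All weights and categories are hypothetical illustrations,
# not Sesame Credit's actual (unpublished) algorithm.

PURCHASE_WEIGHTS = {"dishwasher": +10, "video_games": -10}
FLAGGED_POST_PENALTY = -25   # e.g. an "unofficial" political opinion
FRIEND_PENALTY = -5          # deduction per "misbehaving" friend

def credit_score(base, purchases, flagged_posts, misbehaving_friends):
    """Compute a toy score from one's own behaviour and one's friends'."""
    score = base
    score += sum(PURCHASE_WEIGHTS.get(item, 0) for item in purchases)
    score += FLAGGED_POST_PENALTY * flagged_posts
    # The feature Falkvinge flags as most troubling: your friends'
    # behaviour directly lowers *your* score.
    score += FRIEND_PENALTY * misbehaving_friends
    return score

# A user who bought a dishwasher (+10) but has two flagged friends (-10):
print(credit_score(600, ["dishwasher"], 0, 2))  # 600
```

Note that nothing in this sketch is emergent or mysterious: the isolation effect exists because a programmer, at someone’s direction, added the friend penalty.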

Totalitarian governments’ access to citizens’ private data is indeed a danger, but I’m not buying the “business=good, government=bad” dichotomy that Mayer-Schönberger and Cukier are trying to sell me. Notice how government surveillance and regulation of its citizens’ freedom is made possible by the data collection of private corporations. “This Chinese credit score,” Falkvinge reports, “was introduced by Alibaba and Tencent, China’s IT giants who run the Chinese equivalents of all social networks, and who therefore have any and all data about you.” In Minority Report, Tom Cruise’s character is hounded by police, yes, but is nearly given away by the use of his personal data for targeted advertising purposes.

I suppose my point here is: look at how business and government work together in this case, and in the case of Sesame Credit. Pearl-clutching over consent is all very well and good, but when consent does not even need to be manufactured by governmental organizations because people so desperately want to give away their personal data to social media businesses,2 what is the question again? Collaboration is a buzzword in business circles: collaborators show initiative, drive, loyalty to the company. It’s also an epithet for an individual who betrays their fellows to a hostile, oppressive government.

Governments are not people, but people do make up governments. Sometimes, citizens are even able to elect the people who make up those governments – and in those cases, the hand-wringing over the loss of the human element in automated systems strikes me as particularly disingenuous. This is not to downplay legitimate concerns over the ethics of data use and the employment of algorithms to comb through personal information. But business is not internet users’ salvation from government data gathering and (ab)use; businesses are more likely to be facilitating it. It is pertinent to remember that in both business and government, there are humans behind the wheel, and that “only human” is a cliché for a reason. Assuming that “individual human” automatically and always corresponds to “highest ethical standards” is a good way to make a grave error in judgment.

In his famous essay “Computing Machinery and Intelligence”, Alan Turing frames his proposal of the imitation game with the question “can machines think?” – the article includes a series of anticipated arguments against “thinking machines” and Turing’s answers to them.

Near the end of this, Turing cites Ada Lovelace’s thoughts on the capacity of Charles Babbage’s Analytical Engine to produce original material. The Analytical Engine is often referred to as the ancestor of contemporary computing machinery; Lovelace writes that “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform”.

It is generally agreed that we need to teach machines to perform to ethical standards. “We” meaning “we humans”. And, hopefully, for our sake, “we ethical humans”. Am I naive to expect accountability from the humans who order the machines to perform?


1. By the way, CNN reports that Facebook purchased patent rights to a similar program in August 2015.

2. Go through your Facebook friends and check which of them have listed phone numbers. Or emails. Or place of birth, place of current residence, education, current and past workplaces, significant life events (perhaps even ones that occurred before Facebook was around), or keep large albums of up-to-date photos.

3. Mayer-Schönberger and Cukier’s “algorithmist” will operate much like a consulting plumber or electrician, giving pointers to renovating homeowners on how to correctly construct the addition and swooping in to take a look at a leaky faucet or wiring issue. I know nothing of how businesses are run, but if one is in the business of constructing algorithms and the gathering and use of data, perhaps its CEO or other higher-ups should know how to do their jobs in the first place, instead of hiring a consultant and assuming someone else will take care of it. “Algorithmist” indeed.



Drucker, Johanna. “Humanities Approaches to Graphical Display.” Digital Humanities  Quarterly 5.1 (2011). Web. 22 Oct 2015. <http://www.digitalhumanities.org/dhq/vol/5/1/000091/000091.html>.

Mayer-Schönberger, Viktor, and Kenneth Cukier. Big Data: A Revolution That Will Transform How We Live, Work, and Think. New York: Houghton Mifflin Harcourt, 2013. Print.

Turing, Alan. “Computing Machinery and Intelligence.” Mind: A Quarterly Review of Psychology and Philosophy 59.236 (1950): 433-460. Electronic. 16 Sept 2015. University of Alberta <http://mind.oxfordjournals.org/>.
