I. Introduction
In a forthcoming bookFootnote 1 I shall argue that, although it took a long time for free speech and expression to be extended to all members of a society, nonetheless, through different ages and cultures from antiquity to the present, the award of free speech, even to some, was seen as resulting in speech of a beneficial sort.
I further argue that John Stuart Mill supported free speech on the grounds of the beneficial effects it can have, centrally, learning from each other in discussion, and that he made this very explicit in Chapter 2 of his On Liberty of 1859, and subsequent writers have followed him. If understanding this can lead to voluntary self-restraint in speech, this should normally be better than legal constraint for keeping speech beneficial. Conversely, speech that interferes with these benefits, as can, for example, contemptuous religious abuse, does not secure the benefits for which free speech has been valued.
I finish the book with a corollary about a problem of our time, which is what I shall summarize here. Legal constraint is, after all, required in connection with the funding methods chosen by some, not all, social media. It is needed if social media profits are based on using personal data left online by users, in order to create personal profiles of them, and on using the profiles to help advertisers and propagandists target people with different disinformation according to what their profiles suggest will motivate them. I try to spell out the bad effects of this and to counter the suggestion that if and when social media produce such effects, they can still claim to be supporting free speech as it has been understood in the past. With reference to government documents, I propose legislation and cite one possible method of enforcement.
The online social media have brought us great benefits in almost instantaneous access to information and to conversation across national boundaries. I think the overall benefits could remain great if the problems arising from their funding methods were addressed. Methods of operation change all the time, so readers will need to check how they have changed. Although I shall mention Facebook most often among the social media, this is chiefly because its pioneering techniques have been the most discussed in the literature. Many other social media use some of the same techniques, which indeed started well before the creation of Facebook.
II. The Funding Methodology of Some Social Media
Some social media make record-breaking profits from the use of personal profiles constructed by inference from the footprints left by their users on their own platforms, and on the platforms of other companies with whom an exchange of personal data has been arranged.Footnote 2 Personal data can be bought and sold, but they can also be exchanged free of charge between companies, if that is more profitable. One company might be able to pool, for exchange purposes, its data about what information people have searched, another about what they have contemplated buying, and another about what they have said online. Advertisers or propagandists will pay social media companies for targeting different individuals with different messages, according to what their different profiles suggest will attract them. Commercial advertising is controlled, but propagandists may supply disinformation, as we shall see below in the context of personalized messages designed to swing elections.
Profiles are also compiled from the responses sought by Facebook from 2010 onward, and after that by many other social media companies, to queries about what messages people like or dislike or want to share with others. This could reveal much more than purchasing interests.
Personal profiles can be of different sorts. Commercial companies may only want advertisements targeted to profiles that show interests relevant to sales. For propagandists, it might be more useful to have character and susceptibilities targeted. Facebook says that it profiles only interests, not characteristics, according to page 39 of a report by the UK Information Commissioner’s Office.Footnote 3 But according to page 38 of the report, Facebook, like Twitter, Google, and Snap (page 41), did not make clear that advertising included political advertising. So readers are left unclear whether political advertising did or did not assess character or susceptibilities as well as interests. On page 40, note 19, the report states that homosexual interest was targeted with advertisements by Facebook, according to a study by Madrid University. This illustrates that even commercial profiling of interests can be intrusive.
The founder of Facebook, Mark Zuckerberg, prefers to speak of trading rather than selling for some purposes. He described the advertising process as “showing the ads to the right people without that data changing hands and going to the advertiser.” I think this may mean that users’ personal data, and perhaps also the resulting profiles, are not sold to advertisers, along with advertising space, but only used to create and send to customers the advertisement for which the advertiser pays. But Michael Kosinski has objected that, although Facebook does the targeting, it can be the advertiser who creates the initial profile of the class of people it wants to reach, in which case something is sold to the advertiser: knowledge of which individuals fit that profile. As he puts it,
If a ski shop pays Facebook to show an ad only to women, then Facebook automatically reveals to the ski shop that all people who clicked on the link [in response to the advertisement] must be women. And in practice, the advertisers make requests that are much more nuanced. They may ask Facebook to show their ad to “liberal Latina women without college education who live in San Antonio and recently got married.” And then they might place a separate ad that is shown only to “conservative African-American women with college educations who live in Austin and are single.” When you click on an ad and are sent to an advertiser’s website, the advertiser knows which ad you saw and thus which bucket you fall in.Footnote 4
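Kosinski’s point can be made concrete with a small sketch (all names, URLs, and audience descriptions here are invented for illustration): if each audience “bucket” is shown an advertisement carrying its own distinct link, then a click alone tells the advertiser which bucket the visitor falls into, even though no profile data ever change hands.

```python
# Toy illustration (invented data) of how distinct ad links reveal
# audience membership to an advertiser, per Kosinski's example.

# The advertiser asks the platform to show each ad only to one bucket,
# and gives each bucket's ad a different landing URL.
ad_links = {
    "https://shop.example/offer?aud=a1":
        "liberal Latina women, San Antonio, recently married",
    "https://shop.example/offer?aud=a2":
        "conservative African-American women, Austin, single",
}

def bucket_from_click(clicked_url: str) -> str:
    """The platform never hands over profile data, but the landing URL
    itself tells the advertiser which bucket the visitor belongs to."""
    return ad_links[clicked_url]

# Any visitor arriving via the second link is known to fit the second profile.
print(bucket_from_click("https://shop.example/offer?aud=a2"))
```

The “nuance” Kosinski describes lies entirely in the platform’s targeting; the advertiser’s inference afterward is trivial, as the sketch shows.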
Facebook offers help to advertisers, not only by profiling existing clients, but also by finding new clients by using the profiles of “look-alike” groups of people, who seem to have traits similar to existing clients and might therefore be easy to interest.Footnote 5 Facebook does not define precisely how it measures similarity, nor does it disclose its algorithms for selecting the look-alike groups.Footnote 6
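Since Facebook discloses neither its similarity measure nor its selection algorithms, any reconstruction is conjectural; but the general idea of look-alike selection can be illustrated with a simple sketch (all data invented) in which candidate users, represented as feature vectors, are ranked by their closeness to the average of an advertiser’s existing clients.

```python
# A conjectural sketch of "look-alike" matching; Facebook's actual
# algorithm is undisclosed. Users are represented as invented feature
# vectors, and candidates closest to the average existing client win.
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def look_alikes(clients, candidates, k=2):
    # Centroid (average) of the advertiser's existing clients.
    n = len(clients)
    centroid = [sum(vec[i] for vec in clients) / n for i in range(len(clients[0]))]
    # Rank candidate users by similarity to that centroid.
    ranked = sorted(candidates, key=lambda c: cosine(c[1], centroid), reverse=True)
    return [name for name, _ in ranked[:k]]

clients = [[1.0, 0.8, 0.1], [0.9, 1.0, 0.0]]            # seed audience
candidates = [("u1", [0.95, 0.9, 0.05]),
              ("u2", [0.0, 0.1, 1.0]),
              ("u3", [0.8, 0.7, 0.2])]
print(look_alikes(clients, candidates))  # the two users most like the seed group
```

Whatever measure is actually used, the essential point stands: people who never dealt with the advertiser can be selected purely by resemblance to those who did.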
Personal profiles may be compiled from various data, pooled through the inter-company exchanges already described. In the UK, the exchange of personal data between companies was described in the Final Report of the House of Commons Digital, Culture, Media and Sport Committee of February 2019, entitled Disinformation and “Fake News.”Footnote 7 It quoted emails from within the Facebook company, three of them (paragraphs 97, 98, and 105) about maximizing revenue by exchanging data, and written by the founder Mark Zuckerberg himself in October and November 2012.Footnote 8 Facebook allowed (paragraph 84) access to profiles to 5,200 “apps,” including Netflix, AirBnB, and the taxi app Lyft, and the price for access in Autumn 2013 (paragraph 100) was U.S. $250,000 a year, or “equivalent contributions.”Footnote 9 In addition, access to Facebook users’ profiles was allowed (paragraph 27), in return for exchange of information, to such major Internet companies as Microsoft, Amazon, and Spotify. The Final UK Report of 2019 also spoke of earlier confrontations between the Federal Trade Commission in the United States and Facebook about the exchange of personal profiles in return for equivalent contributions.Footnote 10
It will emerge below, in connection with propaganda for swinging elections, that another technique was used with Facebook’s knowledge: someone (Wylie) confessed to using an Internet application that harvested not only the personal data of Facebook users but also, multiplying its reach many times over, the data of each user’s Facebook “friends,” in order to create personality profiles of that user. Facebook thus knew that somebody was creating personality profiles from its data, and asked him only for assurance that it was for academic purposes. Its subsequent objection was that it did not know that he was selling the profiles commercially.
Personal profiles can also be compiled from other sources. One source, used by diverse parties, is the tracking devices called web bugs, which had been installed in websites and emails long before Facebook, even in the 1990s, in order to track user activity.Footnote 11 Web bugs are typically invisible images that report our activity back to their owners as soon as we open a page or email containing them, and the data collected can then be used to return advertising messages to us. Facebook also collects data and profiles from unregulated data-brokering companies.Footnote 12 Data brokers are companies that buy up data, often inaccurate, from a large range of sources and sell it to people who want information about you.Footnote 13 The personal profiles based on these mixed sources can be supplemented by inferences from the personal data so gained.
III. Jeopardizing the Democratic Vote
One problem about trading in profiles with advertisers is that the advertisers are not only the commercial advertisers whom the name might suggest. They can include propagandists who want to swing elections without voters knowing what has happened. The information contained in personal profiles is unprecedentedly rich, and propagandists can use differentiated disinformation targeted to different voters to play on individual susceptibilities ascertained from the profiles. The facilitation of targeted propaganda has been carried out not only by Facebook but by a number of social media companies, and I shall mention below Twitter, founded in 2006.
Another result of targeting different content to different recipients on the basis of different profiles is that people are kept in separate echo chambers, as they have been called, hearing what echoes their own opinions, rather than learning the truth. Truth is a casualty, and this matters particularly when the content is political.
When personal profiles based on free speech are used to target voters, the practice can make a mockery of the democratic vote. One reason why the practice is so damaging is that democracy depends on the readiness of voters whose candidates lose to accept that more of their fellow citizens have freely chosen to vote the other way. If their fellow citizens’ vote has not been free, but manipulated, where is the basis for accepting the result? What has happened in this case is that profiles based on our free speech have been used to nullify a particularly important act of free speech, one’s vote. It is significant that one’s vote is called one’s voice in Russian.
IV. Better Methodologies for Funding Social Media
There are other social media that do not seek record-breaking profits by trading in personal profiles, and there are private search engines that to different degrees protect our searches from being tracked, such as DuckDuckGo, Startpage, and Search Encrypt. They need to be encouraged. Some of them may use the earlier method of funding social media by subscription from users, which would be better still if subscriptions were varied according to ability to pay in different countries or parts of a country. But there are other acceptable methods of funding too. DuckDuckGo does not use your personal data when you search for information. It allows advertisers to promote items connected not with your personal data but with the search term you used. And if your search results in a sale from Amazon or eBay, it also gains a commission from those sellers.
It is important that the public should know the method used. Tim Berners-Lee, the founder of the World Wide Web, was still suggesting in 2017Footnote 14 that social media on the Internet might avoid depending on advertisers for their costs, if they depended instead on subscriptions and micropayments from users. When in 2014 Facebook bought up the company WhatsApp, the latter company at that time respected the privacy of users’ data, by funding itself not through selling personal data to advertisers, but through a very modest user subscription of 99 cents per year. Facebook, however, is reported to have bought up WhatsApp to gain the lucrative personal data on its users, without initially charging for its use at all, and to have pooled the data of the two companies.Footnote 15 The data explicitly mentioned here were data on the behavior of users. In May 2017, the European Union fined Facebook 122 million dollars for “misleading” statements about the takeover of WhatsApp.Footnote 16 Facebook’s founder Mark Zuckerberg is also allegedFootnote 17 to have assured the founders of WhatsApp that he would not share their users’ data with other applications, and to have told the European Union, which approved the merger, that he had no way of matching the two sets of users. But it is reportedFootnote 18 that when Facebook found a way, it connected the two sets of users and shared with advertisers the profiles of WhatsApp users.
It is a mistaken idea that the collection of personal data is something good in itself.Footnote 19 But on the other hand, not all collection of personal data is illegitimate. I myself would have been interested to know whether the books in a series I started were catering, as intended, to nonspecialists as well as specialists, and it could have helped me to cater better to nonspecialists if I had known who was buying the books. I have no personal objection to Amazon recording the history of books I have bought online from them, in order to tell me of other books I might like, so long as they are not giving the data to others.
V. How Much Is Revealed to Users about the Construction of Their Profiles?
It already seems somewhat surprising that Facebook did not ask its users if it might trade in their personal data. Of course, many users might have agreed to the trading, at least before they knew how their personal data would be re-used, because they were getting a very great benefit, free access to conversation across the world on the new social media. But as they were not asked, many would have been aware only of the benefit. I believe that only a minority of users know what makes it possible for their use of social media to be free of charge to them. I think the others would be well advised to read Facebook’s own Data Policy,Footnote 20 which succinctly answers such questions as, “What kinds of information do we collect?” and “How do we use this information?” Some readers may find the answers to these questions an astonishing revelation. Other readers will not be surprised at all, but many may well begin to feel anxious.
Facebook users are allowed to see what has been retained from the personal data they enter, but until recently not any inferences made from the data to create a profile of them.Footnote 21 Nor are users allowed to see the algorithms used to select the different messages to be shown to different profiles.Footnote 22 However, recently, in response to complaints about privacy violation, Facebook has introduced two privacy mechanisms, as described in at least two papers in 2018.Footnote 23 One mechanism is a button asking, with reference to an advertisement received by a user, “Why am I seeing this?” Another is an attribute, or list of attributes, inferred by Facebook to belong to a given user, with some explanation of the inference. The median number of attributes inferred by Facebook in the tests run by one study was 310. But both of Facebook’s innovations have been judged by the articles in question to be of distinctly limited value. It is said that Facebook has provided to advertisers a choice of 200,000 attributes they might want to target in their advertisements, and that advertisers have been known to choose up to 105 attributes at a time for targeting.Footnote 24 But despite these numbers, as regards the “Why am I seeing this?” button, Facebook does not often list more than one attribute to explain why a given advertisement is being seen. The attribute chosen tends to be one widespread among users, so that more intrusive attributes, such as family and relationships, including “open relationships,” are not so likely to be revealed.Footnote 25 Facebook does, on the other hand, provide an “Ad preference page” on which it lists all the attributes it has inferred about a user. These include especially the user’s interests, which may have been inferred from the user’s expression of “likes,” from interest shown by investigating particular advertisements or advertiser web pages, or from downloading particular “apps” (applications).
Also listed is behavioral and demographic information about the user. It is not, however, specified which page was “liked” or which advertisement was investigated. So the explanation of attributes inferred is rather vague.Footnote 26
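The kind of inference, and the vague single-attribute explanation, described above can be illustrated schematically (pages, categories, and wording are all invented for illustration): interests are tallied from “liked” pages, and the explanation offered reports only the commonest, and hence least revealing, attribute.

```python
# Illustrative sketch (invented pages and categories) of inferring
# interest attributes from "likes", and of explaining an ad with only
# the single most widespread attribute, as the cited studies describe.
from collections import Counter

PAGE_CATEGORIES = {          # hypothetical page -> interest mapping
    "AlpineGearCo": "skiing",
    "PowderWeekly": "skiing",
    "CityCyclingClub": "cycling",
}

def inferred_attributes(liked_pages):
    # Tally an interest attribute for each liked page.
    return Counter(PAGE_CATEGORIES[p] for p in liked_pages)

def explain_ad(liked_pages):
    # Report only one attribute: the commonest, hence least revealing.
    attrs = inferred_attributes(liked_pages)
    top, _ = attrs.most_common(1)[0]
    return f"You are seeing this ad because of your interest in {top}."
```

For a user who liked all three pages, the explanation would mention only skiing, and never say which page prompted the inference, mirroring the vagueness the studies complain of.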
Besides targeting certain attributes, advertisers since 2012 have been allowed to target individuals uniquely identified by their email addresses, phone numbers, names, and zip (or postal) codes.Footnote 27 But it is not revealed to the user which of these identifying features has been used.Footnote 28
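The matching of such identifiers is commonly described as being done through hashing: both sides normalize and hash the email addresses, and only the hashes are compared, so that targeting requires no exchange of plaintext lists. A minimal sketch (addresses invented):

```python
# Sketch of identifier matching as commonly described for advertiser
# "custom audience" lists: both sides hash normalized email addresses,
# and the platform intersects the hashes with its own users' hashes.
import hashlib

def norm_hash(email: str) -> str:
    # Normalize (trim, lowercase) before hashing, so formatting
    # differences do not defeat the match.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The platform's side: hashes of its registered users' addresses.
platform_users = {norm_hash(e): e for e in ["ann@example.com", "bob@example.com"]}

def match_audience(advertiser_list):
    # The advertiser uploads hashes; the platform returns its matches.
    hashes = {norm_hash(e) for e in advertiser_list}
    return sorted(platform_users[h] for h in hashes & set(platform_users))

print(match_audience(["Ann@Example.com ", "carol@example.com"]))
```

The user on the platform side is still uniquely singled out; as the text notes, what is withheld is only the knowledge of which identifier did the singling out.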
When Facebook draws personal data from data brokers, using in the United States the four firms Epsilon, DLX, Experian, and Acxiom, it does not specify any inferred attributes.Footnote 29
One extra transparency, however, is that in the “newsfeed” which Facebook feeds to its different users, it does distinguish advertisements by marking them as “sponsored.”Footnote 30
I believe that, at least for profiles that can be used for political purposes, the law should require users to be allowed to see the inferences made about them and the various resulting profiles, and it should be made clear to all users that such data and inferences are being used to make profiles of them, and for what purposes.
I also believe it should be made easy for suitable national watchdogs, such as the UK’s Information Commissioner’s Office and Electoral Commission, to see who is purchasing personal profiles for advertising or propaganda, and the advertisements or propaganda should carry information on who is sponsoring them. On August 2, 2019, Jim Waterson in The Guardian Footnote 31 alleged that this was evaded in propaganda presented on Facebook by Lynton Crosby’s Australian lobbying firm, CTF Partners. According to some of their past and present employees, along with internal documents, the firm was alleged to work initially by creating a Facebook page purporting to be an independent news source, but containing what was called disinformation on behalf of its paying customers. The customers were said to include coal and tobacco companies, opponents of cyclists, political advocates of an extreme form of British exit (“Brexit”) from the European Union, and one side in a war with a strategy of killing civilians. According to this account, the Facebook page was given names such as “Middle East Diplomat” to suggest impartiality, or “Make Greens Honest” to query their reception of subsidies for ecological practice. It could then present what looked like a grassroots movement of support, which had in fact been coordinated by employees of CTF Partners. Facebook was further alleged to waive its protection against “coordinated inauthentic behavior,” if the coordination was made under the name of one person who could be called a “business manager.” Some messages of support reportedly came in from personal email accounts, others from encrypted email accounts to avoid being traced, some bearing only initials, not names. The names of some CTF writers were reported to be known to Facebook, but not passed on to the public. 
According to Waterson’s report, the resulting propaganda was then targeted at millions of people, using Facebook’s targeting tools, including the personal profiles of supporters and Facebook’s look-alike groups, mentioned above.
VI. The Need To Sell Advertising Space within Sensational or Extremist Webpages Prioritizes Extremist Politicians and Glamorizes Violence
A different kind of problem, also serious, comes from social media selling advertising space to advertisers and placing their advertisements within sensational or extremist content, which is more widely read, so that the media can charge more. Sensational content can be supplied on extremist websites hosted by the media. One example supplied by Private Eye Footnote 32 concerned advertisements placed by Google, not Facebook, on extremist websites which in some cases supported causes opposite to those of the companies who were advertising. The extremist sites were allowed by Google to gain income when readers visited the advertisements they hosted.
Sensation can be produced by messages of hate, intimidation, indecency, descriptions of self-harm and suicide—some of which have further suicidal consequences—or by depictions of scenes of terrorist murders, which incite further terrorism. Sensational information may be collected from sources that are themselves putting out false information. One effect that has been noticed in political news on social media during election campaigns is that candidates who looked to the middle ground and sought consensus gained fewer followers than polarizing candidates who were emotionally exciting. If so, the social media, even if unintentionally, have favored extremist election results.Footnote 33 Another effect is glamorizing violence and self-harm.
VII. Why It Is Difficult for Social Media To Remove Genocidal Content, Appeals to Burn Members of Racial Groups, Scenes of Mass Murder, and Other Extremism
There are genuine difficulties for social media in removing content when it becomes too extremist, even when they try. This makes the methodology dangerous even when no harm is intended. The United Nations accused social media, and Facebook in particular, of playing a significant role in the genocide of the Rohingya people in Myanmar, by hosting without correction the propaganda against those people, which facilitated the subsequent genocide.Footnote 34 A campaign by Jew-haters spread across certain media asking how to burn Jews. Particularly horrifying was the live streaming on Facebook on March 15, 2019, shared by Twitter and Google’s YouTube, of fifty victims, including children, being killed in a New Zealand mosque by a gun-wielding hater of Muslims, who deliberately streamed his own action to the media. What were the problems in taking such content down?
It is often hard to have enough editors of content. When Facebook hosted the propaganda that was cited as playing a significant role in subsequent genocide against the Rohingya people, it at first had no staff in Myanmar who spoke the language to serve as editors of the propaganda, although it later brought some in. That, at any rate, was reported by the U.S. Public Broadcasting Service television program, Frontline’s The Facebook Dilemma, on October 25–26, 2018. It is, in any case, hard to have enough editors of content skilled in the languages of different countries, and in Nigeria in 2019, Facebook was said to have only four.Footnote 35
The use of web robots to remove undesirable content may sometimes seem sensible, because robots can pick out certain undesirable words faster than humans. Yet in order to correct the recent campaign by Jew-haters asking how to burn Jews, Facebook is said to have brought in more human editors instead.Footnote 36 But how are enough humans to be employed? In Germany,Footnote 37 to avoid new German fines of up to 50 million Euros (roughly £44 million), 500 employees of Internet platforms have been enlisted to edit platform content. But can this practice be followed in all countries?
Another problem is that editing out content involves moral understanding and does not lend itself to reliable general rules that can be taught to humans or programmed into web robots. An example of this difficulty was Facebook’s temporary removal of the famous image of a naked nine-year-old girl running away in the Vietnam war, her back burnt by napalm, an important illustration of the horror of that war. There was nothing sexually indecent about the image, but it was wrongly treated as indecent, and removed on that ground.Footnote 38
Frontline’s The Facebook Dilemma recorded another problem: the experience of Max Schrems when he asked, through the Data Protection Commissioner in Ireland, for his personal data to be removed. What he found was that the data were stored in so many places that they were automatically retrieved and reloaded after initial deletion.Footnote 39 This reloading of data also affected the very slow removal by Facebook, Twitter, and Google’s YouTube of the terrorist’s video of fifty people being killed by his gunfire in a New Zealand mosque on Friday March 15, 2019. In the first twenty-four hours, Facebook removed 1.5 million copies of the video, yet copies continued to circulate,Footnote 40 even though the display was an encouragement to like-minded terrorists. It is an unsound practice for social media to be in control of editing requirements in cases like this.
VIII. Excuses: Freedom of Speech, Bringing People Together, Only a Channel for Speech of Others
We have already noted one defense put forward for the funding methodology: that personal data are not shown to advertisers. But there are others. Mark Zuckerberg has sometimes defended Facebook’s practices by saying that free speech requires them. In Chapter 2 of my forthcoming book, I argue that legal prohibition of speech should be an exception, but that not all speech is good. The case made by John Stuart Mill for the value of free speech, as discussed in the introduction above, casts doubt on its value when it is used in a way that interferes with its benefits. In Chapter 1 of my forthcoming book, I found similar benefits presupposed by centuries of Mill’s predecessors. I saw the benefits Mill enumerates as depending on the use of discussion to learn new truths from each other. Our learning the truth, however, is not a central purpose of all social media companies. Search engines can indeed be useful. But if funding depends on promoting advertising or propaganda with too little attention to its source, free speech will most privilege those with deep pockets, and it will not serve the value that Mill ascribed to free speech: enabling us to learn new insights from each other. According to Sacha Baron Cohen,Footnote 41 Mark Zuckerberg has also stated that one of his main goals is to “uphold as wide a definition of freedom of expression as possible.” This would surely exclude a definition that connects freedom of expression with human benefit. And indeed, according to Baron Cohen, Mark Zuckerberg did not think that Facebook should remove posts denying Hitler’s genocide of six million Jews between 1941 and 1945.
Mark Zuckerberg has also presented it as a central merit of Facebook that it brings people together. But there are now other services that do that, and it needs to be considered whether the right people are brought together and the right people kept apart. I suggested above that social media can keep people apart wrongly, dividing them into separate echo chambers on the basis of personal profiles, when they should be together. This is the very opposite of what John Stuart Mill saw as the central benefit of free speech: having people discuss and learn from different viewpoints. But equally, it is not good to put victims together with persecutors or murderers, or propagandists plying disinformation together with the voters whom they target.
Social media have sometimes supported publication of bad content on the basis of its ability to attract, by appeal to Section 230 of the U.S. Communications Decency Act, which says that a platform is not responsible for the content that passes through it.Footnote 42 But Facebook and many other media are not a mere passage for whatever passes through. Facebook deliberately feeds its users, as the name newsfeed implies, with content—sometimes sensational content—and it is said to be willing, in return for payment, to give higher prominence to an item. If so, it cannot disclaim responsibility for the content its users receive. Google also arranges content in an order, but for some purposes this is desirable.
IX. The Difficulty of Basing Reliable News on Content Designed for Attraction
Given the difficulty of basing reliable news on content designed to attract users, it is alarming to read a report from the Pew Center that 62 percent of U.S. adults get at least some of their news from social media, and of these adults, 64 percent from just one, most commonly Facebook.Footnote 43 More recently, it has been claimed that in twenty-six countries, more than half the population in 2016 were using social media as a source of news, and for more than a quarter of the young in those countries it was the main source of news.Footnote 44
X. The Ineffectiveness of Company Fines against Monopoly and Maximized Profits
The monopoly power of the biggest social media is increased by what further organizations they own or work with. Facebook owns WhatsApp and Instagram, Google owns YouTube. There are laws against monopoly because it suppresses innovative competition. Fines have been imposed, not only for monopoly, but for a variety of the practices discussed. But, once achieved, monopoly, like other methods of maximizing profit, has the further effect of making it affordable to contest or pay fines and, hence, to continue with unsatisfactory practices.
The European Union in June 2017 imposed a (subsequently contested) fine of 2.4 billion Euros on Google for exercising its monopoly.Footnote 45 The European Union’s Competition Commissioner, Margrethe Vestager, announced on July 18, 2018, the imposition of a (subsequently contested) monopoly fine on Google of 4.34 billion Euros for pressuring manufacturers to prioritize the Google Android phone system over those of any other companies.Footnote 46 In May 2017, the European Union fined Facebook 122 million dollars for “misleading” statements about Facebook’s takeover of WhatsApp.Footnote 47 France has also levied a tax on digital monopolies,Footnote 48 and in the UK the leader of the Labour Party Opposition suggested that the country should levy a monopoly tax on tech firms that stifle competition.Footnote 49
XI. Swinging Votes by Using Personal Profiles To Target Voters with Disinformation in Two 2016 Voting Campaigns
In the 2016 U.S. presidential election campaign of Donald Trump against his rival Hillary Clinton, one Facebook advertisement, targeted at voters revealed to have strongly Christian views, said “Press ‘Like’ to help Jesus win.”Footnote 50 It showed a white-colored Jesus Christ supporting Trump opposite the Devil, colored brown, supporting his rival for the presidency, Hillary Clinton.Footnote 51 Voters discovered to be African American and their supporters were targeted with fake news claiming that Hillary Clinton’s description, twenty years earlier, of young members of drug gangs as super-predators, was directed against African Americans.Footnote 52 It was further falsely claimed that she was receiving money from the Ku Klux Klan, an organization in the United States known for murdering African Americans.Footnote 53 Another fake slogan, “The Pope endorses Trump,” ended up in Facebook’s newsfeed.
These were only a few of the disclosures about the U.S. voting campaign of 2016. A major contribution to the investigation of the UK referendum campaign that year was Carole Cadwalladr’s publication on March 18, 2018, in The Observer, re-published online by The Guardian. It recorded the confessions of a collaborator turned whistleblower, Christopher Wylie, about his own part in targeting voters in the UK referendum campaign on whether Britain should exit from the European Union (“Brexit”). He was mentioned above as having created from Facebook’s personal data, and with its knowledge, personality profiles of users, expanded with the data of the users’ Facebook “friends.” Wylie claimedFootnote 54 he had taken part in founding (abroad for safety) a company, Aggregate IQ (AIQ), to target voters on the Internet. AIQ was alleged to have exceeded the maximum spending permitted in voting campaigns by having smaller amounts channeled through separate, but coordinated, intermediaries. Wylie said that AIQ managed Cambridge Analytica’s technology platform and did the actual targeting of voters on the basis of Cambridge Analytica’s data. The man regarded as the chief strategist of Vote Leave, Dominic Cummings, was described as saying, “Without a doubt, the Vote Leave campaign owes a great deal of its success to the work of Aggregate IQ. We couldn’t have done it without them.”Footnote 55
A further allegation implicated Russia in funding disguised accounts on Twitter for paid suppliers of fake news about Brexit, and in using web robots to multiply the messages. Other Internet platforms had not at that time been investigated, but an initial 419 disguised Twitter accounts were identified as coming from Russia, and 45,000 suspect messages about Brexit had come from a number of Twitter accounts created and then removed in the forty-eight hours just before and just after the Brexit referendum vote.Footnote 56
The role of Russia in UK voting about Brexit caused further anxiety when, on November 4, 2019, the Chair of the UK Parliament’s Intelligence and Security Committee, Dominic Grieve, announced that the then Prime Minister, Boris Johnson, had delayed publication of Parliament’s official cross-party report on Russian interference in the 2016 referendum on Brexit and in the UK’s 2017 general election. It had been cleared by the security services for release, Grieve said, and given by October 17, 2019, to Boris Johnson, who had just been elevated by his party to the post of Prime Minister. Johnson had earlier led the campaign for Brexit before the 2016 referendum, with the hilarious claim painted on his battle bus that the UK had to pay the European Union £350 million per week, which could instead be recouped for the National Health Service. He subsequently delayed publication of the Russian interference report, and on October 29 gained further delay by securing agreement to the holding of a general election, with a view to confirming his position and policy on Brexit, without voters having seen the report.Footnote 57 The delay caused further suspicion and sparked a lawsuit from the Bureau of Investigative Journalism.Footnote 58 But he won the election, and it remains to be seen when the report will be published.
The UK Information Commissioner, in one of her reports, Democracy Disrupted of July 11, 2018, explained at section 3 why her investigation focused not only on the social media companies but also on the political parties: in the UK it was the political parties that commissioned advertising to voters on social media.
To return to the swinging of votes in the U.S. presidential election campaign of 2016,Footnote 59 it is saidFootnote 60 that in May 2018 Facebook admitted that the firm Cambridge Analytica had accessed 87 million files of data, predominantly from its U.S. users, but including the data of a million users in the UK. Further, according to this account, it had allowed access to users’ data on at least two hundred apps. It was also said to be sharing users’ data with Apple, Microsoft, Amazon, and nearly sixty other Internet device makers, enabling them to download personal data even when users had denied Facebook permission to share this information with third parties. The data explicitly mentioned here were data on the behavior of users, and it was mentioned that data from tracking the movements and location of people can also be traded. Facebook and Google were said also to have embedded their advisers in Trump’s 2016 presidential election campaign team, an offer that Hillary Clinton’s rival presidential campaign declined.Footnote 61
Wylie’s whistleblowing on March 18, 2018 proved a new watershed, and the head of Facebook, Mark Zuckerberg, a U.S. citizen, was called, and agreed, to testify personally before the United States Congress about the estimated 87 million U.S. citizens whose personal data on Facebook had been acquired by Cambridge Analytica for purposes of interfering in the 2016 presidential election campaign.Footnote 62 He testified for nearly five hours on April 11, 2018 before the Senate’s Commerce and Judiciary Committees,Footnote 63 and the next day for four hours before the Energy and Commerce Committee of the House of Representatives. The House asked about the extraction, through abuse of phone or email look-up features, of the personal data not of 87 million individuals, as before, but of more than a billion, and about the ability to acquire personal data about an individual from that individual’s friends on Facebook, even if the individual did not use Facebook.Footnote 64
In the week starting December 16, 2018, new reports to the U.S. Senate Intelligence Committee were announcedFootnote 65 about violations of privacy not previously acknowledged to the Senate. They detailed the extent of the help given by Facebook and other social media to alleged Russian interference with the 2016 U.S. presidential election, and the extent to which Facebook and others had failed to acknowledge this help. Although Russia was said to have used every major social media platform, Facebook was reported to be the most effective at targeting U.S. conservatives and African Americans. It was said that 99 percent of all uptake of its targeting, in such forms as expressing “likes” (39 million of them) or implementing “shares” (31 million) to copy the posts to others, came from twenty Facebook “pages” controlled from Russia with names misleadingly implying support for America and for African Americans. Despite the major role of Facebook, its ever-growing subsidiary Instagram was judged in the reports to have gained more impact or “engagement” and to have contained more nefarious content.
The subsequent reports to the Senate Intelligence Committee claimed that Facebook had revealed to the Senate by April 2018 only some of the Russian posts and accounts, while Twitter and Google had submitted their information to the authors of the reports in a form very difficult to analyze. This led the senior Democrat on the U.S. Senate Intelligence Committee to say that new laws were needed to tackle a crisis around social media.
X. European Development of Privacy Law for the Internet and its Effect on the UK and Elsewhere
In 2014, the European Court of Justice, to which at that time the UK was subject, accepted that some earlier data-retention regulations violated the privacy principles of Articles 7 and 8 of the European Union’s Charter of Fundamental Rights, and new legislation was required of the UK, which it passed that year. The earlier regulations had not sufficiently required, for example, that retained data be connected to a person who might throw light on crime or serve the purposes of protecting public security, that there be rules about who might be allowed to investigate the data and under what procedures, or that there be objective justification for the duration of data retention.Footnote 66
Although Federal Germany had an early Data Protection Act in 1977, influenced by a 1970 data protection statute in the German state of Hesse,Footnote 67 it is the European Union that has recently led the way in developing privacy law in Europe in relation to personal data on the Internet. Its 1995 Data Protection Directive led to the UK’s 1998 Data Protection Act. The European Union’s new General Data Protection Regulation came into force on May 25, 2018 for all countries in the Union, at that time including the UK.Footnote 68 It requires those who process the personal data of residents of the Union, on pain of fines of up to, and in some cases over, 20 million Euros, to notify specified authorities of breaches of personal data within seventy-two hours of discovering them. Internet users may obtain from controllers electronic copies of any data held on them, and confirmation of where and for what purpose they are held. They may ask, with some restrictions, for data to be erased and processing halted,Footnote 69 although implementation of the provision for erasure appears still to have problems, judging from the experience of Max Schrems, cited above, whose deletions were immediately replaced. The remedy is in any case limited insofar as the data may already have been copied and used many times before action could be taken. Separate provision is made for personal data on suspected criminals or terrorists to continue to be collected and transferred by competent authorities.
In the same year, 2018, the UK, then still a member of the European Union, complied by introducing a new UK Data Protection Act 2018. Similar provisions for data protection may spread further. Canada’s corresponding PIPEDA provisions have been deemed adequately compliant with Europe’s, but there are proposals for revision to make them fully compliant, adding, for example, the right to have personal data erased.Footnote 70 Japan also has an act for the protection of information accepted as adequate by the European Union.Footnote 71
The European Union had already been questioning the rules in the UK, then a member of the Union, governing government retention of the personal data of citizens for security purposes. In 2013, Edward Snowden revealed that in the United States the National Security Agency (NSA) was collecting the phone records of millions of unsuspecting Americans on the basis of secret court orders, and that the corresponding British organization, GCHQ (Government Communications Headquarters), was doing the same with the knowledge of the British Government. In response to a legal challenge initially brought by two British members of Parliament, the European Union’s highest court ruled in December 2016 that “general and indiscriminate retention” of emails and electronic communications by governments was illegal.Footnote 72
The UK’s conformity, however, was possibly compromised when in November 2016 it approved a new Investigatory Powers Act, requiring businesses inside and outside the UK to retain personal data on their customers. The Act required Internet providers to retain for twelve months a record of all websites visited by their users, and phone companies to record information on all calls, and to make these records accessible to various government agencies, to the police, and to the security services. The agencies were given new rights of access, previously unauthorized, to computers and phones, and the cell phones and web records of journalists could, with a judge’s agreement, also be inspected. It was laid down that UK data could be accessed also by U.S. security services. But as of September 9, 2017, this Act was itself reported to have been referred to the European Court of Justice to consider its compatibility with European law.Footnote 73
I have had a number of occasions to cite the legal principles of the European Union and the verdicts of its courts as the most helpful for the problems of free speech under discussion. The many European regulations have caused irritation to the UK government and to many UK citizens. But when the country departs from the jurisdiction of Europe, citizens risk losing some very important protections in the area of privacy and other rights. Ultimately decisions should not be left to the sole discretion of companies that profit from providing a platform, nor to purely political interests.
Although in March 2017 the founder of the World Wide Web, Tim Berners-Lee, opposed policing of the Internet by the European Union, preferring regulation to be left to the Internet companies,Footnote 74 by March 2018, after numerous revelations, he was calling for regulation, legal if necessary, because of the concentration of power locked into a few platforms, including Facebook, Google, and Twitter, and their buying up of other companies.Footnote 75
XI. Response in the United States to Data Protection Law
In the United States, companies will have to deal with citizens of the European Union in accordance with Europe’s law. But in addition, a 2020 paperFootnote 76 finds that some of the individual states of the United States are interested in the fair processing of data, even though the Federal Government is not taking it up. California was the first U.S. state to pass a law expanding consumer privacy rights, in 2018.Footnote 77 Since then, the paper claims, fourteen more U.S. states have introduced, or passed, data protection bills,Footnote 78 although it warns that in the United States privacy legislation never has the weight of freedom of expression, which is guaranteed by the First Amendment to the U.S. Constitution, whereas privacy can be traded against other interests.Footnote 79 The commercial sector in the United States would also welcome data protection for a different reason: the European Union will not share data with countries which it does not judge to have adequate data protection. Some companies have sought to get around this by binding themselves strictly with the U.S. government, or with the U.S. Federal Trade Commission, to abide by the European privacy requirements.Footnote 80 We shall see below that Facebook is also said to be losing some of its popularity with users in the United States because of fears about privacy.
Despite the absence of U.S. Federal privacy law judged adequate by the European Union, the paper draws attentionFootnote 81 to the influence on other countries of a much earlier U.S. report of 1973 by an advisory committee to the Secretary of the U.S. Department of Health, Education and Welfare. This recommended six “Fair Information Practices” (or FIPs) in record-keeping systems, following, and possibly influenced by, the UK’s Younger Report of 1972 on the handling of information by computers. The principles were restated and revised in 1980 in Europe by the Organisation for Economic Cooperation and Development.
XII. Trust
The UK has special safeguards on non-digital political advertisements, which before an election are not allowed on radio or TV, except in the rationed election broadcasts permitted to the main political parties at election time. This safeguard, however, is currently undermined by the availability of political advertisements on the Internet, and has accordingly been questioned.Footnote 82 The British Broadcasting Corporation (BBC) is tightly controlled in its procedures at all times. It is obliged to seek, several times a day, the best version of the information it is giving.Footnote 83 And it is required to represent both sides of a case, where the case is not clear. It is not surprising that in a recent survey of UK news brands, BBC news has the highest level of trust among UK respondents, for respondents both over and under thirty-five years of age, and the highest level for both left- and center-leaning respondents, though not for the right-leaning.Footnote 84
By contrast with the BBC, only 23 percent say that they trust news on social media.Footnote 85 It might be thought that this lack of trust is a safeguard. But lack of trust does not imply disbelief. Rather, lack of trust may leave one wondering what is true and harboring suspicions equally of the true and the false. That is not a satisfactory situation.
Facebook, however, remains far ahead of any of the other social media in the Reuters Institute’s periodic checks, between 2014 and 2018, of popularity as a source of news. Nonetheless, popularity can change. WhatsApp, which Facebook bought up, and which, like many other apps, allows restricted circles of users, has been gaining ground. Since 2016, it has risen on both news charts and all-purpose charts to second place behind Facebook, and is now 20 percent behind it. This is still well behind, but what is more interesting is the reasons that people give for discussing news on WhatsApp rather than on Facebook itself. They say:Footnote 86 “I use Facebook less, because I don’t want to have a close contact to many of my ‘friends’ there. These friends on Facebook are not important to me any more. With my inner circle of friends I communicate via WhatsApp.” “Even though you may disagree with your friend on WhatsApp, friends are able to keep that good level of respect, everybody shares their opinion, and anyone who disagrees can joke about it. It’s a lighter mood to debate news with friends on WhatsApp than on Facebook.” “Somehow WhatsApp seems a lot more private, … whereas in Facebook for some reason it just feels like it’s public. Even if you’re in Messenger.” (Facebook Messenger is Facebook’s own alternative messaging app.)
Local Facebook groups are said to have been successful in the UK general election of 2017, so long as local issues remained prominent. But by November 15, 2019, it was reported that younger voters were leaving such groups because of extreme views, bullying, and toxicity (believed to be coming in from outside the local area), that they preferred Twitter, and that they were abandoning Facebook also because they were put off by the number of political advertisements.Footnote 87
According to John Gramlich,Footnote 88 Facebook is losing its popularity with some users in the United States because of fears about privacy. He claims that 42 percent of American users took a break from checking their account for several weeks or more, and 26 percent deleted the app from their phone, in the year preceding 2018.
XIII. Facebook’s Decision-Making Processes in the Television Program, The Facebook Dilemma
A new perspective on Facebook’s policy-making was provided by the two-part television program on the Public Broadcasting Service, The Facebook Dilemma, mentioned above as having been created by Frontline. It is important to add that Frontline is an organization funded by charities, many of them devoted to quality journalism. The program involved interviews with Facebook executives, including the chief executive Mark Zuckerberg, as well as former executives who had left, and people who had asked Facebook for particular remedies. It was also briefly reviewed by Sue Halpern in the New York Review of Books, in “Apologize Later.”Footnote 89 That latter title was taken from a TV shot of Mark Zuckerberg saying, “It’s more useful, like, to make things happen and then apologize later.”
In the television program, there seemed to be a pattern of people alerting Facebook to its being used for wrongdoing, and of Facebook doing nothing about it. The program’s interpretation was that the profit motive accounted for reluctance to reduce the use of Facebook, whoever the user. Among the examples offered was apparent noncompliance with a settlement concerning privacy with the U.S. Federal Trade Commission.Footnote 90 The Defense Department organization DARPA also became concerned about national security implications. But Facebook was said not to have investigated further when it was used by Russian sources for discrediting the Ukrainian government, at a time when that government feared a possible Russian invasion of Ukraine.Footnote 91
As regards the genocide mentioned above against Rohingya Muslims in Myanmar, according to the program, Facebook had received warnings for years from David Madden that it was being used to denigrate the Muslim population in the eyes of the Buddhist majority, for example with caricaturing cartoons and descriptions of them as rapists and terrorists. Despite Madden’s warnings, Facebook did not remove any content, and when the homes of Rohingya Muslims were burnt, many of the Rohingya killed, and 150,000 displaced, the United Nations, as already mentioned, said that Facebook had played a significant role in the genocide. Sue Halpern’s review of the television program mentioned Facebook as also being used to support the massacre of Tamils in Sri Lanka.Footnote 92
In Egypt, the program described Facebook as being used to support the democratic revolution that ousted the ruler Mubarak in 2011, with a young Egyptian Google executive, Wael Ghonim, playing a prominent role. But when the counter-revolution took place and restored a different strongman, it equally used Facebook for its victory, and denigrated Wael Ghonim as an Israeli spy. Facebook profited whichever party made use of it.
In the Philippines, the program reported, President Duterte had Maria Ressa, a critic of his war against drugs with its extra-judicial killings, trolled with hate mail on Facebook through twenty-six accounts with up to ninety posts an hour, one calling for her death by repeated rape. But Facebook did not, at her request, take down any accounts or intervene, even though it employed her as a fact-checker.Footnote 93
Frontline’s program gave further indications of why Facebook was unlikely to rectify misuses of the sort described, even when it could. Everyone in the company accepted that all decisions would be made by the founder, Mark Zuckerberg, who held the majority of shareholder votes. The advertising model of profitability was revamped in 2008 after consultations by the Chief Operating Officer, Sheryl Sandberg. Mark Zuckerberg also continued consulting some colleagues about how best to increase revenue further, as seen in his leaked emails of 2012, mentioned above as known to a Parliamentary committee. But as sole decision-maker he would presumably not have had the salutary constraints of having to seek a consensus or address an alternative viewpoint on revenue. Nor did he have to respond to any entreaties. It was a long time since Harvard had been able to make him take a website down.
It is true that he answered very confidently the questions put to him by non-experts when he was called before Congress, while the subject was new to them.Footnote 94 But he did not normally have to answer experts like Kara Swisher, the expert on privacy rights who appeared on Frontline’s program. She courteously asked him to explain some of his controversial policies. When that happened, the session was interrupted by his feeling unwell. He said he had flu, although her subsequent comment was that he seemed to be having a panic attack. Either or both could be true, but what struck me was something different: he was not used to speaking to an expert courteously asking him to explain to a large audience why he was doing something. Without the ability to explain openly in reply to salutary, but courteous, questions, and the regular practice of doing so, it does not seem likely that such policies can be properly re-thought. Even the man appointed as Facebook’s head of privacy, Sandy Parakilas, was shown in the program explaining that he left in 2012 because he found that the company had no interest in users’ privacy.
To mention one more thing that the program showed: company apologists were all seen giving similar excuses, as if they had been rehearsed, for the alleged misuses that the program presented. These problems, it was said, cannot be solved, only contained. The bad can only be minimized. This is no doubt true in some contexts. But it is an inadequate defense for playing a significant role in genocide, for lending support to major massacres, or for jeopardizing the validity of the democratic electoral system. It is not sufficient to say of such effects that one can only minimize their occurrence.
XIV. Other Contexts of Decision-Making
The access to alternative points of view in decision-making, which Mill thought so important a benefit of free speech, seems also to be curtailed at the Facebook campus at Menlo Park in the San Francisco area. Visitors are not allowed in unless accompanied the whole time by a friend who works there, and the main workforce is expected to remain on the campus, where they have their residence, transport around campus, a big choice of places to eat, and every facility they are expected to want.Footnote 95 Indeed, an enthusiastic visitor, “Heather,” accompanied by a friend in 2018 or earlier, described the campus as “the happiest place in Tech.” But the task set for the workforce, the unauthorized capture of electronic data, or “hacking,” was reinforced throughout the site by such names as “Hacker Way,” “Hacker Square,” and “Hacker Company,” by stonework spelling out the word “Hack,” and by the renaming of October, the month of her visit, as “Hacktober.” Of course, hacking is sometimes desirable. We want governments to hack terrorist communications, and we may want others to discover by hacking whether governments and companies are illegitimately hacking our innocent communications without due reason. But the unqualified celebration of hacking might well have deterred members of the Facebook workforce from making proposals on the ethics of hacking. In fact, however, it was reported on November 7, 2019,Footnote 96 that in late October 2019 hundreds of Facebook employees had written to Mark Zuckerberg, asking for changes to how the company handled political ads, including limits on the targeting of very small groups. It remains to be seen what changes, if any, may result. In this last case, it may turn out that communal participation will have made a difference to final decisions, but the participation does not appear to have been invited.
The lack of communal participation in decisions may have played a role in another context, when Mark Zuckerberg did not reply personally to invitations to attend the oral evidence sessions of the UK House of Commons Digital, Culture, Media and Sport Committee, although that committee had brought together an International Grand Committee from nine other legislatures across the world. Mark Zuckerberg sent representatives to the committee, but it was commented in that committee’s Final Report of February 14, 2019, at section 30, that the Facebook representatives had not been properly briefed on crucial issues, did not answer the questions, and did not subsequently send the answers they had promised. The committee reported that it believed the evasions deliberate. But it may have been that Facebook deliberately left its representatives out of the picture. An engineer representing Facebook before the committee decently admitted that he felt ashamed on hearing of such things as Facebook’s refusal to remove posts stirring religious tensions in Sri Lanka.Footnote 97 Another representative, when asked why he had been selected to represent Facebook, replied that he had merely volunteered.
XV. The Need for New Legislation
I have already mentioned some areas in which new legislation may be needed to require fuller revelation, to users and to appropriate judicial authorities, of how personal data are being collected and used. It was argued that Facebook does not as yet reveal enough to users about how their profiles are constructed.
In addition, legislation would be needed to require social media companies to identify, if asked by suitable judicial authorities, both the advertisers who have paid them for illegal advertisements targeting groups matching certain profiles, and the individuals to whom they have sent such advertisements. In the UK, the Information Commissioner’s Office is already using statutory powers to seek such information. But social media companies should be under an obligation to supply it to such bodies.
The buyers of illegal advertisements, as well as the sellers, should be made legally liable, especially in cases to which the UK Information Commissioner drew attention, in which a political party or agent buys from the social media the targeting of deceptive messages at voters, in order to swing an election. In these cases, it is the political parties or agents who initiate the manipulation.
Messages of hate or intimidation sent to victims on social media not by the media themselves, but by agents making use of the media, are not so easy for the media to detect and remove. But legislation can at least require the media to show that they have taken reasonable measures for their detection and removal.
As regards interference with voting campaigns, the recommendations of UK watchdogs also need to be given effect. The Information Commissioner’s Office, the watchdog with the most powers, set up by Act of Parliament as early as 1998 and sponsored by the Department for Digital, Culture, Media and Sport, was wise to put pressure on political parties as well as private companies when protecting electoral procedures, for reasons given above. It not only made requirements of political parties, but also arranged to audit their compliance by a named date. Its powers also enabled it to collect substantial records of electoral and other practice by electioneering firms. It can itself issue fines for breaking the law, although at some point it needs to ask the courts of justice to enforce the law.
Another relevant Commission, the Electoral Commission, has been starved of powers, and has had to ask Parliament for more. The UK Government needs to address this and bring legislation before Parliament for strengthening it, but so far appears not to have acted.
We saw that the recommendations of the Information Commissioner’s Investigation Update were incorporated in the Final Report of the UK House of Commons Digital, Culture, Media, and Sport Committee, but both await a response from the Executive, that is, from the Prime Minister and the senior ministers who form the Cabinet, all from the ruling party in Parliament. Until they respond, it will not be known what changes in law the Executive might submit to Parliament. The minister for Digital, Culture, Media, and Sport has made some comments, but ministers are often moved from their current posts, so it is still unclear what legislation, if any, can be expected.
In France, by contrast, in preparation for the 2019 elections to the Parliament of the European Union, President Macron proposed the creation of a “European Agency for the Protection of Democracies,” which would offer the services of experts to individual member states seeking to protect their elections. He also proposed banning foreign financing of European political parties.
In all these cases, government has a large role to play, but good proposals will not necessarily turn into law without further support. A central claim of Shoshana Zuboff’s The Age of Surveillance Capitalism is that in some Western countries a campaign of ordinary people is needed to press government into action. There cannot be such a campaign if people do not know the somewhat secretive ways of some social media companies.
XVI. The Need for Enforcement
Legislators also need to consider carefully how to enforce legislation on those trading in an illegal way in personal profiles, whether selling or buying targeted advertisements. As regards selling, compliant social media should be positively encouraged. We have seen, on the other hand, that fines, however large, do not deter companies that are themselves large enough to contest or pay them. The same would be true of compensation for damages sought in lawsuits. And there is in any case no possible compensation for genocide, or for jeopardizing the democratic electoral system. Recommendations were made in the Online Harms White Paper introduced by the UK government in April 2019, with a view to eventual legislation. In section 6, Enforcement, at 6.5, it was proposed that there should be liability not only of companies, but also personal liability of individual members of senior management in social media, for civil fines or even criminal penalties. This could make it hard for noncompliant companies to recruit executives. The burdens of personal liability are bound to be felt in a way that company liability may not be. Individuals are not in the position of companies, which can afford to pay or contest fines indefinitely, and companies may find it harder to cover the costs of their executives as well as their own. Individuals are also less likely to be willing to evade the law by moving to countries that have not ratified the legislation, because they would not be able to return with impunity.Footnote 98 The White Paper referred to personal liability already introduced in financial services.
My attention has also been drawn to personal liability recently introduced in the scandal concerning the alleged concealment of diesel emissions from Volkswagen cars,Footnote 99 and in the case of top executives of Barclays Bank allegedly concealing a payment from Qatar to investors.Footnote 100 Even in the United States, the Consumer Data Protection Act introduced in Congress by Senator Ron Wyden seeks to hold executives personally liable for certain lapses in automated decision systems, although this Act is not thought likely to be passed.Footnote 101
XVII. A Better Way
In a talk I gave on October 28, 2019, I said, “How much better it would be if the leaders of some social media companies powerful enough to influence others voluntarily initiated a reform of practice themselves.” The next day a rival company, Twitter, announced that it would no longer host political advertisements.Footnote 102 This sounded like a welcome, albeit modest, concession, assuming Twitter would still host personal tweets from politicians. But the same day, by contrast, Facebook announced that it would continue political advertisements. Twitter went on to elaborateFootnote 103 that it would ban all electoral candidates, elected officials, and political parties from advertising, but would allow some not-for-profit organizations to promote messages about social issues. This at least attempted the difficult task of distinguishing between different types of communication, as opposed to repeating a single phrase like “free speech.” Twitter’s decision also appeared to be prudent, judging from the report mentioned above,Footnote 104 that younger voters were abandoning Facebook, often because they were put off by the number of political advertisements. This may provide support for the prediction made to me eleven days earlier by a younger participant in an academic discussion that some people might prefer paying a subscription to seeing so many political advertisements.
Reforms were announced also by Google, which said that from February 2020 it would limit advertisers’ access to personal data, to stop advertisers associating individual users with such categories as religion, politics, and sexual orientation. By November 21, 2019, Google had worked out a sample of further distinctions. It would ban doctored images and videos, misleading claims about census records, and demonstrably false claims that could undermine trust in elections or the democratic process. Political advertisements referring to election candidates, political parties, or measures that were the subject of a current ballot would be allowed to target voters only through the use of data about their age, gender, or address, not more sensitive data. The new restrictions would apply to the UK general election called for December 2019 and to the U.S. presidential election of 2020.
There are, however, other areas in which reform is needed. The Financial Times had an article on November 15, 2019,Footnote 105 about reaction to its discovery that some of the UK’s most popular websites were sharing sensitive data such as medical symptoms and diagnoses with companies, including Google.
XVIII. The Continuing Need To Control Political Parties
Unfortunately, in the period leading up to the UK’s general election of December 12, 2019, there was confirmation of the UK Information Commissioner’s earlier stress on the need to control the use of social media by political parties. The Financial Times carried a report on November 21, 2019,Footnote 106 that the Conservative party (it was not said exactly who) had used the trick of doctoring a video to make it appear that a leading member of the Labour opposition party, its later leader Sir Keir Starmer, had been unable to answer an important political question. At election time, again according to the Financial Times report, the governing party had misleadingly rebranded its account on Twitter as an independent fact-checking service. This could enable it to evade Twitter’s recent ban on political advertisements, and it also put the party’s interest in retaining power through the election ahead of one of our few safeguards against the misuse of social media: our ability to rely on the trustworthiness of fact-checks.
The Conservative party was also said, in the same Financial Times report, to have gone on to buy a deceptive website address, www.labourmanifesto.co.uk, in order to present a fake election manifesto implied to be that of the leading opposition party, although the page on which this was shown was still clearly labelled as a Conservative party website. Moreover, the party was said to have used funding to make the fake manifesto (properly labelled by Google as an advertisement) appear automatically as the top search result for any user of Google who searched for “Labour,” on the day that the Labour Party published its actual election manifesto.
As reported in the Guardian,Footnote 107 the two hired individuals named as running the Conservative party’s digital strategy for the UK general election were said to be New Zealanders, and so foreign to the UK, and to have already operated, as foreigners in Australia, a strategy that helped to produce an unexpected victory in the last Australian general election for a climate-change-denying Prime Minister. The two New Zealanders would have been working in the UK under an Australian, Isaac Levido, the head of the governing party’s election strategy and formerly the right-hand man of the Australian election strategist mentioned above, Lynton Crosby. It looks as if foreign election agents can be hired who are not, under existing law, legally responsible for their actions in the country whose elections they influence.