I. Introduction
In one of the masterpieces of contemporary literature, The Prague Cemetery,Footnote 1 Simone Simonini (and his alter ego, the Abbé Dalla Piccola) is a cynical forger who continuously produces and sells fake news. Simonini fakes documents not only for private individuals but also for the police, the secret services and even the State. He shapes history to his liking, transforming it with false documents that are often based on the decontextualisation of existing ones and therefore rooted in true facts. It is the principle of verisimilitude that makes such news plausible.
Eco’s novel shows how reality can be altered, manipulated and even created by words. The protagonist, Simonini, demonstrates this, as does a historical case cited in the text, “The Protocols of the Elders of Zion”, a forgery that spread throughout Europe, was exploited by Hitler and even gained credence among enlightened men such as Henry Ford. Eco describes the process underlying so-called fake news as a real problem for society. Its presence on the Internet, from the most harmless to the most pernicious forms, is growing rapidly and is increasingly difficult to recognise.
Yet Eco’s novel also shows the importance of freedom of expression and opinion as the cornerstone of democratic society.
There is no doubt that disinformation, and particularly fake news, can pose considerable risks to society. Spreading fake news on a major social media platform could cause serious concern among users and generate negative externalities with ripple effects throughout society.
Nevertheless, freedom of expression may come at a price, and perhaps we are paying it now, in this time of crisis, through the spread of fake news on social media. However, it is a price we should be willing to pay without undue hesitation, as the future of the democracy in which we believe is at stake.
Furthermore, is it not true that we would challenge government restrictions on the right to economic freedom, as some are already doing? The same holds true for freedom of movement and other human rights. By contrast, especially in the context of the current health emergency, we are observing how governments address fake news on social media by adopting regulatory policies that can significantly undermine freedom of expression; and this could also be a pretext for limiting freedom of expression in the future.
In this article, I challenge the stringent emergency legislative and administrative measures on fake news that governments have recently put in place, analysing their negative implications for the right to freedom of expression and suggesting a possible solution within public law.
I shall discuss these controversial government policies in order to clarify why we cannot allow freedom of expression to be jeopardised in the process of trying to manage the risks of fake news.
I start with an examination of the legal definition of fake news in academia in order to establish the essential characteristics of the phenomenon (Section II).
Secondly, I assess the legislative and administrative measures implemented by governments at both the international and European Union (EU) levels (Sections III and IV, respectively), showing how they may undermine a core human right by curtailing freedom of expression.
Then, starting from the premise of social media as a “watchdog” of democracy and moving on to the contention that fake news is a phenomenon of “mature” democracy, I will argue that public law already protects freedom of expression and ensures its effectiveness at the international and EU levels through some fundamental rules (Section V).
Lastly, I explore key regulatory approaches and, as alternatives to government intervention, I propose self-regulation and above all empowering users as strategies to manage fake news by mitigating the risks of undue interference in the right to freedom of expression (Section VI).
In so doing, I conclude by offering some remarks on the proposed solution and in particular by recommending the implementation of reliability ratings on social media platforms (Section VII).
II. What is fake news? A definition
What is fake news? This is quite a challenging question for legal scholars, so a preliminary task is to establish a definition and identify some of the key problems raised by fake news in academic debate and at the institutional level. This section aims to do just that.
To this end, it could be argued that the definition of fake news is part of the more general notion of disinformation.Footnote 2 For this reason, when discussing fake news in this article, I will often refer to the broader phenomenon of disinformation.Footnote 3 It should also be borne in mind that fake news is a relatively recent phenomenon and that both legislation and the measures adopted by the authorities, especially the European ones, tend to speak of disinformation as encompassing fake news in this legal context.
There is no universally agreed-upon definition of fake news. Indeed, as we shall see, scholars have suggested various meanings for the term; one recently proposed definition is “the online publication of intentionally or knowingly false statements of fact”.Footnote 4
Academics have also defined fake news as lies, namely deliberately false statements of fact distributed via news channels.Footnote 5 Nevertheless, as has rightly been noted, current usage is not yet settled, and there are clearly different types of fake news that should not be conflated for legal purposes.Footnote 6
For other scholars, fake news should be limited to articles that, through their appearance and content, suggest that they convey real news but that knowingly include at least one material factual assertion that is empirically verifiable as false and is not otherwise protected by the fair report privilege,Footnote 7 even though such articles are often understood simply as fabricated news stories.Footnote 8
Moreover, according to another definition, fake news is information that has been deliberately fabricated and disseminated with the intention of deceiving and misleading others into believing falsehoods or doubting verifiable facts.Footnote 9
The authors of a well-known article on the US presidential election of 2016 define fake news as news articles that are intentionally and verifiably false and could mislead readers,Footnote 10 as well as news stories that have no factual basis but are presented as news.Footnote 11 Incidentally, it has been appropriately noted that fake news is also the presentation of false claims purporting to be about the world in a format and with content that resembles the format and content of legitimate media organisations.Footnote 12
Furthermore, it has been observed that fake news may also purport to describe events in the real world, typically by mimicking the conventions of traditional media reportage, yet it is known by its creators to be significantly false and is transmitted with the dual goal of being widely retransmitted and of deceiving at least some of its audience.Footnote 13 Beyond this, it has been argued that fake news is best defined as the deliberate presentation of (typically) false or misleading claims as news, where the claims are misleading by design.Footnote 14
Lastly, drawing on an analysis of previous studies that have defined and operationalised the term as used in contemporary discourse, and particularly in media coverage, other authors have concluded that fake news consists of viral posts based on fictitious accounts made to look like news reports.Footnote 15
Outside academia, the definition of fake news has been discussed at the institutional level. In the USA, several important events have been held on the definition of fake news, such as the workshop organised on 7 March 2017 by the Information Society Project at Yale Law School and the Floyd Abrams Institute for Freedom of Expression.Footnote 16 During the workshop, news organisations, information intermediaries, data scientists, computer scientists, the practising bar and sociologists explored efforts to define fake news and to discuss the feasibility and desirability of possible solutions.Footnote 17
In general, most participants were reluctant to propose negative State regulation of fake news. The option of using government funding or other economic incentives to indirectly promote legitimate news and information outlets was floated, but this was rightly critiqued on grounds similar to those raised against government intervention to penalise certain kinds of speech, namely the need to prevent the government from determining what is true or worthy. Ultimately, nearly all participants agreed on one overarching conclusion: that re-establishing trust in the basic institutions of a democratic society is critical to combatting the systematic efforts being made to devalue truth. In addition to thinking about how to fight different kinds of fake news, it is necessary to think broadly about how to bolster respect for facts.
In Europe, the European Commission promoted a public consultation from 13 November 2017 to 23 February 2018, which, among other things, afforded some criteria for defining fake news.Footnote 18 In particular, organisations were asked to suggest different criteria for defining fake news, including from a legal point of view. The responses highlighted a wide range of criteria, with the consensus that fake news could be defined by looking at: (1) the intent and apparent objectives pursued by fake news; (2) the sources of such news; and (3) the actual content of news.Footnote 19
Essentially, this consultation arrived at a definition of fake news based on the objectives it pursues. Thus, the concept would mainly cover online news, albeit sometimes disseminated in traditional media too, that is intentionally created and distributed to mislead readers and influence their thoughts and behaviour.
Moreover, fake news can polarise public opinion, opinion leaders and media by creating doubts regarding verifiable facts, eventually jeopardising the free and democratic opinion-forming process and undermining trust in democratic processes. Gaining political or other kinds of influence or funds through online advertising (eg clickbait) or causing damage to an undertaking or a person can also be major aims of fake news. The existence of a clear intention behind the fake news would establish the difference between this phenomenon and that of misinformation; that is, where wrong information is provided owing, for instance, to good-faith mistakes or to failure to respect basic journalism standards (eg verification of sources, investigation of facts, etc.).Footnote 20
Lastly, it should be noted that, in the consultation in question, civil society organisations and news media in particular justifiably criticised the term “fake news” as misleading and as carrying negative connotations (ie it is used by those who criticise the work of the media or opposing political views). Hence, since fake news may be a symptom of a wider problem, namely the crisis of information, the term “disinformation” was suggested as a more appropriate expression.
In the UK, the House of Commons Digital, Culture, Media and Sport (DCMS) Select Committee published its final report on disinformation and fake news on 18 February 2019, at the end of an eighteen-month inquiry. In this final report, the DCMS Select Committee stated that fake news is a poorly defined and misleading term that conflates a variety of false information, from genuine error through to foreign interference in democratic processes. For this reason, the DCMS Select Committee recommended that the government move away from the term “fake news” and instead seek to address disinformation and wider online manipulation by defining disinformation as the deliberate creation and sharing of false or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm or for political, personal or financial gain. Conversely, misinformation should refer to the inadvertent sharing of false information.Footnote 21
Starting from the suggested definition of fake news, the DCMS Select Committee called for: (1) a compulsory code of ethics for social media companies overseen by an independent regulator; (2) additional powers for the regulator to launch legal action against companies breaching the code; and (3) a reform of electoral communications laws and rules on overseas involvement in elections. Finally, it also recommended creating (4) an obligation for social media companies to take down known sources of harmful content, including proven sources of disinformation.
In addition, as regards further institutional actions linked to the aforementioned definition of fake news, the UK government has committed to maintaining a news environment, both online and offline, in which accurate content and high-quality news can prevail. While mechanisms are in place to enforce accuracy and impartiality in the broadcast and press industries, greater regulation of the online space has been considered. The government has developed a range of regulatory and non-regulatory initiatives to improve transparency and accountability in the online environment where information is shared. It has also committed to ensuring that freedom of expression in the UK is protected and enhanced online, working in partnership with industry, the media and civil society institutions.
The brief analysis presented so far shows that the definition of fake news is particularly complex and raises some significant issues.
Essentially, we might reasonably argue that fake news is information designed to emulate the characteristics of the media in form but not in substance. Fake news sources typically lack the editorial policies and procedures that the media use to ensure the correctness and reliability of their information, which, in the majority opinion of academics, is likely to make them unreliable and harmful to the public. Fake news adds to other information flaws, namely misinformation, meaning false or misleading information, and disinformation, meaning false information that is deliberately disseminated to deceive people.
Fake news plays a leading role in several areas, such as politics, health and the economy, which means that governments around the world are called upon to regulate the phenomenon.Footnote 22
In the next two sections, I shall consider legislative and administrative policies in order to discuss and criticise the robust measures governments have introduced to address fake news at the international and European levels (Sections III and IV, respectively). In doing so, the analysis refers specifically to States adopting particularly stringent policies to address fake news on social media during the COVID-19 pandemic, bearing in mind that policies became more stringent during and after the outbreak.
The analysis of the measures implemented by States around the world is not, therefore, based on their policy of freedom of expression, nor on their way of guaranteeing it to their citizens. On the contrary, emergency legislative and administrative measures are evaluated in order to understand whether or not they fall foul of the international and European rules that protect human rights with regard to freedom of expression.
III. International regulatory responses to fake news
All over the world, some governments have issued stringent legislative and administrative measures restricting freedom of expression in order to address disinformation and especially fake news. In this regard, an important factor to consider is that the pandemic has encouraged strict government policies: acting under the threat of loss of life, governments have passed laws that are particularly invasive of human rights in order to manage the risks of online disinformation.
Generally, these policies could trigger “chilling effects”, enabling governments to build a climate of self-censorship that dissuades democratic actors such as journalists, lawyers and judges from speaking out.Footnote 23
It should be noted that, in its latest report on “The state of the world’s human rights”, Amnesty International has emphasised the relationship between freedom of expression and fake news.Footnote 24 The report documented various instances of repression, including criminal sanctions, imposed by governments around the world on journalists and social media users.
In a few countries, particularly in Asia and the Middle East and North Africa, authorities prosecuted and even imprisoned human rights defenders and journalists using vaguely worded charges such as spreading misinformation, leaking state secrets and insulting authorities, or labelled them as “terrorists”. Some governments invested in digital surveillance equipment to target them.Footnote 25 Moreover, public authorities punished those who criticised government actions concerning COVID-19, exposed violations in the response to it or questioned the official narrative around it. Many people were detained arbitrarily and, in some cases, charged and prosecuted. In some countries, the government used the pandemic as a pretext to clamp down on unrelated criticism. In Latin America, disinformation laws that force platforms to decide whether to remove content without judicial orders have been found to be incompatible with Article 13 of the American Convention on Human Rights.Footnote 26
The United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has recently declared that several States have adopted laws that grant the authorities excessive discretionary powers to compel social media platforms to remove content that they deem illegal, including what they consider to be disinformation or fake news. He has also noted that failure to comply is sanctioned with significant fines and content blocking.Footnote 27 The UN Special Rapporteur has highlighted how such laws lead to the suppression of legitimate online expression with limited or no due process or without a prior court order, contrary to the requirements of Article 19(3) of the International Covenant on Civil and Political Rights (ICCPR).Footnote 28 In addition, a trend is emerging whereby States delegate to online platforms “speech police” functions that traditionally belong to the courts. The risk with such laws is that intermediaries are likely to err on the side of caution and “over-remove” content for fear of being sanctioned.Footnote 29
Concerning private companies, although they do not have the same human rights obligations as States, social media platforms are expected to respect human rights in their activities and operations according to the Guiding Principles on Business and Human Rights. In response to the challenges raised by fake news, the main social media platforms have adopted a range of policies and tools, generally banning what they consider to be false news and various deceptive practices that undermine authenticity and integrity on their platforms. Hence, social media giants like TwitterFootnote 30 and FacebookFootnote 31 have adopted specific policies on COVID-19-related disinformation and have also established a third-party fact-checking programme,Footnote 32 as well as a new community-based approach to fighting misinformation,Footnote 33 respectively.
Nonetheless, as we shall see, social media policies are still ineffective, counterproductive and in some cases detrimental to users’ rights.
With this in mind, this section of the article focuses specifically on States around the world that have taken particularly stringent emergency legislative and administrative measures to deal with fake news on social media during the COVID-19 pandemic: China, the USA, Russia, the UK, Australia, Canada, Burkina Faso, Singapore and India.Footnote 34 Furthermore, in order to grasp the impact of these legislative and administrative measures on the right to freedom of expression, the analysis will also cover the social media regulation already in force.
1. China
In Asia, China has passed some of the strictest laws in the world when it comes to fake news.Footnote 35 In 2016, the Cybersecurity Law criminalised “any individual or organization … that fabricates or disseminates false information to disrupt the economic and social order” (Article 12).Footnote 36 In particular, China’s Cybersecurity Law requires social media platforms to republish and link only to news articles from registered news media. Furthermore, in 2018, the Chinese authorities started requiring microblogging sites to highlight and refute rumours on their platforms and launched a platform called Piyao that allows people to report potential fake news. The platform broadcasts real news sourced from state-owned media, party-controlled local newspapers and various government agencies. It also uses artificial intelligence to automatically detect rumours spread by accounts on social media platforms such as Weibo and WeChat, on which it also broadcasts reports from state-owned media.
As reported by a reliable source, since the start of the COVID-19 outbreak the Chinese government has systematically checked information on the disease available online and in the media.Footnote 37 Indeed, briefings and media reports describe how the government and public authorities delayed releasing information on the coronavirus outbreak to the public.Footnote 38 Thus, the Hubei authorities silenced and sanctioned a number of citizens for spreading rumours and disturbing the social order, two of whom were medical experts who had warned the public on social media of the spread of the coronavirus.Footnote 39
The Cyberspace Administration of China (CAC), the government agency that controls Internet infrastructure and content within the Chinese borders, threatened websites, media platforms and accounts with sanctions for disseminating harmful or alarming news related to COVID-19. Additionally, in a CAC report, Sina Weibo, Tencent and ByteDance were flagged for an inspection of their platforms.Footnote 40
More generally, the authorities warned the public of the legal consequences of spreading fake news. A Chinese police report on sanctions for spreading rumours shows that, in January 2020, many citizens received reprimands, fines and administrative or criminal detention.Footnote 41
2. The USA
In the USA, the Honest Ads Act was introduced in Congress in 2017, aiming to regulate online political advertising and to counter fake news. Specifically, the bill would require social media platforms like Facebook and Google to keep copies of ads, make them public and keep tabs on who is paying for them and how much.Footnote 42
In addition, the Honest Ads Act would compel companies to disclose details such as advertising spending, targeting strategies, buyers and funding. It would also require online political campaigns to adhere to the stringent disclosure conditions that apply to advertising on traditional media.Footnote 43
The US National Defense Authorization Act of 2017 approved the establishment of the Global Engagement CenterFootnote 44 to lead, synchronise and coordinate the Federal Government’s efforts to counter foreign State and non-State propaganda and disinformation aimed at jeopardising US national security interests.Footnote 45 By contrast, the Governor of the State of California vetoed a bill that would have created an advisory group to monitor the spread of misinformation on social media and identify potential solutions.Footnote 46
We should also consider the Stigler Committee on Digital Platforms. This is an independent and non-partisan committee of academics, policymakers and experts who have studied how digital platforms like Google, Twitter and Facebook affect the economy and antitrust law, data protection, the political system and the news media industry. In this regard, the Stigler Committee Report addresses the impact of these platforms on various aspects of society and proposes policy solutions for lawmakers and regulators to consider when dealing with the power held by these companies.Footnote 47
It should be noted that, on 20 February 2020, the Trump administration in its annual economic report criticised – especially in the section on antitrust enforcement for the digital economy – the Stigler Report’s proposal for creating a Digital Agency, saying that it “raises a host of issues” and “that the downsides of new, far-reaching regulation need to be taken seriously”.Footnote 48 However, these criticisms have been rebutted,Footnote 49 as they were based on a claim previously advanced by the Charles Koch Institute’s Neil Chilson in the Washington Post,Footnote 50 an argument already refuted in an earlier ProMarket post.Footnote 51
In 2020, during the COVID-19 pandemic, the USA tightened control over coronavirus messaging by government health officials and scientists, directing them to coordinate all statements and public appearances.Footnote 52
3. Russia
On 31 March 2020, the Russian authorities approved amendments to the Criminal Code and to the Code of Administrative Offences that introduced criminal penalties for the public dissemination of knowingly fake news in the context of the COVID-19 emergency, including in the social media environment.Footnote 53
In particular, the Government established criminal liability for the public dissemination of knowingly fake news regarding circumstances posing a threat to the life and safety of citizens under Article 207.1 of the Criminal Code of the Russian Federation (CCRF),Footnote 54 as well as for the public dissemination of knowingly false socially significant information entailing grave consequences under Article 207.2 of the CCRF.Footnote 55
Furthermore, the legislative changes extended criminal sanctions for violating sanitary and epidemiological regulations. Amendments to the Russian Code of Administrative Offences introduced hefty fines of up to five million roubles for journalists spreading fake news. Moreover, pursuant to Article 13.15 of the Code of Administrative Offences of the Russian Federation, committing the same offence twice could lead to a fine of up to ten million roubles.
4. The UK
In the UK, the House of Commons DCMS Select Committee examined the issue of disinformation and fake news from January 2017, focusing on issues such as the definition, role and legal liabilities of social media platforms.Footnote 56
The DCMS Select Committee produced an interim report on 29 July 2018 and a final one in February 2019. The committee’s recommendations include: (1) a compulsory code of ethics for technology companies overseen by an independent regulator with powers to launch legal action; (2) changes to electoral communications laws to ensure the transparency of political communications online; and (3) an obligation for social media companies to take down known sources of harmful content, including proven sources of disinformation.Footnote 57
Moreover, the DCMS Select Committee invited elected representatives from Argentina, Belgium, Brazil, Canada, France, Ireland, Latvia and Singapore to establish a committee on disinformation and fake news (the so-called International Grand Committee), which held its inaugural session in November 2018.Footnote 58
Following the session, members of the International Grand Committee signed a declaration on the “Principles of the law governing the Internet”, which affirms the parliamentarians’ commitment to the principles of transparency, accountability and the protection of representative democracy in regard to the Internet.Footnote 59
5. Australia
In Australia, the government appointed a taskforce to address fake news threats to electoral integrity, while its foreign interference laws, which passed through parliament in June 2018, have also had some bearing on the question of disinformation. Later, the Australian Electoral Commission (AEC) commenced a social media literacy campaign and other activities to coincide with the 2019 federal election. In addition, there have been several recent parliamentary inquiries and an inquiry by the Australian Competition and Consumer Commission (ACCC) examining issues related to fake news.
It should be noted that prior to the July 2018 federal by-electionsFootnote 60 held across four states, the Turnbull government established a multi-agency body, the Electoral Integrity Assurance Taskforce,Footnote 61 to address risks to the integrity of the electoral system, particularly in relation to cyber interference and online disinformation. Agencies involved include the AEC, the Department of Finance, the Department of Home Affairs and the Australian Cyber Security Centre. According to the Department of Home Affairs, the taskforce’s role is to provide the AEC with technical advice and expertise in relation to cyber interference and online disinformation with regard to electoral processes.
A media report on the establishment of the taskforce suggested that its central concern is cybersecurity and disinformation, including fake news and interference with the electoral roll or AEC systems. The media report added that the potential use of disinformation and messaging and any covert operations designed to disrupt the by-elections would also be closely monitored, as the foreign interference “threat environment” had escalated even within the two years since the last federal election in 2016.Footnote 62
Another critical point that should be mentioned is that the National Security Legislation Amendment Act 2018 added new foreign interference offences to the Australian Commonwealth Criminal Code.Footnote 63 The elements of these foreign interference offences could arguably be applied to persons who weaponise fake news in certain circumstances. In particular, the offences extend to persons who, on behalf of a foreign government, engage in deceptive or covert conduct intended to influence a political or governmental process of the Commonwealth or a State or Territory, or to influence the exercise of an Australian democratic or political right or duty, whether in Australia or not.
Some positive signs are emerging from the AEC advertising campaign called “Stop and Consider”. The AEC started this campaign on 15 April 2019 on social media platforms such as Facebook, Twitter and Instagram to encourage voters to carefully check the sources of electoral communication they would see or hear during the 2019 federal election campaign.Footnote 64 In effect, Stop and Consider was a media literacy campaign based on empowering users, positively alerting voters to the possibility of disinformation or false information intended to influence their vote and helping them to check sources of information and thus cast an informed vote.
Since December 2017, the ACCC has been conducting an inquiry into the impact of online platforms on media competition in Australia, including the implications of this impact for quality news and journalism.Footnote 65 The ACCC’s preliminary report discusses a range of intersecting factors that pose a risk of increasing audience exposure to fake news.Footnote 66 These factors include commercial incentives for media companies to produce sensational “clickbait stories” optimised for search engines and news aggregators and designed to go viral. Other potential problems include those associated with news feeds on social media platforms. Such feeds show users a mix of individual news stories with no context regarding source credibility, which makes it difficult for users to discern the quality of the information. In addition, platforms select and prioritise news on the basis of users’ past behaviours and preferences, so news stories that share the same perspectives may be repeatedly made available to consumers.
Recently, the Australian government enacted the News Media and Digital Platforms Mandatory Bargaining Code, which, among other things, also promotes self-regulation by social media companies and diminishes government intervention.Footnote 67
6. Canada
In March 2018, the Canadian House of Commons Standing Committee on Access to Information, Privacy and Ethics (AIPE) began an inquiry into a breach of personal information involving Cambridge Analytica and Facebook.Footnote 68
The AIPE Committee’s preliminary report contained a number of recommendations, mostly amendments to the Personal Information Protection and Electronic Documents Act.Footnote 69 The AIPE Committee’s final report of December 2018 included a number of potential regulatory responses to the problem of misinformation and disinformation on social media.Footnote 70
One recommendation was that social media platforms should be required to be more transparent about the processes behind the dissemination of material online, including clear labelling of automated or algorithmically produced content.
Other suggestions were that platforms should be obliged to take down illegal content, including disinformation and above all fake news, and that platforms and governments should invest more in digital literacy programmes and public awareness campaigns.
7. Burkina Faso
In June 2019, Burkina Faso’s parliament adopted a law that seeks to punish the publication of fake news compromising security operations, false information about rights abuses or the destruction of property, and images and audio from a terrorist attack.
Parliament specifically amended the country’s Penal Code to introduce a series of new offences aimed at fighting terrorism and organised crime, fighting the spread of fake news and suppressing efforts to demoralise the Burkinabe armed forces.Footnote 71 Offenders could face fines of up to £7,000 or a maximum of ten years in jail.
8. Singapore
In May 2019, the Singapore Parliament approved a law (the Protection from Online Falsehoods and Manipulation Act) criminalising the dissemination of fake news online.Footnote 72 The law makes it illegal to spread false statements of fact that compromise security, public peace and safety or the country’s relations with other nations.
In particular, the law punishes those who post fake news with heavy fines and even jail time. In this regard, if a user shares false information, the penalty is a fine of up to $37,000 or five years in prison. What is more, the punishment jumps to $74,000 and a potential ten-year jail term if the falsehood was shared using an inauthentic online account or a bot.
A further important aspect is that social media platforms like Facebook and Twitter face fines of up to $740,000 for their roles in spreading misinformation.
9. India
In India, during the nationwide lockdown imposed in response to the pandemic, more than fifty journalists were arrested under emergency laws for spreading fake news.
On 7 April 2020, Uttar Pradesh police lodged a First Information Report (FIR) against journalist Prashant Kanojia for allegedly making “objectionable remarks” about Prime Minister Modi and Chief Minister Yogi Adityanath on social media.
Shortly afterwards, the Uttar Pradesh police registered another FIR against The Wire, a daily news website, and its editor Siddharth Varadarajan for reporting that Yogi Adityanath had attended a public religious event after the nationwide lockdown was announced.Footnote 73
IV. Legislative and administrative measures regulating fake news in the European Union
Member States of the EU have started to regulate fake news by law, and some EU governments have issued administrative measures accordingly. However, the COVID-19 pandemic has considerably increased legislative initiatives and administrative measures imposed by EU governments. Therefore, this section of the article specifically looks at the Member States that have taken particularly stringent emergency legislative and administrative measures to manage fake news during the COVID-19 pandemic: Germany, France, Italy and Spain. In addition, the social media regulation in force is analysed in order to understand the impact of emergency measures on the right to freedom of expression.
1. Germany
On 28 October 2020, the Federal States in Germany passed the Interstate Media Treaty (Medienstaatsvertrag – “MStV”).Footnote 74 It addresses online disinformation – and thus also fake news – by regulating transparent algorithms, the labelling of social bots, the findability of public service content and journalistic due diligence for social media.Footnote 75
The MStV is the German implementation of the EU Audiovisual Media Services Directive 2010/13/EU,Footnote 76 as amended by Directive 2018/1808/EU.Footnote 77 It replaces the Interstate Broadcasting Treaty (Rundfunkstaatsvertrag – RStVFootnote 78) and is considered an important cornerstone in media policy.
The MStV plays a pivotal role in national efforts to modernise the media landscape and to bring the German legislative framework into line with the European social media legal environment.Footnote 79 As a result, the MStV focuses on telematics services in addition to broadcasting. By covering media intermediaries, social media platforms, user interfaces and video-sharing services, the MStV applies to many players in the social media market.Footnote 80
Previously, the Network Enforcement Act (Netzwerkdurchsetzungsgesetz – NetzDG) had been approved on 30 June 2017Footnote 81 and was recently amended on 28 June 2021Footnote 82 with the specific aim of fighting hate speech and fake news on social networks by improving the enforcement of existing laws.Footnote 83 Basically, with the NetzDG, Germany made fighting hate speech on social networks a priority. In this legal context, the NetzDG also regulates disinformation – and thus specifically fake news – by implementing previous legislation on disinformation.Footnote 84 To achieve this goal, the German government strengthened law enforcement on social networks in order to ensure the prompt removal of objectively criminal content, namely incitement to hatred, abuse, defamation or content that might lead to a breach of the peace by misleading authorities into thinking a crime has been committed.
However, the obligations placed upon private companies to regulate and take down content raise concerns with respect to freedom of expression. A prohibition on the dissemination of information based on vague and ambiguous criteria, such as “insult” or “defamation”, could be incompatible with Article 19 of the ICCPR.Footnote 85
The NetzDG works by requiring social media platforms to provide a mechanism for users to submit complaints about illegal content. Once they receive a complaint, platforms must investigate whether the content is illegal; “manifestly unlawful” content must be removed within twenty-four hours, while other illegal content must generally be removed within seven days.Footnote 86 Failing that, public authorities may impose high fines for non-compliance with these legal obligations. In particular, platforms that fail to remove clearly illegal content may be fined up to €50 million.Footnote 87
Nevertheless, the provisions imposing high fines for non-compliance raise concerns, as the aforementioned obligations may represent undue interference with the right to freedom of expression and privacy. The heavy fines raise proportionality concerns, and they may prompt social networks to remove content that is actually lawful.Footnote 88
Moreover, the NetzDG has been cited by other countries seeking to introduce unduly restrictive intermediary laws or social media regulations that would enable the removal of fake news without a judicial or even a quasi-judicial order. Hence, several criticisms have rightly been levelled against the NetzDG, and some political parties have unsuccessfully presented proposals to amend the law, which they consider unconstitutional, particularly with regard to freedom of expression.Footnote 89
2. France
In France, a law against the manipulation of information was approved by the National Assembly on 22 December 2018 with the aim of further protecting democratic principles from the spread of fake news.Footnote 90 In particular, this law targets the widespread and rapid dissemination of fake news by means of digital tools, especially through the channels of dissemination offered by social media platforms influenced by foreign States.Footnote 91
Thus, the new legislation concentrates especially on election campaigns, both in the run-up to and during elections, in order to focus the available regulatory tools on the real risk, namely attempts to influence election results, as occurred during the US presidential elections and the Brexit campaign.
Legally speaking, the law provides for administrative measures imposing a transparency obligation on platforms, which must report any sponsored content by publishing the name of its author and the amount paid. Additionally, platforms exceeding a certain number of hits per day must have a legal representative in France and publish their algorithms. The law also provides for an injunction procedure that allows the circulation of fake news to be halted quickly.
Last but not least, during the three months preceding an election, an interim judge may determine whether news qualifies as fake news as defined in the 1881 law on the freedom of the press according to three criteria: (1) the news must be manifestly fake; (2) it must have been disseminated deliberately on a massive scale; and (3) it must be liable to cause a breach of the peace or compromise the outcome of an election.
The French law against the manipulation of information establishes a duty of cooperation on the part of social media platforms in order to encourage them to introduce measures to prevent fake news and make these measures public. In this regard, the French Broadcasting Authority, the Superior Audiovisual Council, is assigned the role of preventing, suspending and stopping the broadcast of television services controlled by, or influenced by, other States and that are harmful to the fundamental interests of the country.
In the midst of the health emergency, in order to fight fake news, on 30 April 2020 the French government created a specific section on COVID-19 on its website.Footnote 92 However, this initiative raised concerns among journalists, who considered that the government should not be the judge of information.Footnote 93 Consequently, the French government took down the fake news COVID-19 page after accusations that it had overstepped its constitutional role and infringed upon press freedoms.Footnote 94
More specifically, the National Assembly approved a controversial law concerning COVID-19 information that gives platforms a one-hour deadline to remove related content after being instructed to do so by the authorities.Footnote 95 Critics reasonably claimed that the law limits freedom of expression and is difficult to apply.Footnote 96 Indeed, on 18 June 2020, the Constitutional Council ruled that the obligation on social media to remove illegal content within twenty-four hours was not compatible with freedom of expression.Footnote 97
3. Italy
In Italy, following the constitutional referendum campaign in December 2016, a huge debate arose, and political actors began calling for new regulations to address the proliferation of fake news online. Thus, a number of proposals were made to curb the phenomenon, some of which could have led to the imposition of strict liability on social media platforms.
On 7 February 2017, a Member of Parliament, Senator Adele Gambaro, introduced a particularly controversial bill (the so-called DDL Gambaro) proposing fines and criminal penalties for anyone who publishes or spreads “false, exaggerated, or biased” news reports online.Footnote 98 However, after severe criticism from public opinion, the bill did not move forward in Parliament and was not adopted.
The Italian Minister of the Interior aimed to combat fake news by promoting a system of reporting manifestly unfounded and biased news or openly defamatory content. In this regard, on 18 January 2018, the Minister of the Interior introduced the “Red Button Operational Protocol” to fight the dissemination of fake news online at the time of the political elections in 2018.
The Ministry of the Interior introduced this specific online procedure in order to limit the actions of those who, with the sole intent of conditioning public opinion and tendentiously orienting its thoughts and choices, design and spread unfounded news on topics or subjects of public interest. Specifically, the protocol provided a “red button” signalling service through which users can report online content potentially linked to the phenomenon of fake news. The unit of the Italian state police that investigates cybercrime was tasked with reviewing reports and acting accordingly.Footnote 99
The Constitutional Affairs Committee of the Senate of the Italian Republic is currently discussing bill No. 1900 aimed at establishing a parliamentary committee of inquiry in order to examine the problem of disinformation and, more precisely, the dissemination of fake news on a massive scale.Footnote 100 The bill under discussion does not establish any binding measures to deal with the spread of fake news. On the contrary, its purpose is to empower a committee with a variety of tasks:
(1) Investigating the massive dissemination of illegal, false, unverified or intentionally misleading information and content via traditional and online media.

(2) Ascertaining whether such activities are backed by subjects, groups or organisations that receive financial support, including from foreign sources, with the specific aim of manipulating information and influencing public opinion, including in the context of electoral or referendum campaigns.

(3) Assessing the impact of disinformation on health and in the context of the COVID-19 pandemic.

(4) Evaluating whether disinformation activities pursue the goal of inciting hatred, discrimination and violence.

(5) Exploring whether any connection exists between disinformation and commercial activities, most notably those pursued by websites and digital platforms.

(6) Verifying the status quo from a legal standpoint, as well as the existence and adequacy of procedures implemented by media platforms and social media service providers for the removal of false information and illegal content.

(7) Assessing the existence of social, educational and literacy measures, best practices or initiatives aimed at raising the awareness of individuals about the importance of fact-checking and reliable sources of information.

(8) Determining whether legal or administrative measures aimed at countering and preventing disinformation, as well as crimes committed via the media, are necessary, also with regard to the negative consequences of disinformation for the development of minors and their learning abilities.Footnote 101
The Italian government further engaged in transparency and debunking actions. On 4 April 2020, it set up a monitoring taskforce to combat the spread of fake news related to COVID-19 on the web and on social networks. The main goals of the government monitoring taskforce are to contain the risk that the spread of online disinformation could weaken pandemic containment measures and to promote initiatives in order to increase citizens’ control over the reliability of social network information.
The Italian Ministry of Health also took action to tackle fake news on COVID-19 on the institutional homepage, where it promoted the information campaign “Beware of Hoaxes” with a collection of the most recurrent fake news on social media.Footnote 102 In addition, the Communication Authority (AGCOM) launched an observatory on disinformation online by publishing reports, sending information to ministries and involving various stakeholders in the design of policies against disinformation.Footnote 103
The actions taken by the Italian government have rightly been criticised, both because they pose many problems concerning the right to freedom of expression, media freedom and media pluralism,Footnote 104 and because they adopt a notion of fake news that diverges from that of the European Commission, which considers the term inadequate and misleading.Footnote 105
4. Spain
In Spain, too, the government started monitoring online information in order to contain the spread of fake news regarding COVID-19 by establishing a specific protocol against disinformation campaigns.Footnote 106
On 30 October 2020, the Spanish government issued a ministerial order approving the protocol called the “Procedure for Intervention against Disinformation”, adopted by Spain’s National Security Council, in order to prevent, detect and respond to disinformation campaigns, as well as to establish coordination mechanisms.Footnote 107
The document makes provision for the possibility of carrying out communication campaigns to counter fake news stories without censoring them. It is up to the government to decide what exactly constitutes misinformation, with no representatives from the media or journalist associations involved in the process.Footnote 108
The order passed by the coalition government – led by the Socialist Party (PSOE), with junior partner Unidas Podemos – is based on the concept that the use of fake news to destabilise a country or interference in public opinion by a third country constitute forms of attack.
However, opposition parties accused the government of creating a “Ministry of Truth” that would allegedly make decisions regarding content and provide media outlets with guidelines to follow. Although the Madrid Press Association acknowledged that the State needs to fight disinformation, it warned of the risk that the plan would lead the government to act more like a censor than a guarantor of truth.
In reality, the Spanish plan leaves everything in the hands of the government and calls for the media to be consulted only if needed; yet it is the media that should scrutinise the government, not the government that should control the media.
V. Safeguarding the right to freedom of expression in international and European Union law
The administrative measures governments have recently adopted to regulate fake news may undermine fundamental rights, especially the right to freedom of expression, and the COVID-19 pandemic is emphasising this trend.Footnote 109 Indeed, we have seen that by governing disinformation and especially by fighting the spread of fake newsFootnote 110 on social media platforms,Footnote 111 many governments have enacted legislative and administrative measures restricting freedom of expression.Footnote 112
In this section, disputing the government policies in question, I shall argue that some important principles of public law guarantee and protect freedom of expression as a fundamental individual right at the international and European levels, never subjecting it to conditions contrary to the enjoyment of that right.Footnote 113
Indeed, although these guarantees might seem well settled in the legal literature, discussing the right to freedom of expression in relation to social media platforms, and especially in relation to the recent phenomenon of fake news, requires a review of the main international and European rules that expressly protect this right. Such guarantees will be the focus of this section. In a democratic, free and open society, the right to express one’s opinion may not be restricted in the name of fighting fake news.
Hence, as several scholars have clearly stated, in international public law the expression of ideas and opinions is considered a basic right of all persons: it is part of the full development of the individual, and it also represents a milestone of a free and democratic society, providing the vehicle for the exchange and development of opinions.Footnote 114 Many important rules of international law safeguard freedom of expression; those indicated below suffice to illustrate the limits that governments encounter when regulating the phenomenon of fake news on social media.
Firstly, the Universal Declaration of Human Rights (UDHR), proclaimed by the UN General Assembly in Paris on 10 December 1948, ensures an effective protection of the freedom of expression pursuant to Article 19 of the UDHR in the following terms: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive, and impart information and ideas through any media and regardless of frontiers”.Footnote 115
In particular, the last part of Article 19 of the UDHR provides a legal basis for the dissemination “through any media” of information by establishing that it can be conveyed “without interference”.
According to this interpretation of Article 19 of the UDHR, a free, uncensored and unhindered media is essential in any society to ensure freedom of opinion and expression as the basis for the full enjoyment of a wide range of other human rights (eg rights to freedom of assembly and association, the exercise of the right to vote). In other words, it constitutes one of the cornerstones of a democratic society.Footnote 116
Secondly, the ICCPR, adopted by the General Assembly on 16 December 1966,Footnote 117 guarantees freedom of expression and opinion in paragraphs 1 and 2 of Article 19, which read: “1. Everyone shall have the right to hold opinions without interference. 2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice”.Footnote 118
In spite of this, paragraph 3 of Article 19 of the ICCPR allows certain limitations on the exercise of the right to freedom of expression laid down in paragraph 2, since that right carries with it special duties and responsibilities.Footnote 119 Consequently, governments may invoke Article 19(3) of the ICCPR to impose restrictions (1) for the respect of the rights or reputations of others and (2) for the protection of national security, public order, public health or morals.
Yet if we consider the specific terms of Article 19(1) of the ICCPR, as well as the relationship between opinion and thought laid down by Article 18 of the ICCPR, we can affirm that any derogation from paragraph 1 would be incompatible with the object and purpose of the Covenant, even in the event of public health risks such as the COVID-19 pandemic.Footnote 120 This argument finds support in the UN Human Rights Committee’s statement that freedom of opinion is an element from which “it can never become necessary to derogate … during a state of emergency”.Footnote 121
As the free communication of information and ideas about public and political issues is indispensable for democracy, the ICCPR embraces the right for the media to receive information on the basis of which it can fulfil its function.Footnote 122 This entails a free media able to comment on public issues without censorship or restraint and to inform public opinion,Footnote 123 and the public consequently has a corresponding right to receive media output.Footnote 124
We can consistently note from this perspective that Article 19(2) of the ICCPR explicitly includes in this right the “freedom to seek, receive and impart information and ideas … through any other media of … choice”. Thus, governments should ensure that legislative and administrative frameworks for the regulation of social media platforms are consistent with the provisions of Article 19 of the ICCPR.
We may further observe, in accordance with the statements of the UN Human Rights Committee,Footnote 125 that any restrictions on the operation of websites, blogs or any other Internet-based, electronic or other such information dissemination system, including systems to support such communication (eg Internet service providers or search engines), are only permissible if they comply with paragraph 3 of Article 19 of the ICCPR. In addition, permissible restrictions should generally be content-specific, whereas generic bans on the operation of certain sites and systems must be considered incompatible with Article 19 of the ICCPR. Likewise, it is also inconsistent with that provision to prohibit a site or information dissemination system from publishing material solely because it may be critical of the government or the political and social system espoused by the government.Footnote 126
In international law, other rules enshrine the right to freedom of expression, namely the International Covenant on Economic, Social and Cultural Rights (ICESCR),Footnote 127 which guarantees the right to freedom of expression under Article 15(3),Footnote 128 and the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD),Footnote 129 which protects it under Article 5(d)(vii) and (viii), namely the rights to freedom of thought, conscience and religion and to freedom of opinion and expression, respectively.
As regards the EU’s legal system, freedom of expression, media freedom and pluralism are enshrined in the EU Charter of Fundamental Rights (CFR), as well as in the European Convention on Human Rights (ECHR). Furthermore, we might claim more generally that no country can join the EU without guaranteeing freedom of expression as a basic human right according to Article 49 of the Treaty on European Union (TEU).Footnote 130
The ECHRFootnote 131 recognises the right to freedom of expression under Article 10(1), which states: “Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This Article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises”.Footnote 132
In particular, Article 10(1) of the ECHR describes several components of the right to freedom of expression, including the freedom to express one’s opinion and the freedom to communicate and receive information. As a matter of principle, the protection afforded by Article 10 of the ECHR should therefore extend to any expression, regardless of its content, disseminated by any individual, group or type of media. On this basis, it can be argued that the right protected by Article 10 of the ECHR fully covers freedom of expression on social media, as it includes the right to hold opinions and to receive and impart ideas and information without interference and regardless of frontiers – frontiers that social media platforms, above all, have removed.
The CFRFootnote 133 guarantees and promotes the most important freedoms and rights enjoyed by EU citizens in one legally binding document. The CFR under Article 11(1) states that “everyone has the right to freedom of expression”, and at the same time it ensures that this right includes “freedom to hold opinions and to receive and impart information and ideas without interference by public authorities and regardless of frontiers”.
Moreover, the CFR explicitly requires governments to respect “the freedom and pluralism of the media” in accordance with Article 11(2), thus avoiding any legislative and administrative measures that could undermine or even merely jeopardise human rights, and especially the freedom of expression.
We should bear in mind that, on the basis of its interpretation of the ECHR and the CFR, in addition to the other fundamental rules of the EU legal system, the European Court of Human Rights (ECtHR) has repeatedly stated that freedom of expression constitutes “one of the essential foundations of a democratic society and one of the basic conditions for its progress and for each individual’s self-fulfilment”.Footnote 134
In its decision-making, the ECtHR has considered national constitutional practices that afford a high level of protection to freedom of expression and has frequently drawn on the ICCPR, as well as other international instruments, in protecting freedom of expression.
The Strasbourg Court has consistently underlined certain primary tasks that the media must be able to accomplish, free from government interference, if people’s enjoyment of freedom of expression is to be fulfilled. In summary, the media should: (1) share information and ideas concerning matters of public interest; (2) perform the leading role of public watchdog; and (3) meet the need for impartial, independent and balanced news,Footnote 135 information and comment.
The ECtHR’s case law principles on the right to freedom of expression are summarised in Bédat v. Switzerland of 29 March 2016.Footnote 136 It can be argued that the principles set out in that judgment may, under certain conditions, bring fake news within the protected exercise of the right to freedom of expression.
The ECtHR recognises that freedom of expression represents one of the basic foundations of a democratic society and one of the essential conditions for its progress and for each individual’s self-fulfilment. In Bédat v. Switzerland, the Strasbourg Court established that, subject to Article 10(2) of the ECHR, this right is applicable not only to “information” or “ideas” that are favourably received or regarded as inoffensive or as matters of indifference, but also to those that – as could be the case for fake news – offend, shock or disturb. Such are the demands of pluralism, tolerance and broadmindedness, without which there is no “democratic society”. As set forth in Article 10, this freedom is subject to exceptions, which must, however, be construed strictly, and the need for any restrictions must be established convincingly.Footnote 137
Furthermore, we can also observe, concerning the level of protection, that there is little scope under Article 10(2) of the ECHR for restrictions on freedom of expression in two fields, namely political speech and matters of public interest.Footnote 138 Accordingly, a high level of protection of freedom of expression, with the authorities thus having a particularly narrow margin of appreciation, will normally be accorded where the remarks concern a matter of public interest, as in the case of comments on the functioning of the judiciary, even in the context of proceedings that are still pending.Footnote 139 A degree of hostilityFootnote 140 and the potential seriousness of certain remarksFootnote 141 do not obviate the right to a high level of protection, given the existence of a matter of public interest.Footnote 142
This section has analysed some important rules of international and European law and has argued that freedom of expression is protected and guaranteed as a fundamental human right that may not be subjected to conditions incompatible with its enjoyment.
The review of these fundamental rules supports the argument that government policies addressing online disinformation, and especially the phenomenon of fake news, are lawful only if they comply with the right to freedom of expression as established at the international and European levels.
The next section of this paper will discuss the main regulatory approaches – namely empowering users, self-regulation and government intervention – that States around the world might consider when regulating fake news in observance of the right to freedom of expression.
VI. Empowering users, self-regulation and government intervention
Having emphasised the importance of protecting freedom of expression in international and European law, this section surveys regulatory approaches that may be implemented by States in order to govern instances of disinformation and especially fake news.
In doing so, I recommend self-regulation and, above all, empowering users rather than government intervention as regulatory strategies that address fake news while minimising undue interference with the right to freedom of expression.
Basically, at least three different regulatory approaches to managing information on social media platforms could be proposed. The first is based on empowering users,Footnote 143 leveraging the ability of individuals to evaluate and detect fake news.Footnote 144 The second and third approaches concern the accountability of the social media platforms that can be implemented by either self-regulation or government intervention.
The first approach is essentially based on fact checking by individual websites.Footnote 145 Specifically, a number of major organisations in the USA, such as PolitiFact, FactCheck.org, The Washington Post and Snopes, fact check rumours, health stories and political claims, especially those that often appear on social media.Footnote 146 Fact checking is also carried out by reliable sources of information such as newspapers, where news is almost always subject to editorial scrutiny.Footnote 147 Moreover, another approach based on empowering users seeks to increase the ability of individuals to assess the quality of sources of information by educating them, although it is unclear whether such efforts can actually improve the ability to assess credibility and, if so, whether this will have long-term effects.Footnote 148
Yet in many cases this approach inexorably collides with social reality. Recently, the behavioural sciences have demonstrated that people are predictably irrational.Footnote 149 This may mean that people are irrational consumers of news: they prefer information that confirms their pre-existing attitudes (selective exposure), view information that is consistent with their pre-existing beliefs as more persuasive than dissonant information (confirmation biasFootnote 150) and are inclined to accept information that satisfies them (desirability bias).Footnote 151 By preselecting information that interests them, social media users tend to reinforce their own worldviews and opinions. As a result, social media users tend to aggregate into ideologically homogeneous groups (communities), on which they focus their attention and from which they obtain news. Furthermore, it has been authoritatively argued that people are inclined to remain within these specific communities, thus contributing to the emergence of polarisation.Footnote 152 Distinct and separate communities that do not interact with each other arise spontaneously, creating opposing groups, each of which focuses only on a specific narrative – which therefore tends to strengthen and polarise within the group – while ignoring the alternatives (echo chambers).Footnote 153
I am persuaded, despite the problems just mentioned, that the approach of empowering users – if properly pursued and implemented by significantly enhancing quality information and actively correcting disinformation – might represent an effective and democratic strategy to address fake news on social media platforms by avoiding undue interference with the right to freedom of expression.Footnote 154 The empowerment approach, in essence, can create a sound foundation for dealing with fake news by providing social media users with the tools to detect and share high-quality information.
Empowering users could also allow social media platforms to play a leading role in reducing the spread and impact of fake news through their algorithms and their handling of bots.Footnote 155 Major platforms such as Google, WhatsApp, Twitter and Facebook may use complex statistical models that predict and maximise engagement with content in order to improve the quality of information.Footnote 156 The platforms can therefore signal to users the quality of information and of its sources by incorporating these signals into the algorithmic rankings of their content. They can similarly decrease the personalisation of political information relative to other types of content, thus reducing the phenomenon of “echo chambers”.
Likewise, the platforms can effectively minimise the impact of the automated dissemination of content by bots (ie accounts that automatically share news from a set of sources, with or without reading it). Some of the major platforms, especially Facebook and Twitter, have recently pursued the aim of managing fake news by modifying their algorithms to improve the quality of their content and by counteracting bots that disseminate disinformation.
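By way of illustration only, and without describing any platform’s actual systems, the following minimal Python sketch shows how a ranking score might blend a hypothetical engagement prediction with an assumed source-reliability signal, and how a crude heuristic might flag bot-like posting patterns; every name, score and threshold in it is an assumption made for the example.

```python
# Minimal illustrative sketch, not any platform's actual system: posts are ranked by a
# blend of predicted engagement and an assumed source-reliability score, and accounts
# with implausibly high, share-only activity are flagged as bot-like.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    source: str                   # eg a news domain
    predicted_engagement: float   # output of a hypothetical engagement model, 0..1

# Hypothetical reliability scores (0 = unreliable, 1 = reliable), eg informed by fact-checkers.
SOURCE_RELIABILITY = {"example-news.org": 0.9, "rumour-mill.example": 0.2}

def rank_feed(posts: list[Post], quality_weight: float = 0.5) -> list[Post]:
    """Order posts so that source reliability counterbalances raw engagement."""
    def score(post: Post) -> float:
        reliability = SOURCE_RELIABILITY.get(post.source, 0.5)  # neutral default for unknown sources
        return (1 - quality_weight) * post.predicted_engagement + quality_weight * reliability
    return sorted(posts, key=score, reverse=True)

def looks_like_bot(posts_per_hour: float, share_ratio: float) -> bool:
    """Crude heuristic: very high posting frequency made up almost entirely of re-shares."""
    return posts_per_hour > 30 and share_ratio > 0.95

if __name__ == "__main__":
    feed = [
        Post("1", "rumour-mill.example", predicted_engagement=0.9),
        Post("2", "example-news.org", predicted_engagement=0.6),
    ]
    for post in rank_feed(feed):
        print(post.post_id, post.source)   # the more reliable source now ranks first
```

The sketch simply makes concrete the idea described above: the higher the weight given to the quality signal, the less an engagement-maximising model alone determines what users see.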
The second and third approaches refer to self-regulation and government intervention, respectively. We will analyse them here by comparing the drawbacks of the latter with the benefits of self-regulation, bearing in mind that the State policies discussed above were essentially based on government intervention (see supra, Sections III and IV).
Self-regulation generally increases online accountability while providing more flexibility and stronger safeguards than government intervention. Indeed, self-regulation may help preserve the independence of social media platforms by shielding them from government interference. Fundamentally, self-regulatory mechanisms aim to foster public trust in the media. Because social media platforms understand best how their own services work, they have an incentive to produce rules that are more effective than short-sighted and therefore more invasive government measures.
As social networks have caused considerable concern, States around the world have increasingly called for government intervention in order to protect their citizens from (fake) news that is considered harmful. Legislative and administrative measures have shown, however, that government intervention pursuing legitimate goals can easily cause negative side effects, including becoming a tool for suppressing opposition and critics (see supra, Sections III and IV).
By contrast, as little government intervention as possible is required if the media are to continue to fulfil their role as watchdogs of democracy. Thus, the self-regulatory approach can help prevent stringent legislative and administrative measures against social media platforms, which would undermine rather than protect fundamental rights such as the freedom of expression. Moreover, although self-regulation might provide an alternative to courts for resolving media content complaints in this context, members of the public can still choose to take matters to court, as this remains an irrepressible core human right.
The social network environment makes legal supervision difficult and therefore opens up new prospects for social media self-regulation. First of all, because we are experiencing a time of rapid and constant change in media technology, self-regulation offers more flexibility than the government regulation option. Secondly, self-regulation is less costly for States and society in general. However, to avoid the risk of self-regulation benefitting (only) the interests of social media companies, provisions on transparency and efficiency must be reinforced.
A coherent strategy based on self-regulation and empowering users with the aim of addressing the spread of disinformation, and especially fake news, across social networks has recently been implemented in EU law. To this end, the European Commission facilitated the adoption of the so-called Code of Practice on Disinformation (CoP).Footnote 157 The CoP was warmly welcomed by representatives of leading social media platforms (as well as advertisers and the advertising industry), who volunteered to establish self-regulatory standards to fight online disinformation. Indeed, media giants such as Facebook, Google, Twitter, Mozilla, Microsoft and TikTok officially signed the CoP and accepted the rules on periodic European Commission monitoring.Footnote 158
In 2019, the European Commission implemented monitoring policies to define the progress of the measures adopted by social media platforms to comply with the commitments envisaged by the CoP. In particular, the European Commission asked platforms to report on a monthly basis on the actions undertaken to improve the scrutiny of ad placements, to ensure transparency of political and issue-based advertising and to tackle fake news and the malicious use of bots. In addition, further monitoring actions have been undertaken to counter the spread of fake news during the COVID-19 pandemic. The platforms that signed the CoP have stepped up their efforts against disinformation and fake news on the coronavirus by providing specific reports on the actions taken to implement transparency.Footnote 159 As a result, a first general study to assess the implementation of the CoP has been released.Footnote 160
Concerning the fake news monitoring policies put in place by social media platforms during the health crisis, signatories effectively joined the ad hoc “Fighting COVID-19 Disinformation Monitoring Programme”.Footnote 161 Indeed, the COVID-19 disinformation monitoring programme has provided an in-depth overview of the actions taken by platforms to fight false and misleading information regarding coronavirus and vaccines. It has proven to be a useful transparency measure to ensure platforms’ public accountability and has put the CoP through a stress test.
Specifically, the signatories to the CoP have been requested to provide information regarding: (1) the initiatives to promote authoritative content at the EU and Member State levels; (2) tools to improve users’ awareness; (3) information on manipulative behaviour on their services; and (4) data on flows of advertising linked to COVID-19 disinformation on their services and on third-party websites.
Fundamentally, the baseline reports from Facebook, Google, Microsoft–LinkedIn, TikTok, Twitter and Mozilla summarise the actions taken by these platforms to reduce the spread of false and misleading information on their services, covering a period from the beginning of the health emergency until 31 July 2020. More importantly, these reports offer a comprehensive overview of the relevant actions. Overall, baseline reports demonstrate that the signatories to the CoP have intensified their efforts compared with the first year of implementation of the Code’s commitments.
In general, it can be said that the platforms have enhanced the visibility of authoritative sources by giving prominence to COVID-19 information from the World Health Organization (WHO) and national health authorities and by providing new tools and services to facilitate access to relevant and reliable information concerning the health emergency.
Furthermore, the reports reveal that the platforms have addressed a considerable quantity of content containing false or misleading information, especially by removing or demoting content liable to cause physical harm or to undermine public health policies. From this perspective, the platforms have stepped up their efforts to detect instances of social media manipulation, malign influence operations and coordinated inauthentic behaviour. Even so, the platforms did not detect coordinated disinformation operations with a specific focus on COVID-19 on their services, although they did detect a high number of items containing false information about the coronavirus.
In addition, it may be noted that these reports emphasise strong measures to reduce the flow of advertising on third-party web pages supplying disinformation about the coronavirus, while providing free COVID-19-related advertising space for government and public health authorities.
In detail, the reports include quantitative data illustrating the impact of the platforms’ policies. In particular, Google has given prominence to articles published by EU fact-checking organisations, which generated more than 155 million impressions over the first half of 2020.Footnote 162 Mozilla has made greater use of browser space (Firefox snippets) and features (Pocket) to promote important public health information from the WHO, leading to more than 35 million impressions and 25,000 clicks on the snippets in Germany and France alone, while the curated coronavirus hub in Pocket generated more than 800,000 page views from more than 500,000 users around the globe. It has also provided expertise and opened up datasets on Firefox usage in February and March to help researchers investigating social distancing measures.Footnote 163
We can also see how Microsoft–LinkedIn shares with interested members a “European Daily Rundown”, namely a summary of the day’s news written and curated by experienced journalists and distributed to members in all twenty-seven EU Member States. The “European Daily Rundown” has a reach of approximately 9.7 million users in the EU.Footnote 164 Regarding Facebook, the platform has referred over two billion people globally to resources from the WHO and other public health authorities through its “COVID-19 Information Center”, with over 600 million people clicking through to learn more.Footnote 165 Twitter’s COVID-19 information pages were visited by 160 million users. Such pages bring together the latest tweets from a number of authoritative and trustworthy government, media and civil society sources in local languages.Footnote 166 Lastly, we can also note that TikTok’s informational page on COVID-19 has been visited over 52 million times across their five major European markets (the UK, Germany, France, Italy and Spain).Footnote 167
Legally speaking, the CoP aims to achieve the objectives defined by the European Commission in its Communication COM/2018/236, “Tackling Online Disinformation: A European Approach”, presented in April 2018,Footnote 168 by setting a wide range of commitments, from transparency in political advertising to the demonetisation of purveyors of disinformation. Communication COM/2018/236 in fact outlined the key overarching principles and objectives that should guide the actions of Member States to raise public awareness about disinformation and fake news. I contend that, in doing so, the EU has correctly interpreted self-regulation as a more effective approach than government intervention for addressing fake news as it is more flexible and above all less detrimental to users’ rights than the severe administrative measures that governments might adopt.
However, what I wish to emphasise even more strongly is that the European Commission has rightly recommended implementing user empowerment. In this regard, Section I point (x) of the CoP directly seeks to promote the empowerment of users through tools enabling a customised and interactive online experience, so as to enable users to fully grasp the meaning of content and to easily access different news sources reflecting alternative viewpoints, as well as providing appropriate and effective means for them to report disinformation and fake news.
To this end, Section I point (xi) of the CoP fosters “fact-checking activities” on social media platforms, requiring reasonable measures to enable privacy-compliant access to data and cooperation in providing relevant data on how their services function, including data for independent investigation by academic researchers and general information on algorithms.Footnote 169
A first example that seems to go in this direction is represented by the Facebook Oversight Board. In 2018, the Founder and Chief Executive Officer of Facebook, Mark Zuckerberg, announced the creation of this board with the aim of ensuring an additional and independent control policy on content removal or account suspensions for alleged violations of Facebook community rules.Footnote 170 Indeed, the scope of the Oversight Board – a body of independent experts who review Facebook’s most challenging content decisions and focus on important and disputed cases – was to serve as an appellate review system for user content and to make content-moderation policy recommendations to Facebook.Footnote 171
It has been argued that the Oversight Board represents an important innovation and a welcome attempt to disperse the enormous power over online discourse held by Facebook. Specifically, it could help to make that power more transparent and legitimate by encouraging dialogue around how and why Facebook’s power is exercised in the first place. This is a more modest purpose than becoming an independent source of universally accepted free speech norms, but it is still ambitious for an institution that is breaking new ground.Footnote 172
More importantly, without going into the question of whether it can authoritatively resolve clashing conceptions of freedom of expression, the establishment of an autonomous board that strikes a balance between potentially competing rights certainly represents a substantial improvement on Facebook’s internal moderation practices, which have proved incomplete and largely ineffective. The Facebook Oversight Board might represent the first “platform-scaled moment” of transnational Internet adjudication of online speech. It thus marks a step towards empowering users by involving them in private platform governance and providing them with a minimum of procedural due process.Footnote 173
It should be borne in mind, however, that this goal cannot be effectively achieved by Facebook or other social media platforms in the absence of targeted government policies that facilitate the implementation of user empowerment strategies.
Hence, and more generally, I claim that empowering users could be a crucial task that the Member States of the EU and other States around the world should take on in the future. Empowering users could become a key challenge, especially in order to avoid or at least limit stringent government interventions and ineffective or counterproductive administrative measures that would undermine fundamental human rights.
As it is based on a careful combination of self-regulation and empowering users, the recent regulatory approach applied in the European Commission’s CoP might reasonably play a role in national policies to face the challenge of fake news. Nevertheless, it is also my opinion that careful adjustments will need to be made to regulatory strategies depending on the social, political and legal frameworks in which they are to be implemented, so that any measures will be proportionate and adequate to the risks to be managed, the problems to be solved and, ultimately, the human rights to be protected.
Theoretically, as I underline here, if we try to look beyond fake news as a negative phenomenon and start considering it as a shared concern of “mature” democracies, and at the same time if we bear in mind the key role that social media play as a watchdog in a democratic society,Footnote 174 then we might wonder whether we will still be willing to accept government policies that violate our freedom of expression.
Arguably, a first and significant finding in this direction emerges in the European Commission’s policy, which seems to take these aspects into due consideration in the legal context of the CoP. As a matter of fact, in line with Article 10 of the ECHR and the principle of freedom of expression, Section I point (vvii) of the CoP establishes that social media platforms should not be compelled by governments, nor should they adopt voluntary policies, to delete or prevent access to otherwise lawful content or messages solely on the basis that they are thought to be false.Footnote 175
Lastly, de jure condendo, some consideration should be given to the Digital Services Act (DSA), with particular regard to the empowerment of users.Footnote 176 Indeed, the proposal for a regulation on a Single Market for Digital Services – namely the DSA – represents one of the key measures within the European digital strategy.Footnote 177 In line with what was announced by the European Commission in the Communication “Shaping Europe’s Digital Future”,Footnote 178 the initiative was presented with a view to an overall review of the EU regulatory framework for digital services. In particular, the DSA aims, on the one hand, to increase and harmonise the responsibilities of social media platforms and information service providers by strengthening control over the content policies of platforms in the EU and, on the other hand, to introduce rules to ensure the fairness and contestability of digital markets.
More specifically, the European Commission has taken into consideration the recent emergence of three fundamental problems: firstly, the increased exposure of citizens to online risks, with particular regard to damage from illegal activities and violations of fundamental rights; secondly, the coordination and effectiveness of supervision of platforms, which are considered ineffective due to the limited administrative cooperation framework established by the E-Commerce Directive to address cross-border issues; and thirdly, the fragmented legal landscape due to the first initiatives to regulate digital services at a national level – this has resulted in new barriers in the internal market that have produced competitive advantages for already-existing very large platforms and digital services.
It should be noted that the DSA seeks to ensure the proper functioning of the single market in relation to the provision of online intermediation services across borders. It sets out a number of specific objectives, such as: (1) maintaining a secure online environment; (2) improving the conditions for innovative cross-border digital services; (3) establishing effective supervision of digital services and collaboration between authorities; and especially (4) empowering users and protecting their fundamental rights online.
Regarding empowering users, the DSA upholds the general principle that “what is illegal offline must also be illegal online”.Footnote 179 With this in mind, the new regulation introduces: (1) new harmonised procedures for the faster removal of illegal contents, products and/or services; (2) more effective protection of online users’ rights and internal complaint-management systems – including mechanisms for user reporting and new obligations regarding vendor traceability; and (3) a general framework of enforcement of the legislation thanks to the coordination between the national authorities, and particularly through the designation of the new figure of the “digital services coordinator”.
With respect to self-regulation, the DSA is based on the rationale that regulating social media companies through the CoP alone was not sufficient to address online harms. Nevertheless, in my opinion, the risk assessment and mitigation obligations set forth in Articles 26 and 27 of the DSA can be viewed as an effective self-regulatory tool. From this perspective, it can reasonably be said that the code of conduct has not been fully successful, which has led the DSA to lay down more stringent requirements.
It must be said that the DSA has its strengths and weaknesses. The main objective of subjecting social platforms to specific obligations, hitherto substantially ungoverned by national and European regulatory frameworks, can be advantageous, but only if it serves to promote freedom of expression and not just the security of the platforms themselves.
To this end, from the perspective of empowering users, transparency policies should be encouraged in order to provide users with the tools to fully understand the potential of the digital environment on social media platforms and therefore to actively participate in the public debate.
Finally, the motto “what is illegal offline must also be illegal online” cannot be taken literally. In my opinion, the particular context must be considered – namely the online and digital sphere in which the rights and duties of both users and platforms are exercised.
VII. Conclusion
This article seeks to challenge the stringent legislative and administrative measures governments have recently put in place, analysing their negative implications for the right to freedom of expression and suggesting different regulatory approaches in the context of public law.
It began by exploring the legal definition of fake news in academia in order to establish the essential characteristics of the phenomenon (Section II).
It then went on to assess the legislative and administrative measures implemented by governments at both the international and EU levels (Sections III and IV, respectively), showing how they risk undermining a core human right by curtailing freedom of expression, but adding that many governments worldwide are regulating the spread of information on social media under the pretext of addressing fake news.
I have emphasised that governments are doing this above all to prevent the uncontrolled dissemination of false and misleading news that could well worsen the adverse effects of the COVID-19 health emergency.
This paper also claims that there is an equally non-negligible risk that governments might use the health emergency as a ruse for implementing draconian restrictions on the right to freedom of expression, in addition to increasing social media censorship (eg chilling effects).
Specifically, starting from the premise of social media as a “watchdog” of democracy and moving on to the contention that fake news is a phenomenon of “mature” democracy, I have argued that public law already protects freedom of expression and ensures its effectiveness at the international and EU levels through some fundamental rules (Section V).Footnote 180
Lastly, I have explored key regulatory approaches and, as an alternative to government intervention, have proposed empowering users and self-regulation as strategies to manage fake news by mitigating the risks of undue interference in the right to freedom of expression (Section VI).
To conclude, in this section, I offer some remarks on the proposed solution by recommending the implementation of legal tools such as “reliability ratings” of social media platforms in order to enhance the management of information and particularly to minimise the risks related to fake news, especially in a time of emergency.
The regulatory approaches proposed to manage the fake news phenomenon on social media, namely self-regulation and especially empowering users (see supra, Section VI), might be implemented through public policies focused on long-term behavioural incentives rather than stringent short-term legislative and administrative measures.
As a matter of fact, behavioural science, and particularly psychology,Footnote 181 can play a leading role in public policy.Footnote 182 Recent important surveys on the behavioural approach support the long-term effectiveness of active psychological inoculation as a means of building resistance to fake news and disinformation in general.Footnote 183
Overall, these findings might strengthen the argument in favour of introducing legal tools for measuring the trustworthiness of information on social media platforms. To do so, in my opinion, governments could promote policies empowering users through a reliability rating mechanism.
In this regard, governments may set up the management of reliability ratings through independent third-party bodies, namely regulatory agencies or authorities, in order to carry out the various steps of the procedure transparently, checking its correctness and reporting the results to the public.
Specifically, reliability ratings can measure the frequency and percentage of fake news spread on a social media platform over a certain time frame. Thus, based on the frequency and percentage of fake news it disseminates, the platform could be considered statistically reliable or otherwise by those using it.Footnote 184
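For illustration only, and assuming a hypothetical audit procedure with invented figures, a reliability rating of this kind could be computed as the share of sampled items not classified as fake news over the chosen time frame, as in the brief Python sketch below; the function name, sample size and threshold of what counts as “fake” are all assumptions made for the example.

```python
# Illustrative only: one way an independent body might compute a platform's reliability
# rating from an audited sample of posts over a given time frame. All figures are invented.
def reliability_rating(total_items_sampled: int, fake_items_found: int) -> float:
    """Return the share of audited items NOT classified as fake news (between 0 and 1)."""
    if total_items_sampled <= 0:
        raise ValueError("the audited sample must contain at least one item")
    return 1 - fake_items_found / total_items_sampled

# Hypothetical quarterly audit: 12 of 4,000 sampled items were flagged as fake news.
rating = reliability_rating(total_items_sampled=4000, fake_items_found=12)
print(f"Quarterly reliability rating: {rating:.1%}")   # prints "Quarterly reliability rating: 99.7%"
```

Published periodically by the independent body suggested above, such a figure would let users compare platforms on a common, transparent basis without any authority passing judgment on individual items of content.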
It might be reasonable to assume that reliability ratings may act as an incentive to enhance the efficient management of fake news among both users and owners of social media. If users can consciously decide whether or not to use a certain platform based on reliability ratings, the owner would probably be induced to adopt more effective measures to deal with fake news in order not to lose users.
At the same time, by maximising its reliability rating, one platform may incentivise another to do so too in order not to lose its reputation and therefore its users. Indeed, the threat of losing users would incentivise a platform to ensure the reliability of information in a better and more effective way. In other words, reliability ratings can stimulate competition among social media platforms to effectively manage information and address fake news.
It should be noted that reliability ratings also reveal the importance of self-regulation. To avoid or at least minimise the risk of losing their reputation and users, the owners of social media platforms will certainly implement effective legal systems to regulate information and mitigate fake news phenomena.
Hence, it may be reasonable to argue, from a legal point of view, that empowering users can play a crucial and proactive role in effectively managing fake news on social media in the era of emergency for at least three compelling reasons: (1) to build and maintain the trust of other users; (2) to ensure and preserve the function of social media as a watchdog in democratic society; and (3) to promote and protect freedom of expression by limiting undue interference by regulators.
On this last point, it must be said that reliability ratings in governing fake news – especially in emergencies – should not interfere with users’ freedom of expression. Users, in my view, remain absolutely “free” to consider a social media platform trustworthy or not based on their own personal and unquestionable opinion. In fact, the contents of a platform that has implemented reliability ratings should not be “labelled” as reliable a priori, precisely because they are subject to the right of freedom of expression.
It should also be said that in order not to create oligopolies or monopolies, reliability ratings should not per se create advantageous positions when users search for news on social media platforms using Internet search engines (eg because news from “reliable” platforms is placed by default “at the top” in the search results on the Internet and thus becomes more visible in general).
Specifically, an effective and, above all, “democratic” reliability rating model should promote user participation in the ex post evaluation of a platform’s reliability. In doing so, the reliability rating can take into account the diversity of opinions typical of freedom of expression and an open and plural society, promoting the comparison between different thoughts on the same news.
Lastly, at the same time, involving users in the debate on the effective reliability of a social media platform could trigger mechanisms of democratic control by the public in order to govern fake news in the “era of emergency”.