In its early days, the internet was often characterised as a digital wild west. On this early view, there was a new electronic frontier, anarchic and untamed, beyond the reach of law and regulation.Footnote 1 That view was soon displaced as laws in the offline world applied online and new forms of regulation emerged. Nonetheless, the idea of the internet and digital communications as a space for freedom of expression has lived on in the minds of many people. In practice, users have experienced an increase in the opportunities to speak freely, given the ease of posting expression online. While that freedom is often celebrated, many commentators also point to the harms that can follow such freedoms. These range from the rapid spread of gossip and the decline of privacy to the growth in cyberstalking, bullying and hate speech, the echo-chamber effect and the persistence of falsities and conspiracies.
Despite the libertarian aspirations associated with digital communications, there is also a danger that a greater quantity of people's everyday expression is likely to be subject to legal regulation. Two decades ago, ill-judged remarks made in the heat of the moment or poor taste jokes among friends were unlikely to be on the radar of law enforcers. Now that those comments can be made available to the world at large and remain recorded and searchable, such expression has greater potential to come to the attention of prosecutors and litigators. The result is that people enjoy greater freedom of expression, in that they can communicate with a wider range of people in different places and times than before, yet people's everyday expressive activities are potentially subject to more regulation.
The application of existing laws to digital communications can often provoke mixed feelings. In one high profile case, following the August 2011 riots in England, a 22-year-old man was sentenced to four years in prison after he had used his Facebook account to design a page titled “The Warrington Riots”.Footnote 2 The defendant was drunk when posting this content and, after several hours, took the site down with an apology stating that it had been a joke (although the appeal court noted the retraction occurred only after it was clear the police were searching for him).Footnote 3 No riot took place as a result of this post. While there is clearly no right to make statements advocating a riot, some commentators felt that the length of the sentence was disproportionate.Footnote 4 There are other high profile examples, such as the “Twitter joke trial” in which a man was convicted of a criminal offence after making a joke about blowing up an airport. In many cases, the speaker has overstepped a boundary and should have used the digital media with greater responsibility. The legal responses can, however, seem heavy-handed for what might have been a statement made with little thought while the speaker sat at a desk at home. Words typed in seconds followed by hitting the enter key can lead to a criminal record or costly civil litigation.
The cases that attract criminal prosecution or defamation claims are often thought to raise few free speech issues. Much of the European Court of Human Rights' jurisprudence on Article 10 of the European Convention on Human Rights (ECHR) is focused on “high value” speech that contributes to discussion of matters in the public interest. Many of the digital speech cases are not concerned with matters of public importance. However, it will be argued that informal and spontaneous communications, which can be made by anyone through digital communications, should be afforded some protection. While much discussion on free speech concerns the “value” of the expression, this article will contrast the different “levels” of expression. The term “high level” is adopted here to refer to expression that is professionally produced, aimed at a wide audience, well resourced and researched in advance. By contrast, “low level” refers to amateur content that is spontaneous, inexpensive to produce, and often akin to everyday conversation. With “low level” communications, lower standards of responsibility will normally be expected of the speaker than of a professional mass media entity. This does not point to a cyber-libertarian conclusion that amateur expression online should have no constraints. Some very real harms can flow from digital communications, affecting people in a way that offline conversations cannot.Footnote 5 If a type of speech is so harmful that it requires a legal response, the laws should be framed in a way that protects the freedom to converse and any controls should be proportionate. Several alternatives will be considered, including raising the threshold to determine when speech causes actionable harm and the use of regulatory sanctions.
Before considering such approaches, the next sections will examine the way that laws traditionally governing particular types of expressive activity have converged in relation to digital communications. Three examples will be considered: media laws, public order laws and laws regulating targeted communications. Each of these laws was developed in response to problems in a specific setting and tended to govern distinct activities, but all are now applicable to some digital communications. The examples are not exhaustive, but are used to illustrate the way laws that once applied to different expressive activities have converged, creating considerable overlap among the various causes of action, and, it will be submitted, expanding the reach of those laws into people's everyday communications.
I. Different legal sectors
A. Media law
The first area of law that can affect digital communication is media law, in particular actions for defamation and misuse of private information.Footnote 6 There is already a body of case law in which media laws have been applied to content on the internet. In Keith-Smith v Williams a parliamentary candidate for UKIP successfully sued a participant in an online discussion group for suggesting the claimant was a Nazi, a racist and had been charged with several sex offences.Footnote 7 In Applause Stores an action was brought in both libel and misuse of private information following the publication of a false Facebook profile.Footnote 8 In Cairns v Modi, £90,000 was awarded in libel damages after defamatory allegations of cricket match fixing were posted on Twitter (seen by 65 people) and told to an online journal (seen by 1000 people).Footnote 9 Media laws are still developing in this context and have yet to grapple with some of the more difficult issues, such as whether a person providing a link to a defamatory article is deemed to be a publisherFootnote 10 and the circumstances where the author of a post will be liable for republication on other websites.Footnote 11
That media laws apply to digital communications seems unsurprising. Statements made on websites or social media have the potential to injure reputations and violate expectations of privacy. The laws have never been applied exclusively to the mass media, and successful libel actions have been brought for statements made in leaflets and letters.Footnote 12 However, libel and misuse of private information are typically described as media laws and are seen to affect the mass media more often than other types of speaker.Footnote 13 While there are no official statistics on the identity of defendants in libel claims, the mass media are frequent defendants.Footnote 14 That much is to be expected given the mass media are in the business of producing high profile expression to a large audience and have deep pockets. The laws are often justified as a way of curbing the abuse of mass media power. Defenders of the existing libel laws often point to the fact that newspapers have the capacity to tear down a person's name in front of a mass audience, and the individual has little chance to reply or set the record straight.Footnote 15
The mass media have devised strategies to negotiate the legal risks. Media companies will train journalists in media law basics and invest in legal advice and in-house lawyers. The experienced journalist also learns to navigate the legal waters, which can mitigate the potential chilling effect. Through litigation the mass media have secured victories such as Reynolds privilege, which provides a defence in defamation cases where the subject matter of the article is in the public interest and the defendant acted as a “responsible journalist”.Footnote 16 While the Privy Council has stated that Reynolds can protect expression in any medium and not just the press and broadcasters, the defence is conditional on the speaker fulfilling the requirements of responsible journalism.Footnote 17 That condition specifically refers to the norms of the mass media professional, and as a result the defence will offer limited protection to the amateur digital speaker who is not acquainted with those journalistic practices. The safeguards for free speech were shaped by years of litigation involving the mass media and were thereby framed with the mass media, rather than the casual amateur, in mind.Footnote 18
Before digital communications, the non-media speech that would be subject to defamation law would often take place in a public or formal setting that would come to people's attention, such as a formal meeting or a leaflet. The bulk of everyday communications would normally fall below the radar and escape legal sanction, or would be actionable as slander rather than libel (so claimants would face more demanding standards).Footnote 19 With the growth of digital communications, there is more content being published than before, which means there is more for the media laws to apply to. This content includes not only professionally produced content, but all the amateur content, conversations and comments that are made by users. Media law now has the potential to play a bigger role outside the context of the mass media.
B. Public order laws
Public order laws form the second category of control on expressive activities, some of which can apply to digital expression. The convictions during the summer of 2011 for inciting riots on Facebook provide a high profile example. That might be an unusual case, but other public order laws have also been applied to messages on the internet and social networks. In R v Sheppard, the defendants were convicted for publishing racially inflammatory material on the internet, including a pamphlet titled “The Holohoax”.Footnote 20 In S v D.P.P. a man was convicted under s.4A of the Public Order Act 1986 after posting a photograph of a laboratory worker on a website with the caption “C'mon I'd love to eat you! We're the Covance Cannibals”.Footnote 21 In another case, a supporter of Crawley Town F.C. was given an eight-week suspended sentence following a conviction under the Public Order Act after he was seen making gestures referring to the Munich air disaster in a video posted on YouTube.Footnote 22 In March 2012, a 21-year-old student was sentenced to 56 days in prison for a racially aggravated s.4A offence, after sending abusive messages on Twitter when the footballer Fabrice Muamba collapsed on the field.Footnote 23
The facts of Sheppard and S do not concern the spontaneous and conversational expression that is the focus of this article. The significance of those cases for the present purposes lies in the application of public order laws to certain types of digital expression. Section 4A of the Public Order Act 1986 provides that it is an offence for a person to use “threatening, abusive or insulting words or behaviour” or display “any writing, sign or other visible representation which is threatening, abusive or insulting” which causes “that or another person harassment, alarm or distress” and which the speaker intends to have that effect.Footnote 24 In S v D.P.P., the district judge stated that “any person who posts material on the Internet puts that material within the public ambit” and can thereby be liable when that material causes harassment, alarm or distress.Footnote 25 The offence was drafted to exclude situations where the defendant and the recipient of the communication were inside a “dwelling”, thereby providing some protection for private speech. However, digital speech is not confined to a dwelling and, if it is deemed to be within the “public ambit”, words written inside the home on a computer or mobile phone will now fall within the reach of this law. While the requirement of intent to cause harm provides a safeguard by limiting the reach of the s.4A offence, there is still a question whether the requirement that the speech be insulting and cause distress sets too low a threshold. A final safeguard is the principle in Dehal that a prosecution under s.4A will not be consistent with Article 10 of the ECHR unless it is necessary to protect public order.Footnote 26 This principle might provide some additional protection to digital expression, which might be thought less likely to pose a threat to public order than a statement at the site of a protest. However, the way that principle applies to digital expression has not been discussed in the cases so far. Furthermore, if the Dehal principle only applies to speech that is protected under Article 10, then its reach will depend on the way Article 10 is interpreted. As will be explained below, the Article 10 jurisprudence has traditionally focused on political speech and expression on matters in the public interest, and provides a more limited level of protection to other types of expression.
Section 4A is just one type of public order law that can apply online. In addition, the Public Order Act 1986 includes offences where expression is likely to incite hatred on the grounds of race,Footnote 27 religion and sexual orientation.Footnote 28 While the harm in these offences is more serious, in that it refers to “hatred” rather than “harassment, alarm or distress”, there is no need to show that actual hatred has been stirred up.Footnote 29 The broadest of the public order offences is under s.5, which is cast in similar terms to s.4A, but does not require the expression to actually cause harassment, alarm or distress, nor does it require that the speaker intends his words to have that effect. However, unlike the other public order offences, s.5 requires that the words or behaviour be “within the sight or hearing” of the victim. It is not clear whether posting material on the internet can be sufficient to bring it within a person's sight or hearing, or whether the clause in s.5 refers to physical proximity.Footnote 30
The public order laws are primarily about standards of behaviour in public. The law seeks to manage the competing rights and interests of people sharing public spaces. Speech in public places is harder for people to avoid, and face-to-face communication can have a different impact on the listener. That is what makes public protest so powerful, but also what makes some legal control necessary. Public order laws were initially drafted to exclude certain private communications, such as a domestic row or a phone call from one house to another. However, the law does not apply exclusively to public spaces and can apply to the press, and some far-right newspapers and magazines have been prosecuted for public order offences. Such prosecutions of the media tend to arise in extreme cases, for example where the media content stirs up racial hatred.Footnote 31 The public order controls on expression primarily target activities “on the ground”, in which there is physical proximity between the speaker and listener.
The context in which the law has developed allowed those most likely to fall foul of the offences to develop ways to negotiate the legal risks. Protestors will often discuss arrangements with the police in advance. In many public order cases, the action takes place in the presence of the police or in a place where the police are soon in attendance. The police can often provide some warning to those engaging in threatening and abusive behaviour, and prosecutions will arise where those warnings are not adhered to. The gradual application of public order law to digital speech extends the law to a new context where users may have different expectations and feel removed from the consequences. The trend extends this area of law to communications that the speaker may think of as semi-private, or at least not taking place in a public space. So far, prosecutions have normally been brought against the more extreme and intentionally harmful statements made online. However, those applications of the law confirm that digital communications are within the remit of the public order laws.
C. Targeted communications
The third category concerns laws designed to regulate harmful messages sent to a specific individual or set of individuals. This category looks at laws that were initially designed to stop poison pen letters, offensive phone calls and stalking, such as the Malicious Communications Act 1988 and the Protection from Harassment Act 1997, but which have since been applied to digital communications. For example, the Malicious Communications Act 1988 was enacted to tackle the problem of poison pen letters.Footnote 32 The Act provides that it is an offence to send to another person a communication which conveys (i) “a message which is indecent or grossly offensive”, (ii) “a threat”, or (iii) “information which is false and known or believed to be false by the sender” if one of the sender's purposes is to “cause distress or anxiety” to the recipient or any other person the sender intends the contents of the message to be communicated to.Footnote 33 It is unclear from the terms of the statute whether posting content on a website, for example, constitutes sending a message to another person.Footnote 34 There are, however, reports that the Act is being enforced in relation to some digital communications. In July 2011, a man received a police caution under the Malicious Communications Act for making a false claim in a blog post that a contestant in the television show Britain's Got Talent “had been groomed for stardom by Simon Cowell for two years”.Footnote 35 In February 2012, a 29-year-old man was given a four-month suspended sentence for an offence under the Malicious Communications Act after an online discussion about football got out of hand and he sent a message on Twitter referring to Newcastle United as “Coon Army”.Footnote 36 According to news reports, the man was arrested after a journalist saw the messages and reported the matter to the police.Footnote 37
Section 127 of the Communications Act 2003 is the broadest provision in this category, making it an offence to send or cause to be sent through a public electronic communications network “a message or other matter that is grossly offensive or of an indecent, obscene or menacing character”. The same section also provides that it is an offence to send or cause to be sent a false message “for the purpose of causing annoyance, inconvenience or needless anxiety to another”. The defendant must be shown to have intended or been aware that the message was grossly offensive, indecent or menacing, which can be inferred from the terms of the message or from the defendant's knowledge of the likely recipient.Footnote 38 The offence is committed by sending the message; there is no requirement that any person see the message or be offended by it. Given the breadth of the legislation, there have been suggestions that a crime could be committed if two people expressed racist opinions in strong terms over the telephone.Footnote 39 Similarly, the offence could in theory apply to some adult chat lines, which would be a new inroad into private communications.Footnote 40 Broadcasters are, however, exempt from this provision,Footnote 41 suggesting the offence is not aimed at mass communications.
While doubts have been expressed as to whether creating a webpage or social network group constitutes “sending” a message,Footnote 42 there are a number of examples where s.127 has been used against internet communications. The best known instance of this offence being deployed against an online speaker was the so-called “Twitter joke trial”, in which a 26-year-old defendant was convicted for posting a facetious remark on Twitter after his flight to Belfast had been cancelled. The message, which was placed in the publicly accessible part of Twitter, stated: “You've got a week and a bit to get your shit together otherwise I am blowing the airport sky high!”. A member of staff at the airport saw the “tweet” after conducting a search with the airport's name and then contacted the police. The defendant argued that he intended the comment to be a joke. His appeal was rejected, with the court finding it “impossible to accept that anyone living in this country, in the current climate of terrorist threats, particularly at airports, would not be aware of the consequences of his actions in making such a statement”.Footnote 43
There are other examples. In 2011, a man was convicted under s.127 after posting messages on Facebook about the M.P. Gregory Campbell stating that he “should get a bullet in the head” and that he was a “scumbag”. The defendant said he was expressing outrage at the M.P.'s remarks about the Saville Inquiry on the Bloody Sunday killings in Northern Ireland, and claimed that it was a “throwaway comment” not intended to cause harm.Footnote 44 A 21-year-old man was convicted “of sending an offensive or indecent, obscene or menacing electronic communication” after he set up a Facebook group titled “Pakis Die” and posted a message stating “Help me shoot all the Pakis”.Footnote 45 Of all the laws discussed, s.127 appears to operate as a general criminal control on digital communications.
D. Converging sectors
The different laws discussed above are similar in so far as the basic ingredients of the offences and causes of action are broadly worded and can be applied flexibly as circumstances change. However, all three categories of law traditionally regulated different types of activity and with different harms in mind. While some of the laws were drafted to avoid overlaps,Footnote 46 the categories are not rigidly sealed. For example, the law of defamation can be applied to protestors and public order laws can be applied to the mass media. Similarly, the Protection from Harassment Act 1997 has been applied to protesters and the press.
The trends discussed above are leading to greater convergence in these laws. In relation to digital communications, the term convergence is normally used to describe the coming together of different communication technologies. Newspapers publish audio-visual content, which was once the province of the broadcast media. Conversely, broadcasters provide written articles on their websites, producing content similar to a newspaper. Individuals can now post content to an audience that previously would have been available only through the mass media, leading to a convergence in audience and producer. New forms of protest and collective expression are arising online, from online petitions to illegal denial of service attacks. These trends lead to a level of convergence in the forms of media. Alongside this trend, the laws that regulate the different types of activity are also converging. Now that more people publish their words online and make them available to the public or a section of the public, media laws can affect casual comments made in an online conversation. Public order laws are not just concerned with what goes on in physical public spaces, but can now apply to statements typed in a person's home and not intended for a wide audience. The laws that regulate telephone calls and poison pen letters are appropriate for controlling digital hate and harassment campaigns, but are now being applied to a wider range of speech than initially envisaged, including some communications to the world at large. Online speakers have to navigate a range of different laws that were initially developed to deal with different activities. The changes in communication may have empowered people, but at the same time, people's expressive activities face regulation coming from different directions. There is no general objection to the laws of the offline world applying online, but these developments provide a reason to take stock and evaluate the balance struck by those laws between expression and the harms it may cause.
Not only do these developments pose possible problems of over-criminalisation and over-regulation, but the current approaches in law also fail to provide a satisfactory method of combating the harms caused by some digital communications. Police and prosecutors do not have the resources to deal with every communication that people have cause to complain about. Given these difficulties, the danger exists that the laws are being applied selectively in a way that is hard to predict. The extent of enforcement may change in future, if dealing with such communication becomes a higher priority for the police. However, the current approach can leave speakers over-exposed to a number of laws, while failing to give people sufficient protection from the harms that can be caused by such expression.
II. Stored and searchable
Digital communications are not only subject to a wide range of laws, but any unlawful expression made online is subject to new methods of detection. When a comment is made online, it is recorded in a form that can be searched and examined long after the statement has been made, even if the message took only seconds to write. Digital communications are more “persistent”.Footnote 47 Digitised communications also allow the expression to come to the attention of people beyond the speaker's intended audience. This can arise where a person searches a name or topic on the internet and encounters a statement. While a blog or comment on a social network may have few followers, its content may be returned in the results of a search query. This can make the material especially harmful, as the information is most likely to be seen by those people who are specifically interested in the subject (and are interested enough to make a search). Alternatively, recipients of the original message may forward or re-post it, thereby bringing the message to a wider audience. In such circumstances, the expression remains accessible even if the author chooses to delete a message or circulate it to a smaller number of people. The speaker has less control over the audience.
It is commonplace to focus on the harms that such dissemination and recording can cause. There are increasing concerns about cyberstalking and calls for new laws to be passed to tackle the problem.Footnote 48 Gossip and unpleasant comments can become accessible to the world at large and made available for long periods.Footnote 49 This can affect people's social and professional prospects, in addition to causing distress. Even where statements are not malicious, a teenager might volunteer information about himself on a social network, only to suffer deep embarrassment later in life. As Daniel Solove writes:
There is a grave danger that people will enslave themselves and each other by making past mistakes permanently and readily available for the rest of their lives. The long-standing value of giving people a second chance, of allowing people to reinvent themselves, might soon become a relic of a bygone era.Footnote 50
An obvious application of this problem arises where prospective employers search the names of job applicants, and discover all the various negative comments that have been made about that person. While this is an important point that highlights the problems of digital expression, attempts to control some of these harms through criminal sanctions or expensive litigation have the potential to deprive people of second chances in a different way. The increasing application of the laws to digital expression means that some speakers might have to live with a criminal record or face high litigation costs due to a throwaway comment or idiotic remark.
The capture and storage of information facilitated by digital technologies can allow for greater monitoring of expression. This applies even to expressive activities in the offline world. For example, protests are frequently filmed, allowing the participants' behaviour to be reviewed long after the event. Conduct that does not face police sanction or attract attention at the time of the protest can thereby be the subject of a prosecution after the event.Footnote 51 The same is true for the content written on blogs, websites and messenger services. An online message the speaker intends to be received by friends now has the potential to be viewed by the police. The monitoring is more likely to be undertaken by other individuals who then report the statement to the police or, in some cases, bring a legal action themselves. For example, twenty years ago, if a person made a racist remark in a social setting, the chances of the police ever hearing about it were small. Now a recipient can direct the police to the statement made online and allow them to witness it first hand. The fact that the actions, which may be spur of the moment and impulsive, are recorded and can be viewed widely makes a complaint more likely. This development contributes to the problem of the “digital panopticon”, in which people believe that they might be under surveillance (whether by state or private actors).
One response to this line of argument is that it is no bad thing if more crimes are being detected. The monitoring merely ensures that laws are enforced. The difficulty with that response is that the laws were drafted with particular contexts in mind. The broad terms of the laws allowed flexible application and avoided loophole problems in those contexts, while casual conversations and comments normally occurred off the radar of enforcers. Even in those circumstances, there were objections to broadly worded laws and over-reliance on prosecutorial discretion. The application of the law to digital communications potentially heightens the problem, as the broad terms of the law apply to an even broader range of expression. Facing new types of complaint about digital communications, the authorities have an incentive to apply the existing laws to new situations.
III. High and low value speech
It has been argued above that the application of various types of law to digital communications potentially regulates some “everyday” expression that previously fell below the radar of enforcers. One question is why the regulation of such expression should be a cause for concern. While such laws might interfere with the right to freedom of expression under Article 10 of the ECHR, not every such interference will violate that right. In deciding the level of protection to be afforded to a particular type of expression, the courts consider its “value”. Those types of expression that are of the highest value will be granted the strongest protection, while less rigorous standards of review will apply to “lower value” expression. While the terminology of high/low value has been developed in US scholarship on the First Amendment,Footnote 52 a similar approach can be seen in the distinctions drawn by European and UK courts between different types of speech: political, artistic and commercial expression, celebrity gossip, pornography and hate speech.Footnote 53
Of all the categories, political speech is deemed to be of the highest importance.Footnote 54 The European Court of Human Rights has on numerous occasions stressed that under Article 10 “there is little scope […] for restrictions on political speech or on debate on questions of public interest”.Footnote 55 Similarly, Baroness Hale stated in Campbell that of all the types of speech that are “deserving of protection in a democratic society”, “[t]op of the list is political speech”.Footnote 56 Baroness Hale explained the high value of such expression:
The free exchange of information and ideas on matters relevant to the organisation of the economic, social and political life of the country is crucial to any democracy. Without this, it can scarcely be called a democracy at all. This includes revealing information about public figures, especially those in elective office, which would otherwise be private but is relevant to their participation in public life. Intellectual and educational speech and expression are also important in a democracy, not least because they enable the development of individuals' potential to play a full part in society and in our democratic life.Footnote 57
While political speech has the highest value, the Strasbourg and the domestic courts look at the different categories on a continuum. After political speech comes artistic speechFootnote 58 and then commercial expression,Footnote 59 which attract intermediate protection. Celebrity gossip attracts lower protection still,Footnote 60 while beneath that fall pornography,Footnote 61 gratuitous personal attacksFootnote 62 and hate speech,Footnote 63 which attract little, if any, protection. Under this approach, the harms caused by certain types of expression are only worth tolerating where the speech is of high value. This may provide some protection for digital communications that relate to politics and public affairs, but much everyday speech posted online will not fall into a high value category and will receive little protection under the current jurisprudence.
One response to the arguments above might be to claim that the categories of speech identified by the courts are misguided and that some categories of speech should not be dismissed as “low value”. Diane Zimmerman has argued that gossip, while characterised as trivial by many, can act as a form of social glue that helps create bonds between people.Footnote 64 Zimmerman writes that “gossip, and the rules governing who participates and who is privy to what information about whom, helps mark out social groupings and establish community ties”.Footnote 65 The point can be extended beyond gossip to conversation in general. The chance to engage with others, which can include poor taste jokes, ill-judged comments and some offensive remarks, allows the speaker to decide how to present himself to society. It is a way of making social connections, and people's reactions to such comments will provide a route to discovering social norms. The argument, however, only goes so far. Even if it is accepted that there is some social value in those communications, it does not point to a very strong protection of such speech where it is found to cause significant harm to others.Footnote 66 While this argument might support limited protection for certain types of conversational speech, it will be argued below that some protection should be afforded to such speech independent of any value it might have.
A further response to the limits of the categories-based approach to expression might be to argue for a broader democracy-based justification for free speech that gives robust protection to contributions to “public discourse”.Footnote 67 While the protection of expression under such an approach is broader than the classic focus on political speech, what types of expression count as “public discourse” is unclear.Footnote 68 A narrow approach, which excludes some private interest speech, gossip, trivia and in some cases personal abuse, would under-protect conversational expression.Footnote 69 As the approach seeks to give robust protection to “public discourse”, a broad definition of its ambit would over-protect that speech if it does not permit content-related controls. The argument for protecting conversational and spontaneous speech here does not demand an absence of any control, but simply calls for safeguards and for any measures to be proportionate.
To summarise, the approach that looks at the value of expression plays an important role in the Article 10 jurisprudence. That approach should not be abandoned, and speech such as gossip and trivia need not be given the same level of protection as political expression. However, the traditional approach under Article 10 should be supplemented with an additional principle to grant some limited protection to conversations and informal expression that are thought to have little value. The protection should not be absolute, but should ensure restrictions are proportionate and do not hamper people's day-to-day conversations. To explain why such expression is worthy of this protection a separate distinction will be drawn between high and low level speech.
IV. High and low level speech
The context in which the expression takes place can be considered by looking at the “level” of expression. A distinction can be drawn between two ends of a spectrum. “High level” expression refers to that which is communicated to a mass audience, normally through a professional entity or one that has such an audience on a regular basis. “High level” expression, such as that on television, radio, newspapers and mass mailings, is normally intended to be widely disseminated, well prepared and researched in advance, presented with authority and supported with considerable resources to evaluate the legal risks. At the other end of the spectrum is “low level” expression, such as conversations taking place between friends in the street, over the telephone or in the pub. Such speech is often casual, with little prior thought about the risks. It is deemed to be low level as it undergoes little preparation, is inexpensive, normally reaches a small audience, and is a form of expression that anyone can engage in. While the high value/low value distinction is based on the content of expression, the high level/low level distinction is based on the context in which the speech is made.
Much debate exists on how digital speech should be characterised and which analogies, if any, are appropriate. The paradigm is often said to be “many-to-many” or “mass self-communication”.Footnote 70 In practice, a wide variety of models can be found in digital communications, including activities ranging from high to low level (and much in between). The mass media have an online presence, with websites and distribution through iPlayer, YouTube and various social networks. There are also some high level communications that are to be found online only, such as news sites like the Huffington Post. The distinction does not rest on the types of digital platforms. While content on social networks such as Twitter and Facebook is often characterised as conversation, those platforms can include high level content from media companies and professional advertising campaigns. High and low level communications are not two rigidly distinct categories but mark two ends of a continuum, with much in between. For example, a blog by a professional journalist may fall somewhere between the two, in so far as it may include spontaneous comments and conversation, while being backed by professional resources and experience. Much content found online will nonetheless fall towards the “low level” end of the spectrum, in the sense of being produced by a lone individual or small group, aimed at a small audience and casual in manner. Other content may involve a greater level of preparation, such as a lengthy blog post, but will not have had the level of checking and assessment normally found in mass media content.
There are a number of relevant factors when locating the level of a speaker, some of which are already taken into account in the Article 10 jurisprudence on freedom of expression. An obvious starting point is to look at the size of the audience. This factor cuts both ways. A large audience can point to stronger protection for the speaker. The fact that the high level speaker can bring matters to national attention stresses the importance of ensuring this channel of communication is not blocked. This explains why the courts so often stress the importance of the press in disseminating information and acting as a public watchdog.Footnote 71 However, the Strasbourg Court has stressed that the mass media have “duties and responsibilities” on account of their level of influence and capacity to shape the way information is assessed.Footnote 72 Such reasoning suggests that a large audience does not always point to strong protection, but can lead to the Article 10 protection becoming conditional on the fulfilment of certain standards. With a wide audience, the mass media have greater potential to damage a person's reputation, prejudice the administration of justice, cause public disorder or stir up racial hatred.Footnote 73 The size of an audience is also a factor relevant to whether a restriction is proportionate.Footnote 74 The more onerous regulations on the broadcast media, for example, are appropriate for those with a regular mass audience, but not for individual speakers.Footnote 75 A small audience can in some cases reduce the level of harm, which can be a factor to be taken into account by the court. For example, in Alinak v Turkey, when considering the proportionality of a sanction, the Strasbourg Court took the small size of the audience into account in finding that a publication would have a limited impact on public order.Footnote 76
While still relevant, the size of the audience alone is of limited assistance when assessing the nature of a digital communication. The growth of user generated content online has led to a blurring of the boundary between mass and individual communications. An individual can reach a mass audience in a way that was not possible before. On some occasions, a communication that would normally be low level can reach a very large audience, for example where a post on a website gets a link from a national newspaper. There are, however, still differences in the level of responsibility that can be expected of the speaker who gains an unexpectedly large audience. Where such attention is not anticipated and does not come on a regular basis, the expression should arguably fall at the lower end of the spectrum. The risk that a conversation might go viral should not require that the speaker take a level of care normally expected when appearing on Newsnight or writing for the Times. Where a person posts a comment on a forum with a large audience, for example commenting on a newspaper website, the speaker should be aware that the speech may come before a wider audience. This alone does not make the comment high level and it does not enjoy the same status as the article itself. The commenter will not know how many people are likely to see the comment, as some readers will not look at the comments at all, while others may just read a selection. Similarly, in a very popular Twitter stream, the person sending the message may not know just how many people will notice the message or whether it will be buried in a mass of comments. The focus should not just be on the audience of the platform, but also on the likely attention the comment will gain.
The opportunities to prepare the content are also relevant to the high/low level distinction. The Strasbourg Court has considered this factor, looking at whether there was any “possibility of reformulating, perfecting or retracting” a statement before putting it into the public domain.Footnote 77 Comments made on live television or radio will be given more leeway than statements made on a pre-recorded, prepared show.Footnote 78 However, the court can also take into account the experience of the speaker, as an established broadcaster will be held to a higher standard than a member of the public on such a show.Footnote 79 Furthermore, even on a live broadcast, a television or radio station will have producers who can cut or intervene if a speaker is in breach of the expected standards.Footnote 80 While these points are made as mitigating factors for “high level” mass media entities, these arguments should have greater force in relation to the low level speaker. People engaging in a conversation or writing a tweet or blog post are often spontaneous, giving little thought or preparation to the expression. This in turn shapes the expectation of the audience. If something appears in the text of a formal news article, then the audience will expect it to have been researched and attribute some level of authority to the content. By contrast, a part of a website that is open for anyone to post content to will be read in a different way. These factors help to distinguish the comments section of a newspaper from the article itself. Even where both are read by a wide audience, the audience knows the comments form part of a conversation, in which people respond to one another and make relatively unguarded statements.Footnote 81
While greater thought and preparation is generally to be encouraged, people should be given leeway to say things they later regret without running into serious legal difficulties. This factor is what distinguishes the insult delivered in the heat of the moment from the deliberate poison pen letter or harassment campaign. Once a spontaneous remark has been made, the speaker can often remove or edit what has been said. This does not mean people should be required to continually review all statements for as long as they remain accessible. However, there may be a case for notice requirements, in which a failure to remove something once informed by the relevant authorities could lead to liability.
The speaker's resources and experience are also relevant to the high/low level distinction. Again, this can cut both ways. Those working in the mass media make regular judgments about what is of public importance and what details need to be made public. While the courts need not show deference to editorial judgment, some leeway is granted to that expertise.Footnote 82 On the other hand, with greater resources and experience, more may be expected of the high level speaker. These factors also suggest that the high level speaker will normally be in a stronger position to negotiate the various legal controls. The national media will have legal departments to assess the legal risks and help editors navigate the media laws. Furthermore, as a regular and high profile publisher, the mass media will establish an ongoing relationship with regulatory authorities, the police, politicians, celebrities and their agents. Such contacts will give the media an additional level of information indicating what types of publication will attract prosecution or a private claim. Given these advantages, highly technical and complex laws are more likely to survive the “prescribed by law” standard under Article 10 if primarily aimed at “specialists” such as the professional mass media, who can reasonably be expected to look up and take advice on relevant laws.Footnote 83 By contrast, greater clarity will be required of those laws that apply more generally and extend to low level speakers who do not have access to these benefits.
That communications are amateur, cheap, spontaneous and made with an expectation of a limited audience are all reasons for limiting the level of responsibility demanded of the speaker by law. There is an additional reason for protecting those communications that are spontaneous and made in the heat of the moment, which is the close connection with freedom of thought. When a person says whatever is on his mind, the gap between the thought and its expression to the outside world is minimal. It is not mediated by further reflection, research or verification. Consequently, when people suffer severe penalties for merely venting whatever they happen to be thinking about, it comes close to an attack on one's thoughts. Along these lines, freedom of expression begins to overlap with privacy interests, namely that a person's thoughts should be off-limits to regulation. There is, of course, a clear difference between merely thinking something and communicating it. There is also a limit to the extent that one can claim privacy in relation to thoughts that have been voluntarily disseminated and made widely accessible. There is a connection between low level speech and freedom of thought, but the two are not the same. For this reason, the argument advanced here is not a fully-fledged autonomy-based justification for protecting speech. For the present purposes it is assumed that there may be good reason to seek to control the dissemination of some messages. The issue lies in giving people leeway to vent their current thoughts, while seeking to address the possible harms that can be caused. In striking a balance between the two, it is argued here that any such measures should be proportionate and not impose lasting sanctions on things said in the heat of the moment.
The continuums between high/low value expression and high/low level expression overlap. Media reporting on matters in the public interest will be high level and high value. A national newspaper publishing gossip or trivia might be deemed to be high level and low value. The argument for protecting low level speech supplements the traditional value-based approach to free speech. Consequently, low level expression that is of high value, such as informal conversations in an online forum about matters of public policy, will generally be protected under the traditional priority for political speech. The additional protection for low level speech will therefore be most important in cases where the expression is low level and low value, which will be the focus of the remainder of this article.
V. Protecting the freedom to converse online
The arguments advanced do not point to absolute protection for low level expression. There is clearly no right to make a genuine death threat or deliberately deface a memorial page on a social network. Such comments have harmful consequences that the law may be expected to guard against, in some circumstances justifying criminal sanctions. The laws restricting poison pen letters and malicious communications have always applied to low level speech and equivalent controls are warranted for digital communications. The problem is that the existing laws dealing with such communications can be overly expansive and catch statements that might not warrant such serious treatment. Any such law should be tailored to deal with the most serious and deliberate cases of harassment or bullying. Putting the extreme cases aside, there are some types of digital expression that do not deserve a heavy penalty, but which may be thought to create serious harms that require a proportionate response.
A libertarian response may be that fewer legal controls are necessary to address such harms. Given that low level speech takes place in egalitarian conditions, being cheap and accessible, one argument is that rather than seeking a legal remedy, it may be just as easy to set up one's own website to set the record straight. Communications on a blog, forum or chat-room can form part of an online conversation (albeit a heated one) to which it is easier for people to reply than to pursue damages in court. These arguments do not, however, hold for all the harms that can arise online. A person may not wish to refute defamatory statements through a public reply. A person may not feel that they should have to justify themselves in public. Even if a person does wish to reply to an attack, there is no guarantee that the message will reach the target audience or be believed.
If it is accepted that there is a role for some legal controls on digital expression, the question arises as to how low level expression should be protected. The arguments in the previous section point to the protection of low level speech as an aspect of the free speech principle, but do not provide a determinate rule, the application of which can be identified in advance. The argument does not demand that fine distinctions be drawn between the precise levels of speaker, but asks that laws regulating speech provide sufficient leeway for speakers at the lower level and ensure any restrictions are proportionate. The principle might be directed at legislators, especially when enacting laws that regulate expression. This is particularly important given the continuing demands for new laws to regulate speech, for example in relation to privacy, cyber-stalking or bullying. The need to protect the freedom to converse can act as a reminder to legislators to include the necessary safeguards to ensure casual comments made in the heat of the moment are not criminalised. There is also a role for the courts in protecting low level expression, through the interpretation of laws regulating expression. Raising the threshold of harm or seriousness necessary to found a cause of action or criminal prosecution is one example that will be considered below. What follows is not exhaustive and explores just some possible approaches.
A. Taking account of the context
Low level speech can be accommodated by interpreting existing laws to take account of the context of the expression, in order to assess the level of harm caused and the responsibility expected of the speaker. Such an approach might allow courts to consider the length of time for which the material was posted, whether it was intended for widespread distribution, the level of preparation and how seriously it was likely to be taken. Elements of this strategy can be seen in libel law. The courts have a power to strike out libel actions that would be an abuse of process, where the damage to reputation is minimal.Footnote 84 Such circumstances can arise where a posting on a website has been seen by only a small number of people, although this factor alone is not decisive. The courts also have to decide whether a defamatory statement is actionable, or amounts to mere “vulgar abuse”, in which it is obvious to the audience that the words were spoken in the heat of the moment and not meant to be taken seriously. Normally it will be easier to find that the spoken word amounts to mere abuse, while written words tend to require greater reflection.Footnote 85 However, in Smith v ADVFN, Eady J. stated that comments on an internet bulletin board were:
like contributions to a casual conversation (the analogy sometimes being drawn with people chatting in a bar) which people simply note before moving on; they are often uninhibited, casual and ill thought out; those who participate know this and expect a certain amount of repartee or “give and take”.Footnote 86
Despite the written form, the words were therefore more like a slander than a libel. In such circumstances, Eady J. found that it would be obvious that the statements were not meant to be taken seriously.Footnote 87 Similarly, in Clift v Clarke, Sharp J. declined to make an order for the disclosure of the identity of people posting comments on the Daily Mail website, on the grounds that the comments were mere “pub talk” and it would be “fanciful to suggest any reasonable sensible reader would construe them in any other way”.Footnote 88 Sharp J. also noted that the comments did not form a “concerted and damaging campaign”.Footnote 89 There are limits to this approach. It is not always clear what types of comment will fall into the category of casual conversation or “pub talk”.Footnote 90 These limits only filter out those statements that readers would clearly not take seriously, and other comments on social media have been the subject of successful libel claims. Nonetheless, these developments in defamation law show that there is already recognition that some of the casual conversations found online do not warrant anything as heavy-handed as an expensive civil law action.
Approaches similar to that taken in defamation law could be applied in other areas of law. In the law of public order or in targeted communications, the court could incorporate a higher threshold to decide if a statement is, for example, “threatening, abusive or insulting” or “menacing”. When interpreting the Public Order Act, the courts have indicated that s.3 of the Human Rights Act 1998 requires terms such as “insulting” to be construed in a way that is compatible with Article 10.Footnote 91 In relation to the Malicious Communications Act 1988, Dyson L.J. stated in Connolly that Article 10 rights can be protected by “giving a heightened meaning to the words ‘grossly offensive’ and ‘indecent’ or by reading into s.1 a provision to the effect that the section will not apply where to create an offence would be a breach of a person's convention rights”.Footnote 92 Similarly, the term “menacing” under s.127 of the Communications Act 2003 could be interpreted to cover only those messages that convey a threat that creates a fear in the recipient “that something unpleasant is going to happen”, and where the speaker intends the message to have that effect.Footnote 93 By interpreting the law in such a way, the threshold for the speech crime or private law action could be raised to take into account the context in which it takes place, allowing greater give and take than would normally be the case in other settings. Certain words or offensive language may be deemed to be harmful if used in a letter or phone call to a specific individual, or if shouted in a town centre, but may be taken less seriously in certain digital communications.
There are a number of limits to such an approach. First, while the courts have referred to raising the threshold in relation to the Public Order Act to protect political speech, critics argue that it has made little difference to the outcome of the cases.Footnote 94 Secondly, it introduces a degree of uncertainty if the threshold is determined on a case-by-case basis and is sensitive to the particular facts. While uncertainty is a common feature of many laws and does not provide a decisive argument, it is important to note that a chilling effect may still result if speakers do not know where the boundaries lie. This is of particular importance for low level speakers who have limited access to legal advice, but who use the digital media on a daily basis.
A threshold approach to protecting expression can also be taken when deciding whether to prosecute. In making that decision, the Crown Prosecution Service will consider whether there is a “realistic prospect of conviction” and whether “a prosecution is required in the public interest”.Footnote 95 While the assumption will be in favour of bringing a prosecution where the evidential threshold has been met, factors pointing away from a prosecution include where the penalty is nominal or where the harm was minor, both of which will often be present in the types of heat of the moment speech cases discussed earlier. The decision made by prosecutors is a central control mechanism that stops potentially trivial applications of the law from coming before the courts.Footnote 96 Furthermore, when rights are engaged, the principle in Dehal, discussed earlier, requires that prosecution be a proportionate response.Footnote 97
However, there are shortcomings to prosecutorial discretion as a safeguard for expression. In particular, the decision of the prosecutors is discretionary and may in some cases lack clarity in advance. A broadly worded criminal offence can have a chilling effect on speakers if it is not clear whether a prosecution is likely. The prospect of being subject to an investigation, even where no prosecution follows, can have a chilling effect in itself. As more complaints are made about digital content, there is also a risk that more prosecutions will be brought, as the police and prosecutors reach for whatever legal tools are available to govern that situation. Finally, the fact that prosecutions have been brought where the harm was minimal, such as the “Twitter joke trial”, provides reason to doubt the effectiveness of prosecutorial discretion as a safeguard in all cases.
The casual nature of the comments can also be taken into account at the sanction stage, in deciding the level of damages or the criminal penalty to be imposed. That approach fits with the focus on the proportionality of the interference. However, regardless of the penalty, having a criminal record or having to go through a trial or expensive litigation may be disproportionate in itself. Issues of proportionality relate not only to the severity of the sanction, but also to the type of procedure, which will be considered in the following section.
B. Administrative and self-regulatory responses
Threshold tests are suitable for excluding the trivial and less harmful types of expression that could technically fall within the letter of the current law. However, there are cases that cause sufficient harm to warrant some action, but for which the existing laws are possibly too heavy-handed. In such circumstances what is needed is not absolute freedom to speak regardless of the consequences, but proportionate responses that can help to foster a sense of responsibility and ethics among those using the online media. A more proportionate response may be to resolve disputes through a low-cost adjudicator or regulator that can publish its findings and, where appropriate, impose a fine and direct the speaker to remove the material. A further possibility is that the regulator could refer the most serious and repeat offenders to criminal prosecutors as a final measure. Such an approach could be incorporated into proposals for low-cost libel tribunals. Different regulations could be directed at different tiers of speaker. For example, professional mass communications could be subject to whatever regime of press regulation emerges from the Leveson Inquiry. Online advertisements would remain subject to the Advertising Standards Authority. Digital conversations could then be subject to a minimal tier of regulation, possibly covering libels, certain types of hate speech and intrusive communications. While there are still free speech concerns with such measures, the consequences of falling foul of such regulation would not be a criminal record that taints the speaker for the rest of his life.
The harms caused by low level expression could also be addressed through cooperation with intermediaries. This can be done if the intermediary removes harmful messages, thereby depriving the speaker of an audience. Intermediaries already have an incentive to remove illegal content in some cases under the notice and takedown framework of the E-Commerce Regulations 2002.Footnote 98 The major social networks already have policies for dealing with complaints about offensive or illegal material.Footnote 99 The difficulty with such self-regulatory measures is that they leave the private body to decide what standards apply and to make a determination about the content. If the social network or search engine is very responsive to complaints, that may provoke criticism that it gives too little protection to expression and potentially takes down harmless and lawful material simply because someone objects to it.Footnote 100 However, if the social network or search engine requires a complainant to have a court order establishing illegality or concrete evidence of harm, then the process will be too onerous to be useful for many people. Most people will lack the resources or inclination to seek a court judgment showing that an item is defamatory, and to provide such evidence for every single item of objectionable content.
To help address these issues, the option could be combined with a regulator, which adjudicates the dispute and gives a direction to the intermediary as to the appropriate remedy.Footnote 101 That combination would at least give the speaker notice that the comment is being challenged and a chance to defend their statements. Take-down is not the only remedy. Where appropriate, the agency could also require the intermediary to provide a right of reply to the individual named in a webpage. For example, search engine results could provide a link to a response from the person in question.Footnote 102 Alternatively, a search engine or intermediary could provide its own statement, providing links to sources that challenge the offending viewpoint. This might be appropriate with certain forms of hate speech, in which a search result for a Holocaust denial site comes with a warning about its possible offensiveness and provides an additional link to a site with an opposing view.Footnote 103
There are difficulties in taking such an approach to digital communications. First, a low-cost regulatory approach may encourage groups to bring claims against content they wish to see suppressed. This issue is familiar to the regulators of the broadcast media.Footnote 104 Secondly, in some cases content can be republished so quickly and in so many places that an adjudication by a court or agency may be futile, and the only remedy is an award of damages, assuming the author's identity is known. There are also difficulties where the expression is based outside the jurisdiction, although in some extreme cases the possibility of blocking the content may provide a limited remedy.Footnote 105 There is a danger that an agency covering digital communications would simply be overwhelmed with applications. For that reason, its regulations would have to cover a limited number of issues reserved for relatively serious cases, and would have procedures akin to those which Ofcom uses for the Broadcasting Code, rather than those of a court of law. Even if there is only a small penalty, there can still be concerns about free speech, and public interest defences would still be necessary. The presence of a regulator raises too many complex issues to be dealt with here, and there are strong arguments for and against. The point made here is that if a regulatory system were workable, it would be more proportionate than criminal sanctions and high-cost litigation as a way of dealing with the harms caused by insults, offensive remarks and other ill-judged comments that make up some content on the web.
VI. Conclusion
Three trends have been identified in relation to the legal regulation of digital communications. The first is the greater capacity to record and monitor the everyday expression of people engaged in online conversations. While the persistence and searchability of digital speech is often taken to increase the potential harms caused by what people say, it also increases the potential for such communications to be subject to legal regulation. This is coupled with the wide terms of certain criminal law offences and causes of action in tort law. That crimes are now better detected through the monitoring of digital speech may be thought not to be a cause for complaint. However, the broad terms of those laws were developed at a time when it was assumed that much speech would not come to the attention of prosecutors and litigants. As a result of these changes, speech that is insulting or in poor taste, which would normally be ignored if said in conversation, can now fall within the letter of certain legal controls. The total number of cases brought against digital speakers is not known and it may not appear to be a pressing problem. Many of the cases decided so far attract little sympathy, although some prosecutions have been brought against speakers who may not deserve a criminal penalty. However, with more complaints and concerns about the harms of digital communications, prosecutors and litigators may reach for these laws in a wider range of circumstances.
The second trend is the convergence of laws applying to digital expression. Certain laws were developed for, and primarily applied to, distinct spheres of activity. The three examples provided here are laws regulating the media, public order and targeted communications. None of these laws was rigidly confined to a specific sphere and there was always some overlap at the edges. However, all three now appear to govern at least some types of digital expression. The online speaker must comply with the laws of journalism and protest, as well as laws regulating telephone calls and mailings. The third trend is found in the jurisprudence on freedom of expression under Article 10, which protects expression deemed to be of high value. That approach is entirely consistent with some of the classic theories justifying the right. While it offers protection to those digital speakers engaged in discussion of matters in the public interest, the categories-based approach does less for those engaged in everyday speech that is deemed to be of less value to the audience. Arguments based on democracy and the truth tend not to protect those who rant, vent or merely converse on matters of little importance.
The response to these trends is not to abandon the categories-based approach to freedom of expression, or to demand blanket protection of all speech. Instead, it is to supplement the traditional approach by drawing upon a different distinction, reflected in the free speech cases (though not drawn explicitly), based on the context of the speaker. The casual amateur speaker with limited resources or legal advice should be held to lower standards than professional journalists, or even those involved in a protest, who have greater guidance on the ground from the police. This does not mean complete freedom from any responsibility, but that any regulations should be suited to the digital context, and the procedure and sanctions should be proportionate not only to the harm, but to the level of responsibility expected from the speaker. Rather than the occasional and selective imposition of a heavy-handed penalty on a speaker unlucky enough to be singled out, proportionate measures might make people more aware of the consequences of their expression. In turn, it is hoped that such measures could encourage greater responsibility and awareness of the consequences among those who exercise communicative freedoms.