14.1 Introduction
Media companies can benefit from artificial intelligence (AI)Footnote 1 technologies to increase productivity and explore new possibilities for producing, distributing, and reusing content. This chapter demonstrates the potential of AI in media.Footnote 2 It takes a selective approach, showcasing a variety of applications through questions such as: Can ChatGPT write news articles? How can media organizations use AI to recommend public interest content? Can AI spot disinformation and instead promote trustworthy news? These are just a few of the opportunities offered by AI at the different stages of news content production, distribution, and reuse (Section 14.2). However, the use of AI in media also brings societal and ethical risks, as well as legal challenges. The right to freedom of expression, media pluralism and media freedom, the right to nondiscrimination, and the right to data protection are among the affected rights. This chapter will therefore also show how the EU legal framework (e.g., the Digital Services Act,Footnote 3 the AI Act,Footnote 4 and the European Media Freedom ActFootnote 5) tries to mitigate some of the risks to fundamental rights posed by the development and use of AI in media (Section 14.3). Section 14.4 offers conclusions.
14.2 Opportunities of AI Applications in MediaFootnote 6
14.2.1 AI in Media Content Gathering and Production
Beckett’s survey of journalism and AI presents an impressive list of possible AI uses in day-to-day journalistic practice.Footnote 7 At the beginning of the news-creating process, AI can help gather material, sift through social media, recognize genders and ages in images, or automatically tag newspaper articles with topics or keywords.Footnote 8
AI is also used in story discovery to identify trends, spot stories that might otherwise escape the human eye, and discover new angles, voices, and content. To illustrate, as early as 2014, Reuters’ News Tracer project used natural language processing techniques to decide which topics are newsworthy.Footnote 9 It detected the bombing of hospitals in Aleppo and the terror attacks in Nice and Brussels before they were reported by other media.Footnote 10 Another tool, the Topics Compass, developed under the EU-funded Horizon 2020 ReTV project, allows an editorial team to track media discourse about a given topic coming from news agencies, blogs, and social media platforms and to visualize its popularity.Footnote 11
AI has also proven useful in investigative journalism, assisting journalists with tasks that could not be done by humans alone or would have taken a considerable amount of time. To illustrate, in the cross-border Panama Papers investigation, the International Consortium of Investigative Journalists used an open-source data mining tool to sift through 11.5 million leaked documents.Footnote 12
Once journalists have gathered information on potential stories, they can use AI for the production of news items: text, images, and videos. Media companies such as the Associated Press, Forbes, and The New York Times have started to automate news content.Footnote 13 Terms like robot journalism, automated journalism, and algorithmic journalism have been used interchangeably to describe this phenomenon.Footnote 14 In addition, generative AI tools such as ChatGPT,Footnote 15 Midjourney,Footnote 16 or DALL-EFootnote 17 are being used to illustrate news stories, simplify text for different audiences, summarize documents, or write potential headlines.Footnote 18
14.2.2 AI in Media Content DistributionFootnote 19
Media organizations can also use AI for providing personalized recommendations. Simply put, “recommendation systems are tools designed to sift through the vast quantities of data available online and use algorithms to guide users toward a narrower selection of material, according to a set of criteria chosen by their developers.”Footnote 20
In recent years, online news media (e.g., online newspapers’ websites and apps) started engaging in news recommendation practices.Footnote 21 Recommendation systems curate users’ news feed by automatically (de)prioritizing items to be displayed in user interfaces, thus deciding which ones are visible (to whom) and in what order.Footnote 22
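To make this concrete, the following minimal sketch shows one way a news feed could be ranked according to developer-chosen criteria. The scoring formula, the weights, and the upstream engagement model are hypothetical and serve purely as illustration; they are not drawn from any particular outlet's system.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    predicted_click_prob: float   # output of an upstream engagement model (assumed)
    hours_since_publication: float

def score(article: Article, recency_weight: float = 0.3) -> float:
    """Combine an engagement prediction with a recency bonus; the weights are editorial choices."""
    recency_bonus = 1.0 / (1.0 + article.hours_since_publication)
    return (1 - recency_weight) * article.predicted_click_prob + recency_weight * recency_bonus

def rank_feed(candidates: list[Article], top_k: int = 3) -> list[Article]:
    """Return the top-k articles; everything below the cut-off is effectively deprioritized."""
    return sorted(candidates, key=score, reverse=True)[:top_k]

feed = rank_feed([
    Article("Budget vote passes", 0.42, 2.0),
    Article("Celebrity gossip", 0.81, 30.0),
    Article("Local flood warning", 0.35, 0.5),
])
print([a.title for a in feed])  # engagement-heavy items rise to the top
```

Even in this toy example, the choice of criteria and weights determines which items become visible first and which are pushed out of sight.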
The 2022 Ada Lovelace reportFootnote 23 provides an informative in-depth snapshot of the BBC’s development and use of recommendation systems, which gives insights into the role of recommendations in public service media (PSM).Footnote 24 As pointed out by the authors, developing recommendation systems for PSM requires an interrogation of the organizations’ role in democratic societies in the digital age, that is, how to translate the public service valuesFootnote 25 into the objectives for the use of recommendation systems that serve the public interest. The report concludes that the PSM had internalized a set of normative values around recommendation systems: Rather than maximizing engagement, they want to broaden their reach to a more diverse set of audiences.Footnote 26 This is a considerable difference between the public and private sectors. Many user-generated content platforms rank information based on how likely a user is to interact with a post (comment on it, like it, reshare it) or to spend more time using the service.Footnote 27
Research shows that social media platforms use a mix of commercial criteria and vague public interest considerations in their content prioritization measures.Footnote 28 Importantly, prioritizing some content demotes other content.Footnote 29 By way of example, Facebook explicitly says it will not recommend content that is associated with low-quality publishing, including news whose provenance is unclear.Footnote 30 In fact, online platforms use a whole arsenal of techniques to (de)amplify the visibility or reach of some content.Footnote 31 To illustrate, in the aftermath of Russia’s aggression against Ukraine, platforms announced they would restrict access to the RT and Sputnik media outlets.Footnote 32 Others added labels and reduced the visibility of content from Russian state-affiliated media websites even before the EU imposed sanctions.Footnote 33
Overall, by selecting and (de)prioritizing news content and deciding on its visibility, online platforms take on some of the functions so far reserved to traditional media.Footnote 34 Ranking functions and optimization metrics in recommendation systems have become powerful determinants of access to media and news content.Footnote 35 This has consequences for both the fundamental right to freedom of expression and media freedom (see Section 14.3).
14.2.3 AI in Fact-Checking
Another important AI potential in media is fact-checking. The main elements of automated fact-checking are: (1) identification of false or questionable claims circulating online; (2) verification of such claims, and (3) (real-time) correction (e.g., flagging).Footnote 36
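These three elements can be thought of as a pipeline. The sketch below illustrates how the stages fit together; the keyword heuristic, the hypothetical database of previously fact-checked claims, and the label format are placeholders standing in for the trained NLP components that production systems use.

```python
def identify_claims(posts):
    """Step 1: flag check-worthy claims. Real systems use NLP claim-detection models;
    a crude keyword heuristic stands in here."""
    check_worthy = ("cure", "rigged", "% of")
    return [p for p in posts if any(k in p.lower() for k in check_worthy)]

def verify(claim, fact_check_db):
    """Step 2: look the claim up against a database of previously fact-checked claims
    (assumed to exist); returns a verdict or None if the claim is unknown."""
    return fact_check_db.get(claim.lower())

def label(claim, verdict):
    """Step 3: real-time correction, here in the form of a warning label."""
    return f"[Rated '{verdict}' by fact-checkers] {claim}" if verdict else claim

db = {"miracle cure confirmed by doctors": "false"}
posts = ["Miracle cure confirmed by doctors", "Council meets today"]
for post in identify_claims(posts):
    print(label(post, verify(post, db)))
```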
To illustrate, platforms such as DALIL help fact-checkers spot questionable claims, which then require subsequent verification.Footnote 37 Then, to verify the identified content, AI(-enhanced) tools can perform a reverse image search, detect bot accounts and deep fakes, assess source credibility, check factual claims made on social media, or analyze the relationships between accounts.Footnote 38 The WeVerify plug-in is a highly successful tool that offers a variety of verification and analysis features in one platform to fact-check and analyze images, video, and text.Footnote 39 Some advanced processing and analytics methods can also be used to analyze different types of content and assign a trustworthiness score to online articles.Footnote 40
The verified mis- or disinformation can then be flagged to the end user by adding warnings and providing more context to content rated by fact-checkers. Some platforms have also been labeling content containing synthetic and manipulated media.Footnote 41
Countering disinformation with the use of AI is a growing research area. The future solutions based on natural language processing, machine learning, or knowledge representation are expected to deal with different content types (audio, video, images, and text) across different languages.Footnote 42 Collaborative tools that enable users to work together to find, organize, and verify user-generated content are also on the rise.Footnote 43
14.2.4 AI in Content Moderation
AI in content moderation is a broad topic. Algorithmic (commercial) content moderation can be defined as “systems that classify user-generated content based on either matching or prediction, leading to a decision and governance outcome (e.g., removal, geoblocking, and account takedown).”Footnote 44 This section focuses on the instances where AI is used either by media organizations to moderate the discussion on their own sites (i.e., in the comments section) or by social media platforms to moderate posts of media organizations and journalists.
14.2.4.1 Comment Moderation
For both editorial and commercial reasons, many online news websites have a dedicated space under their articles (a comment section), which provides a forum for public discourse and aims to engage readers with the content. Empirical research shows that a significant proportion of online comments are uncivil (featuring a disrespectful tone, mean-spirited or disparaging remarks, and profanity)Footnote 45 and contain stereotypes as well as homophobic, racist, sexist, and xenophobic terms that may amount to hate speech.Footnote 46 The rise of incivility in online news comments negatively affects people’s perceptions of news article quality and increases hostility.Footnote 47 “Don’t read the comments” has become a mantra throughout the media.Footnote 48 The volume of hateful and racist comments, together with the high costs – both economic and psychological – of human moderation, has prompted news sites to change their practices.
Some have introduced AI systems to support their moderation processes. To illustrate, both the New York TimesFootnote 49 and the Washington PostFootnote 50 use machine learning to prioritize comments for evaluation by human moderators or to automatically approve or delete abusive comments. Similarly, STANDARD Community (part of the Austrian newspaper DerSTANDARD) has developed an automated system to prefilter problematic content, as well as a set of preemptive moderation techniques, including forum design changes to prevent problematic content from being posted in the first place.Footnote 51
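A simplified sketch of how such prefiltering could route comments is shown below. The thresholds and the stand-in toxicity scorer are assumptions chosen for illustration and do not reflect the actual systems used by the outlets mentioned above.

```python
from typing import Callable

def route_comment(comment: str, toxicity_score: Callable[[str], float],
                  approve_below: float = 0.2, reject_above: float = 0.9) -> str:
    """Route a comment based on a toxicity score in [0, 1] from an upstream classifier
    (assumed); only the uncertain middle band reaches human moderators."""
    score = toxicity_score(comment)
    if score < approve_below:
        return "auto-approve"
    if score > reject_above:
        return "auto-reject"
    return "human review"

# Stand-in scorer for illustration; production systems rely on trained toxicity models.
fake_scorer = lambda text: 0.95 if "idiot" in text.lower() else 0.05

print(route_comment("Great reporting, thanks!", fake_scorer))  # auto-approve
print(route_comment("You are an idiot", fake_scorer))          # auto-reject
```

The design choice that matters is the width of the middle band: the wider it is, the more work remains with human moderators, and the fewer automated errors reach readers.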
Others, like Reuters or CNN, have removed their comment sections completely.Footnote 52 Apart from abusive and hateful language, the reason was that many users were increasingly commenting on media organizations’ social media profiles (e.g., on Facebook) rather than on media organizations’ websites.Footnote 53 This, however, did not remove the problem of hateful speech. On the contrary, it amplified it.Footnote 54
14.2.4.2 Content Moderation
Online intermediary services (e.g., online platforms such as social media) can, and sometimes have to, moderate content which users post on their platforms. In the EU, to avoid liability for illegal content hosted on their platforms, online intermediaries must remove or disable access to such content when the illegal character of the content becomes known. Other content moderation decisions are performed by platforms voluntarily, based on platforms’ community standards, that is, private rules drafted and enforced by the platforms (referred to as private ordering).Footnote 55 Platforms can therefore remove users’ content which they do not want to host according to their terms and conditions, even if the content is not illegal. This includes legal editorial content of media organizations (see Section 14.3.4).
Given the amount of content uploaded to the Internet every day, it has become impossible to identify and remove illegal or unwanted content using traditional human moderation alone.Footnote 56 Many platforms have therefore turned to AI-based content moderation. Such automation can be used for the proactive detection of potentially problematic content prior to its publication or for reactive moderation after content has been flagged by other users or automated processes.Footnote 57 Besides deleting content and suspending users, platforms use a whole arsenal of tools to reduce the visibility or reach of some content, such as age barriers, geo-blocking, labeling content as fact-checked, or adding a graphic content label to problematic content before or as users encounter it.Footnote 58
Algorithmic moderation systems help classify user-generated content based on either matching or prediction techniques.Footnote 59 These techniques present a number of technical limitations.Footnote 60 Moreover, speech evaluation is highly context dependent, requiring an understanding of cultural, linguistic, and political nuances as well as underlying facts. As a result, AI is frequently inaccurate; there is growing empirical evidence of platforms’ over-removal of content coming from individuals and media organizations (see Section 14.3.4).Footnote 61
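The sketch below illustrates the two approaches side by side: an exact hash lookup stands in for matching (real systems use perceptual, edit-tolerant hashes such as those behind PhotoDNA-style tools), and a keyword heuristic stands in for a trained prediction model; the blocklist, the keyword, and the threshold are all hypothetical.

```python
import hashlib

# Fingerprints of previously identified violating items (illustrative placeholder).
KNOWN_VIOLATING_HASHES = {
    hashlib.sha256(b"previously removed extremist flyer").hexdigest(),
}

def matches_known_content(content: bytes) -> bool:
    """Matching: compare a fingerprint of the upload against a database of known items."""
    return hashlib.sha256(content).hexdigest() in KNOWN_VIOLATING_HASHES

def predicted_violation_prob(text: str) -> float:
    """Prediction: estimate how likely new, unseen content violates policy.
    A keyword heuristic stands in for a trained classifier here."""
    return 0.9 if "attack the" in text.lower() else 0.1

def moderate(content: bytes, text: str) -> str:
    if matches_known_content(content):
        return "remove (matched known item)"
    if predicted_violation_prob(text) > 0.8:
        return "flag for review (predicted violation)"
    return "keep"

print(moderate(b"previously removed extremist flyer", ""))
print(moderate(b"holiday photo", "We should attack the problem at its root"))
```

The second call shows the context problem in miniature: the benign phrase “attack the problem” is flagged because the placeholder model, like many real classifiers, cannot read intent.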
14.3 Legal and Ethical Challenges of AI Applications in Media
This section identifies the legal and ethical challenges of AI in media across various stages of the media value chain described earlier. The section also shows how these challenges may be mitigated by the EU legal framework.Footnote 62
14.3.1 Lack of Data Availability
Lack of data availability is a cross-cutting theme, with serious consequences for the media sector. Datasets are often inaccessible or expensive to gather and data journalists rely on private actors, such as data brokers which have already collected such data.Footnote 63 This concentrated control over the data influences how editorial decision-making is automated (see Section 14.3.6).
Data availability is also of paramount importance for news verification and fact-checking activities. Access to social media data is vital to analyze and mitigate the harms resulting from disinformation, political microtargeting, or the effect of social media on elections or children’s well-being.Footnote 64 This is because it enables journalists and researchers to hold platforms accountable for the working of their AI systems. Equally, access to, for example, social media data is important for media organizations that are developing their own AI solutions – particularly in countries where it can be difficult to gain access to large quantities of data in the local language.Footnote 65
Access to platforms’ data for researchers is currently mainly governed by contractual agreements, platforms’ own terms of service, and public application programming interfaces (APIs). API access can be restricted or eliminated at any time and for any reason.Footnote 66 The UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression stressed that a lack of transparency and access to data is among “the major failings of companies across almost all the concerns in relation to disinformation and misinformation.”Footnote 67
A key challenge for research access frameworks is to comply with the General Data Protection Regulation (GDPR).Footnote 68 Despite a specific derogation for scientific research purposes (art. 89), the GDPR lacks clarity regarding how platforms might share data with researchers (e.g., on what legal grounds).Footnote 69 To mitigate this uncertainty, various policy and regulatory initiatives aim to clarify how platforms may provide access to data to researchers in a GDPR-compliant manner.Footnote 70 In addition, there have been calls for a legally binding mechanism that provides independent researchers with access to different types of platform data.Footnote 71
The Digital Services Act (DSA) requires providers of very large online platforms (VLOPs) and very large online search engines (VLOSEs) to grant vetted researchers access to data, subject to certain conditions.Footnote 72 Data can be provided “for the sole purpose” of conducting research that contributes to the detection, identification and understanding of systemic risks and to the assessment of the adequacy, efficiency and impacts of the risk mitigation measures (art. 40(4)). Vetted researchers must meet certain criteria and procedural requirements in the application process. Importantly, they must be affiliated with a research organization or a not-for-profit body, organization or association (art. 40(12)). Arguably, this excludes unaffiliated media practitioners, such as freelance journalists or bloggers. Many details about researchers’ access to data through the DSA will be decided in delegated acts that have yet to be adopted (art. 40(13)).
Moreover, under the Digital Markets Act,Footnote 73 the so-called gatekeepers will have to provide advertisers and publishers with access to the advertising data and allow business users to access the data generated in the context of the use of the core platform service (art. 6(1) and art. 6(8)).
Furthermore, the European strategy for dataFootnote 74 aims at creating a single market for data by establishing common European data spaces to make more data available for use in the economy and society. The Data Governance ActFootnote 75 and the Data Act proposalFootnote 76 seek to strengthen mechanisms to increase data availability and harness the potential of industrial data, respectively. Lastly, the European Commission announced the creation of a dedicated media data space.Footnote 77 The media data space initiative, financed through the Horizon Europe and Digital Europe Programmes,Footnote 78 aims to support both PSM and commercial media operators to pool their content and customer data to develop innovative solutions.
14.3.2 Data Quality and Bias in Training Datasets
Another, closely related, consideration is data quality. There is a growing literature on quality and representation issues in training, testing, and validation data, especially in publicly available datasets and databases.Footnote 79 Moreover, generative AI raises controversies regarding the GDPR compliance of the training dataFootnote 80 and a broader question of extraction fairness, defined as “legal and moral concerns regarding the large-scale exploitation of training data without the knowledge, authorization, acknowledgement or compensation of their creators.”Footnote 81
The quality of training data and data annotation is crucial, for example, for hate speech and abusive language detection in comments. A 2022 report by the EU Agency for Fundamental Rights shows how tools that automatically detect or predict potential online hatred can produce biased results.Footnote 82 The predictions frequently overreact to various identity terms (i.e., words indicating group identities like ethnic origin or religion), flagging text that is not actually offensive.Footnote 83 Research shows that social media content moderation algorithms have difficulty differentiating hate speech from discussion about race and often silence marginalized groups such as racial and ethnic minorities.Footnote 84 At the same time, underrepresentation of certain groups in a training dataset may mean that abuse targeting them goes undetected, leaving those groups exposed to more abusive language than others.
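Such overreaction can be surfaced with simple functional tests, in the spirit of HateCheck-style evaluations: measure how often a classifier flags clearly benign sentences built around identity terms. The classifier below is a deliberately naive stand-in used only to illustrate the test, not a real moderation model.

```python
def false_positive_rate(classifier, benign_sentences):
    """Share of clearly non-hateful sentences that the classifier nonetheless flags."""
    flagged = sum(1 for s in benign_sentences if classifier(s))
    return flagged / len(benign_sentences)

# Benign template sentences built around identity terms.
benign = [f"I am proud to be {term}." for term in ("Muslim", "Jewish", "Black", "gay")]

# Naive stand-in that overreacts to identity terms, mimicking the bias described above.
naive_classifier = lambda text: any(term in text for term in ("Muslim", "Jewish", "Black", "gay"))

print(false_positive_rate(naive_classifier, benign))  # 1.0: every benign sentence is flagged
```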
There are blurred lines between what constitutes hateful, harmful, and offensive speech, and these notions are context dependent and culturally specific. Many instances of hate speech cannot be identified and distinguished from innocent messages by looking at single words or combinations of them.Footnote 85 Such contextual differentiation, between, for example, satirical and offensive uses of a word proves challenging for an AI system. This is an important technical limitation that may lead to over- and under-removal of content. Both can interfere with a range of fundamental rights such as the right to freedom of expressionFootnote 86 (see Section 14.3.4), the right to data protection, as well as the right to nondiscrimination.
The consequence of using unreliable data could be the spread of misinformation,Footnote 87 as illustrated by inaccurate responses to news queries from search engines using generative AI. Research into Bing’s generative AI accuracy for news queries shows detail errors and attribution errors, and the system also sometimes asserts the opposite of the truth.Footnote 88 This, together with a lack of media literacy, may foster automation bias, that is, uncritical trust in information provided by an automated system even when that information is incorrect.
14.3.3 Transparency
Transparency can mean many different things. Broadly speaking, it should enable people to understand how an AI system is developed and trained, how it operates, and how it is deployed, so that they can make more informed choices.Footnote 89 This section focuses on three aspects of transparency of AI in media.Footnote 90
The first aspect relates to internal transparency, which describes the need for journalists and other non-technical groups inside media organizations to have sufficient knowledge of the AI systems they use.Footnote 91 Closing this intelligibility gap within a media organization is necessary for staff to understand how AI systems work and to use them responsibly.Footnote 92 The AI Act requires providers and deployers of AI systems, including media organizations, to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf (art. 4).Footnote 93
The second aspect concerns external transparency, which refers to transparency practices directed toward the audience to make them aware of the use of AI. The AI Act requires providers of AI systems, such as OpenAI, to make it clear to users that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use (art. 50(1)).Footnote 94 As a rule, they must also mark generative AI outputs (synthetic audio, image, video, or text content) as AI-generated or manipulated (art. 50(2)). For now, it remains unclear what forms of transparency will be sufficient and whether they will be meaningful to the audience. Transparency requirements also apply to those who use AI systems that generate or manipulate images, audio, or video content constituting a deep fake (art. 50(4) para 1). However, if the content is part of an evidently artistic, creative, or satirical work, the disclosure should not hamper the display or enjoyment of the work. Moreover, deployers of AI-generated or manipulated text that is published with the purpose of informing the public on matters of public interest must disclose that the text has been artificially generated or manipulated. There is an important exception for the media sector: if the AI-generated text has undergone a process of human review or editorial control within an organization that holds editorial responsibility for the content (such as a publisher), disclosure is no longer necessary. This provision raises questions as to what will count as human review or editorial control and who can be said to hold editorial responsibility.Footnote 95 Moreover, research shows that audiences want media organizations to be transparent and provide labels when using AI.Footnote 96
In addition to the AI Act, the DSA presents multiple layers of transparency obligations for the benefit of users that differ depending on the type of service concerned.Footnote 97 In particular, it requires transparency on whether AI is used in content moderation. All intermediary services must publish in their terms and conditions, in a “clear and unambiguous language,” a description of the tools used for content moderation, including AI systems that either automate or support content moderation practices (art. 14). In practice, this means that users must know why, when, and how online content is being moderated, including with the use of AI, and when human review is in place.
The DSA also regulates recommender system transparency. As mentioned earlier, recommender systems can have a significant impact on the ability of recipients to retrieve and interact with information online. Consequently, providers of online platforms are expected to set out in their terms and conditions, in plain and intelligible language, the main parameters used in their recommender systems and the options for users to modify or influence them (art. 27). The main parameters shall explain why certain information is suggested and include, at least, the criteria that are most significant in determining the information suggested and the reasons for the relative importance of those parameters. In addition, providers of VLOPs and VLOSEs must offer at least one option for their recommender systems that is not based on profiling (art. 38).
There are also further obligations for VLOPs and VLOSEs to perform an assessment of any systemic risks stemming from the design, functioning, or use of their services, including algorithmic systems (art. 34(1)). This risk assessment shall include the assessment of any actual or foreseeable negative effects on the exercise of fundamental rights, including the right to freedom of expression and the freedom and pluralism of the media (art. 34(1)(b)). When conducting risk assessments, VLOPs and VLOSEs shall consider, in particular, whether the design of their recommender systems and their content moderation systems influence any of the systemic risks. If so, they must put in place mitigation measures, such as testing and adapting their algorithms (art. 35).
Lastly, intermediary services (excluding micro and small enterprises) must publish, at least once a year, transparency reports on their content moderation activities, including a qualitative description, a specification of the precise purposes, indicators of the accuracy and the possible rate of error of the automated means (art. 15). Extra transparency reporting obligations apply to VLOPs (art. 42).
The third aspect concerns third-party transparency, which refers to the importance of having insights into how AI systems provided by third-party providers have been trained on and how they work.Footnote 98
Neither the DSA nor the AI Act contains explicit provisions that make such information widely available.Footnote 99
14.3.4 Risks for the Right to Freedom of Expression
Article 10 of the European Convention on Human Rights (ECHR), as well as Article 11 of the Charter of Fundamental Rights of the European Union (CFR),Footnote 100 guarantees the right to freedom of expression to everyone. The European Court of Human Rights (ECtHR) has interpreted the scope of Article 10 ECHR through an extensive body of case law. The right to freedom of expression includes the right to impart information, as well as the right to receive it. It protects the rights of individuals, companies, and organizations, with a special role reserved for media organizations and journalists. It is their task to inform the public about matters of public interest and current events and to play the role of the public watchdog.Footnote 101 The right applies offline and on the Internet.Footnote 102
One of the main risks for freedom of expression associated with algorithmic content moderation is over-blocking, meaning the unjustified removal or blocking of content or the suspension or termination of user accounts. In 2012, the Court of Justice of the EU held that a filtering system for copyright violations could undermine freedom of information, since it might not distinguish adequately between lawful and unlawful content, which could lead to the blocking of lawful communications.Footnote 103 This concern is equally valid outside the copyright context. The technical limitations of AI systems, together with regulatory pressure from States that increasingly request intermediaries to take down certain categories of content, often based on vague definitions, incentivize platforms to follow an “if in doubt, take it down” approach.Footnote 104 There is, indeed, growing empirical evidence of platforms’ over-removal of content.Footnote 105 To illustrate, social media platforms have deleted hundreds of posts condemning the eviction of Palestinians from the Sheikh Jarrah neighborhood of JerusalemFootnote 106 or restricted access to information about abortion.Footnote 107 Both examples are a consequence of algorithmic content moderation systems being unable to recognize context or to account for underlying facts and legal nuances. Such automated removals, even if unintentional and subsequently revoked, potentially limit both the right to impart information (of users who post content online) and the right to receive information (of third parties who do not get to see the deleted content).
On the other hand, the under-blocking of certain online content may also negatively affect the right to freedom of expression. Failing to act against illegal content and some forms of legal but harmful content (e.g., hate speech) may lead people (especially marginalized communities) to express themselves less freely or to withdraw from participating in online discourse.
In addition, in the context of fact-checking, AI cannot yet analyze entire, complex disinformation narratives and detect all uses of synthetic media manipulation.Footnote 108 Thus, an overreliance on AI systems to verify the trustworthiness of the news may prove detrimental to the right to freedom of expression.
To mitigate these risks, the DSA provides certain procedural safeguards. It does not force intermediary services to moderate content, but requires that any restrictions imposed on users’ content based on terms and conditions are applied and enforced “in a diligent, objective and proportionate manner,” with “due regard to the rights and legitimate interests of all parties involved” (art. 14(4)). Not only must they have due regard to fundamental rights when removing content, but also when restricting the availability, visibility, and accessibility of information. What due regard means in this context will have to be defined by the courts. Moreover, the DSA requires intermediary services to balance their freedom to conduct a business with other rights, such as users’ freedom of expression. Online platforms also have to provide a statement of reasons explaining why content has been removed or an account has been blocked (art. 17) and implement an internal complaint-handling system that enables users to lodge complaints (art. 20). Another procedural option is out-of-court dispute settlement (art. 21) or a judicial remedy.Footnote 109
A novelty foreseen by the DSA is an obligation for VLOPs and VLOSEs to mitigate systemic risks such as actual or foreseeable negative effects for the exercise of fundamental rights, in particular freedom of expression and information, including the freedom and pluralism of the media, enshrined in Article 11 of the CFR, and foreseeable negative effects on civic discourse (art. 34).
News personalization looks paradoxical from the freedom of expression perspective at first glance. As Eskens points out, “news personalisation may enhance the right to receive information, but it may also hinder or downplay the right to receive information and the autonomy with which news users exercise their right to receive information.”Footnote 110 Given that content prioritization practices have a potential for promoting trustworthy and reliable news, it can be argued that platforms should be required to ensure online access to content of general public interest. The Council of Europe, for instance, suggested that States should act to make public interest content more prominent, including by introducing new obligations for platforms and intermediaries and by imposing minimum standards such as transparency.Footnote 111 Legal scholars have proposed exposure diversity as a design principle for recommender systemsFootnote 112 or the development of “diversity-enhancing public service algorithms.”Footnote 113 But who should decide what content is trustworthy or authoritative, and based on what criteria? Are the algorithmic systems of private platforms equipped to quantify normative values such as newsworthiness? What safeguards would prevent States from forcing platforms to prioritize only State-approved information or government propaganda? Besides, many of the problems with content diversity are at least to some extent user-driven – users themselves, under their right to freedom of expression, determine what kind of content they upload and share.Footnote 114 Legally imposed public interest content recommendations could limit users’ autonomy in their news selection by paternalistically narrowing the range of information available to them. While there are no such obligations in the DSA, such options are currently being considered in some legislative proposals at the national level.Footnote 115
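One minimal sketch of what exposure diversity as a design principle could look like in practice is a greedy re-ranking that trades relevance off against topical novelty (in the spirit of maximal marginal relevance); the weighting, topics, and relevance scores below are illustrative assumptions.

```python
def rerank_for_diversity(items, lam=0.7, k=3):
    """Greedily select items, balancing relevance against topical novelty.
    `items` are (title, topic, relevance) tuples; lam=1.0 reproduces pure relevance ranking."""
    selected, remaining = [], list(items)
    while remaining and len(selected) < k:
        covered = {topic for _, topic, _ in selected}
        best = max(remaining,
                   key=lambda it: lam * it[2] + (1 - lam) * (it[1] not in covered))
        selected.append(best)
        remaining.remove(best)
    return selected

candidates = [
    ("Election poll update", "politics", 0.95),
    ("Coalition talks stall", "politics", 0.90),
    ("Hospital staffing crisis", "health", 0.70),
    ("New climate report", "environment", 0.65),
]
print([title for title, _, _ in rerank_for_diversity(candidates)])
```

With pure relevance ranking (lam = 1.0), the two politics items would occupy the top two slots; the novelty term instead surfaces the health and environment stories at the cost of the second politics item. Whether, and by whom, such a trade-off should be mandated is precisely the normative question raised above.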
14.3.5 Threats to Media Freedom and Pluralism Online
Freedom and pluralism of the media are pillars of liberal democracies. They are also covered by Art. 10 ECHR and Art. 11 CFR. The ECtHR found that new electronic media, such as an online news outlet, are also entitled to the protection of the right to media freedom.Footnote 116 Moreover, the so-called positive obligations doctrine imposes an obligation on States to protect editorial independence from private parties, such as social media.Footnote 117
Social media platforms have on multiple occasions erased content coming from media organizations, including public broadcasters, and journalists. This is often illustrated by the controversy that arose around Facebook’s decision to delete a post by a Norwegian journalist, which featured the well-known Vietnam War photo of a nude young girl fleeing a napalm attack.Footnote 118 Similarly, users sharing an article from The Guardian showing Aboriginal men in chains were banned from Facebook on the grounds of posting nudity.Footnote 119 Other examples include videos of activists and local news outlets that documented the war crimes of the regime of Bashar al-Assad in SyriaFootnote 120 or a Swedish journalist’s material reporting sexual violence against minors.Footnote 121 Such removals result from the technical limitations of algorithmic content moderation tools and their inability to distinguish educational, awareness-raising, or journalistic material from other content.
In order to prevent removals of content coming from media organizations, a so-called media exemptionFootnote 122 was proposed during the negotiations of the DSA proposal, aiming to ensure that the media would be informed and have the possibility to challenge any content moderation measure before its implementation. The amendments were not included in the final text of the DSA: there is no special protection or obligation of prior notice to media organizations. Media organizations and journalists can invoke the same procedural rights that apply to all users of online platforms. One can also imagine that mass-scale algorithmic takedowns of media content or the suspension or termination of journalists’ accounts by VLOPs could amount to a systemic risk in the form of a negative effect on the exercise of the freedom and pluralism of the media.Footnote 123 However, what qualifies as systemic, and when the threshold of a systemic risk to freedom and pluralism of the media is reached, remains undefined.
Recognizing media service providers’ role in the distribution of information and in the exercise of the right to receive and impart information online, the European Media Freedom Act (EMFA) grants media service providers special procedural rights vis-à-vis VLOPs. Where a VLOP considers that content provided by recognized media service providersFootnote 124 is incompatible with its terms and conditions, it should “duly consider media freedom and media pluralism” in accordance with the DSA and provide, as early as possible, the necessary explanations to media service providers in a statement of reasons as referred to in the DSA and the P2B Regulation (recital 50, art. 18). Under what has been coined the non-interference principle,Footnote 125 VLOPs should give the media service provider concerned, prior to the suspension or restriction of visibility taking effect, an opportunity to reply to the statement of reasons within 24 hours of receiving it.Footnote 126 Where, following or in the absence of a reply, a VLOP takes the decision to suspend or restrict visibility, it shall inform the media service provider concerned without undue delay. Moreover, media service providers’ complaints under the DSA and the P2B Regulation shall be processed and decided upon with priority and without undue delay. Importantly, EMFA’s Article 18 does not apply where VLOPs suspend or restrict the visibility of content in compliance with their obligations to protect minors, to take measures against illegal content, or to assess and mitigate systemic risks.Footnote 127
Next to media freedom, media pluralism and diversity of media content are equally essential for the functioning of a democratic society and are the corollaries of the fundamental right to freedom of expression and information.Footnote 128 Media pluralism is recognized as one of the core values of the European Union.Footnote 129
In recent years, concerns over the decline of media diversity and pluralism have increased.Footnote 130 Online platforms “have acquired increasing control over the flow, availability, findability and accessibility of information and other content online.”Footnote 131 Considering platforms’ advertising-driven business model, based on profit maximization, they have strong incentives to increase the visibility of content that keeps users engaged. It can be argued that not only does this fail to promote diversity, but it strongly reduces it.Footnote 132 The reduction of the plurality and diversity of news content resulting from platforms’ content curation policies may limit users’ access to information. It also negatively affects society as a whole, since the availability and accessibility of diverse information is a prerequisite for citizens to form and express their opinions and participate in the democratic discourse in an informed way.Footnote 133
14.3.6 Threats to Media Independence
The growing dependence on automation in news production and distribution has a profound impact on editorial independence, as well as on the organizational and business choices of media organizations. One way in which automation could challenge editorial independence is the media’s reliance on non-media actors, such as engineers, data providers, and technology companies, that develop or fund the development of the datasets or algorithms used to automate editorial decision-making.Footnote 134
(News) media organizations depend more and more on platforms to distribute their content. The phenomenon of platformed publishing refers to the situation where news organizations have little or no control over the distribution mechanisms decided by the platforms.Footnote 135 Moreover, media organizations optimize news content to make it algorithm-ready, for example, by producing popular content that is attractive to the platforms’ recommender systems.Footnote 136 The entire news cycle, from production and distribution to consumption of news, “is (re)organized around platforms, their rules and logic and thus influenced and mediated by them.”Footnote 137 Individuals and newsrooms therefore depend structurally on platforms, which affects the functioning and power allocation within the media ecosystem.Footnote 138
Moreover, platforms provide essential technical infrastructure (e.g., cloud computing and storage), access to AI models, or stand-alone software.Footnote 139 This increases the potential for so-called infrastructure captureFootnote 140 and risks shifting even more control to platform companies, at the expense of media organizations’ autonomy and independence.
The relationship between AI, media, and platforms raises broader questions about the underlying political, economic, and technological power structures and platforms’ opinion power.Footnote 141 To answer these challenges, legal scholars have called for rethinking media concentration rulesFootnote 142 and media law in general.Footnote 143 However, considerations about the opinion power of platforms, values, and media independence are largely missing from the current EU regulatory initiatives. The EMFA rightly points out that providers of video-sharing platforms and VLOPs “play a key role in the organisation of content, including by automated means or by means of algorithms,” and that some “have started to exercise editorial control over a section or sections of their services” (recital 11). While it does mention “the formation of public opinion” as a relevant parameter in the assessment of media market concentrations (art. 21), it does not provide a solution to address the concerns about the dependency between platforms’ AI capacities and media organizations.Footnote 144
14.4 Conclusions
AI will continue to transform media in ways we can only imagine. Will news articles be written by fully automated systems? Will the proliferation of synthetic media content dramatically change the way we perceive information? Or will virtual reality experiences and new forms of interactive storytelling replace traditional (public interest) media content? As AI technology continues to advance, it is essential that the EU legal framework keeps pace with these developments to ensure that the use of AI in media is responsible, ethical, and beneficial to society as a whole. After all, information is a public good, and media companies cannot be treated like any other business.Footnote 145
The DSA takes an important step in providing procedural safeguards to mitigate the risks that online platforms’ content moderation practices pose to the right to freedom of expression and freedom of the media. It recognizes that the way VLOPs and VLOSEs moderate content may cause systemic risks to the freedom and pluralism of the media and negatively affect civic discourse. The EMFA also aims to strengthen the position of media organizations vis-à-vis online platforms. However, it remains to be seen how effective the 24-hour non-interference rule will be, given the high threshold for qualifying as a media service provider and the limits on which content falls within the scope of Art. 18 EMFA.
Many of the AI applications in (social) media, such as recommender systems, news bots, or the use of AI to generate or manipulate content are likely to be covered by the AI Act. A strong focus on external transparency both in the AI Act and in the DSA can be seen as a positive step to ensure that users become more aware of the extensive use of AI in (social) media.
However, many aspects of the use of AI in and by the media, such as the intelligibility gap, the societal risks raised by AI (including worker displacement and environmental costs), and tech companies’ reliance on access to high-quality media content to develop AI systems,Footnote 146 remain only partially addressed. Media organizations’ dependency on social media platforms’ recommender systems and algorithmic content moderation, as well as power imbalances in access to AI infrastructure, should also be tackled by the European legal framework.
It is equally important to facilitate and stimulate responsible research and development of AI in the media sector, particularly in local and small media organizations, to avoid the AI divide. In this regard, it is worth mentioning that the Council of Europe’s Committee of Experts on Increasing Resilience of the Media adopted Guidelines on the responsible implementation of AI systems in journalism.Footnote 147 The Guidelines offer practical guidance to news media organizations, States, tech providers and platforms that disseminate news, on how AI systems should be used to support the production of journalism. The Guidelines also include a checklist for media organizations to guide the procurement process of AI systems by offering questions that could help in scrutinizing the fairness of a procurement contract with an external provider (Annex 1).
Now the time has come to see how these regulations are enforced and whether they will enable a digital level playing field. To this end, policymakers, industry stakeholders, and legal professionals must work together to address the legal and ethical implications of AI in media and promote a fair and transparent use of AI.