1 Introduction
The outbreak of the COVID-19 pandemic and the measures undertaken to contain its spread have dramatically shaken the world as we knew it. The pandemic moment, however, has also perpetuated (if not exacerbated) developments and problems already afoot. Among them are the absence of an authority with enforcement powers for global health governance (Gostin et al., 2020; Taylor and Habibi, 2020), the growth of nationalism (Su and Shen, 2021; Woods et al., 2020), the escalation of geopolitical tensions among great powers (European Parliament, 2020), the global and local widening of wealth-inequality gaps (Bonacini et al., 2021; Stiglitz, 2020), societies’ increasing reliance on data and technology for controlling human behaviour (Taylor et al., 2020) and the alarming spread of fake news and conspiracy theories (Gruzd and Mai, 2020; Ioannidis, 2020; Stephens, 2020).
Amidst the trends that pre-existed and were amplified by the current pandemic is the fact that, while many of us live in increasingly globalised and data-driven societies, we still lack reliable global data on a variety of fundamental issues, including health and health care (Genicot, in this issue; Taylor, 2020). Local information providers collect data with different means and purposes, and a few concerned actors at the global level gather such information and repackage it into quantitative depictions of the globe with varying degrees of accuracy, objectivity and transparency. This global knowledge is questionable, insofar as it unavoidably suffers from a number of weaknesses surrounding the collection, interpretation and treatment of masses of largely non-homogeneous data (Linnet, 2020, p. 2; Mooney and Juhàsz, 2020; Shelton, 2020). Yet some forms of knowledge – namely those departing the furthest from verifiable data and attempting to quantify complex social phenomena – are more questionable than others.
This is typically the case for global indicators. There is much uncertainty about how indicators are defined; according to the most widely accepted definition, indicators are collections of
‘rank-ordered data that purport … to represent the past or projected performance of different units. The data are generated through a process that simplifies raw data about a complex social phenomenon. The data, in this simplified and processed form, are capable of being used to compare particular units of analysis (such as countries or institutions or corporations), synchronically or over time, and to evaluate their performance by reference to one or more standards.’ (Davis et al., 2012, p. 6)
‘Global’ indicators are indicators comparing units of analysis (most commonly, states) covering the entire globe (Nelken, 2018; Infantino, 2017, pp. 348–349; Siems and Nelken, 2017). While all quantitative representations of the world provide contestable forms of knowledge (Bruno et al., 2016; Rudinow Sætnan et al., 2010; Desrosières, 2000), global indicators are more prone than other artefacts to incur methodological fallacies and to suggest inferences from the supposedly known to the really unknown (Kelley and Simmons, 2020; Bhuta et al., 2018; Broome et al., 2018; Kelley, 2017; Merry, 2015; Broome and Quirk, 2015; Jerven, 2013; Rittich, 2010; Espeland and Sauder, 2007). Given such hazards and fallacies, it is important to attempt to identify which global quantitative initiatives in the current pandemic could qualify as indicators and should therefore be taken with utmost caution.
This is why the present contribution deals neither with on-the-ground efforts to collect data about confirmed cases, deaths and tests, nor with the analysis of such data by virologists, epidemiologists, health statisticians and the like, nor with global qualitative trackers of health and policy measures. Rather, the essay deals with global attempts to quantitatively measure the health situation and/or the reaction of the world's countries vis-à-vis the pandemic. In particular, it focuses on ten English-language worldwide initiatives that were launched between January and May 2020.Footnote 1 Some are more health-focused; others predominantly look at the law-and-policy reactions of countries. The more health-focused initiatives are the World Health Organisation (WHO)'s ‘COVID-2019 situation reports’, the European Centre for Disease Prevention and Control (ECDC)'s ‘COVID-19 situation update worldwide’, the Johns Hopkins University (JHU)'s ‘COVID-19 Dashboard’, Worldometer (WoM)'s statistics on the ‘Coronavirus Pandemic’, Our-World-in-Data (OWiD)'s statistics on the ‘Coronavirus Pandemic’ and the Institute for Health Metrics and Evaluation (IHME)'s ‘COVID-19 Projections’. The remainder are oriented more towards law and policy; this is the case for the University of Oxford's ‘COVID-19 Government Response Tracker’ (Ox-CGRT), the Deep Knowledge Group (DKG)'s ‘COVID-19 Rankings and Analytics’, the Centre for Civil and Political Rights (CCPR)'s ‘State of Emergency Data’ and Simon Porcher's ‘Rigidity of Governments’ Responses to COVID-19’ dataset and index.
For each of these initiatives, the paper examines who is making them, which data they collect and on the basis of what sources, what methodologies, format and design they adopt, and what uses they serve (section 2). The comparative analysis of the above initiatives will shed light on what these global measurements have and do not have in common (section 3). On the basis of such findings and of existing critical literature on global indicators, the paper will then attempt to identify the features of global indicators that are most commonly associated with hazards and fallacies, drawing a shortlist of criteria for caution (section 4). The review of the above-mentioned measurement initiatives according to the selected criteria (section 5) will help to assess the utility of global numerical initiatives as a basis for knowledge and action in the context of the pandemic (section 6). While conclusions must be considered tentative and open to revision, the aim of the essay is to invite a deeper understanding of global measurements of health and related policy measures and to suggest caution about their use.
2 Ten global measurements
Since the outbreak of the pandemic, a number of actors have started collecting and publishing data about the health situation and containment measures adopted in the world's countries. While these initiatives share the goal of providing a numerical overview of the pandemic and its effects, they differ in focus, means of data collection and presentation, and prospective effects.
International organisations stepped in first. Since January 2020, the WHO (Geneva, Switzerland) has published a daily worldwide ‘situation report’ as a PDF file, collecting data about confirmed cases of COVID-19 and deaths caused by it as received by the WHO from national authorities.Footnote 2 Since mid-August 2020, situation reports have been published weekly,Footnote 3 and daily updates are now summarised on a dashboard and in a table recapping the main information and ranking countries according to the total number of COVID-19 cases.Footnote 4 Around the end of January 2020, the ECDC's Epidemic Intelligence team, based in Stockholm (Sweden), started publishing daily tables and maps about confirmed cases and deaths on the basis of information gathered from the WHO, EU countries and the national authorities of other countries.Footnote 5 As in the case of the WHO, an interactive dashboard accompanied by a set of customisable graphs allowing comparisons between countries was added at a later stage.Footnote 6 The data-collection initiatives undertaken by the two organisations have many points in common. The numbers published by the WHO and the ECDC are updated daily. In both cases, the precise origin and the level of comparability of the reported data remain relatively unclear. Both organisations started releasing data in a very simple and non-stylised format, which made the data hard to extract and rework; only at a later stage did they adopt more intuitive and customisable setups.Footnote 7 Although neither the WHO nor the ECDC tracks third-party use of their data, their numbers are in fact relied upon by a number of other global data providers.
On 22 January 2020, Lauren Gardner, a civil and systems engineering professor at JHU in Baltimore (Maryland, US), launched the Johns Hopkins Coronavirus Resource Centre and its COVID-19 Dashboard (Dong et al., 2020).Footnote 8 The dashboard offers real-time information and is complemented with user-friendly and interactive maps. It reports confirmed cases and deaths worldwide on the basis of data published by global data providers (such as, initially, the WHO and the ECDC and, currently, Worldometer), national authorities and selected media sources.Footnote 9 Users can interact with the online dashboard, create their own maps and rankings, and download the website's entire dataset, which is freely available on GitHub.Footnote 10 The data are updated daily. Unlike the WHO and the ECDC, the website keeps track of the number of visits and of media uses of the data.Footnote 11
WoM is a website owned by Dadax, a digital media company also based in the US (exact location unknown: McLean et al., 2020).Footnote 12 Since 30 January 2020, WoM's team of developers have screened reports from national authorities and media newsFootnote 13 to provide daily updates with an online table of confirmed cases, deaths, recovered cases and tests performed in absolute terms, as well as cases, deaths and tests per 1 million population.Footnote 14 The table looks quite simple, and all its columns can easily be re-ordered from the highest to the lowest value. The webpage also offers some graphs and analysis of the data.Footnote 15 Similar to the JHU's dashboard, WoM's website keeps track of policy, scientific and media uses of its data.Footnote 16
Both the OWiD and the Ox-CGRT are based at the University of Oxford (UK). The former is a nonprofit initiative led since 2011 by Max Roser, an economics researcher at the University of Oxford.Footnote 17 In mid-February 2020, OWiD built a freely available dataset on COVID-19-related confirmed cases, deaths, tests and mortality risk, integrated with customisable graphs and maps. The dataset is compiled daily by the OWiD team on the basis of the WHO's data (initially) and now the ECDC's data.Footnote 18 OWiD keeps track of academic and media references to its data.Footnote 19 It also features a section on policy responses, which is sourced from the Ox-CGRT. The latter was launched in March 2020 by a team led by Thomas Hale, Professor in Global Public Policy at the University of Oxford's Blavatnik School of Government.Footnote 20 The Ox-CGRT consists of a set of individual country trackers and heat maps built on the basis of (initially seventeen, now nineteen) indicators of government responses to the pandemic, covering containment and closure policies, economic policies and health-system policies; from the eight indicators on containment and closure policies, the Ox-CGRT team extracts its Stringency Index (Hale et al., 2020).Footnote 21 Information about national policies is extracted from publicly available sources, such as news articles and government press releases, by over 100 Oxford students, staff and alumni collaborating pro bono on the project, and is then reworked by the team (Hale et al., 2020, p. 5).Footnote 22 The resulting data feed the indicators and index, as well as their accompanying customisable maps and charts, which are updated irregularly. All the website's content is freely available on GitHub.Footnote 23 The website also offers a list of media and scientific citations of the project.Footnote 24
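The aggregation step behind an index of this kind can be illustrated with a short sketch. The snippet below rescales each ordinal containment-and-closure indicator to a 0–100 sub-score and averages the sub-scores into a single stringency-style figure. It is a simplified reconstruction, not the Ox-CGRT's published formula: the real calculation also handles geographic-targeting flags and missing data, and the indicator names, maximum values and recorded values used here are assumptions for illustration only.

```python
# Minimal sketch of a stringency-style index: each policy indicator is an
# ordinal value (0 = no measure ... N = strictest), rescaled to 0-100 and
# then averaged. The indicator names and maximum values below are
# paraphrased assumptions, and the recorded values are invented.

# (indicator, recorded ordinal value, maximum ordinal value)
CONTAINMENT_INDICATORS = [
    ("school_closing",                 2, 3),
    ("workplace_closing",              1, 3),
    ("cancel_public_events",           2, 2),
    ("restrictions_on_gatherings",     3, 4),
    ("close_public_transport",         0, 2),
    ("stay_at_home_requirements",      1, 3),
    ("internal_movement_restrictions", 1, 2),
    ("international_travel_controls",  3, 4),
]

def sub_score(value: int, max_value: int) -> float:
    """Rescale an ordinal policy value to a 0-100 sub-score."""
    return 100.0 * value / max_value

def stringency_index(indicators) -> float:
    """Average the sub-scores into a single 0-100 index."""
    scores = [sub_score(value, max_value) for _, value, max_value in indicators]
    return sum(scores) / len(scores)

print(round(stringency_index(CONTAINMENT_INDICATORS), 1))  # -> 54.2
```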
On 26 March 2020, the IHME, a global health research centre based at the University of Washington (US), launched its set of COVID-19 projections.Footnote 25 The IHME team, led by Christopher Murray, a Professor in Health Metrics, aggregates daily worldwide data from national authorities, the WHO, the JHU, the OWiD and other sourcesFootnote 26 on deaths, confirmed and estimated cases, tests, social distancing and hospital-resource use (IHME, 2020).Footnote 27 The initiative not only aims to capture past and current national trends; it also provides subnational numbers and attempts to forecast future national and subnational trends. Data and projections are displayed on the website through interactive and customisable graphs and maps, which are freely downloadable. The website also collects a list of references for its projections in the news.Footnote 28
At the end of March 2020, the DKG, a Hong-Kong-based consortium of commercial and nonprofit organisations in the field of artificial intelligence and frontier medical technologies (which came to prominence in 2014 for appointing an algorithm called Vital to its board: Möslein, 2018; Hariri, 2017, p. 376),Footnote 29 announced its COVID-19 rankings and analytics. In April 2020, on the basis of data collected (among others) from the WHO, the ECDC, the JHU and the WoM, and treated with a rather obscure set of proprietary techniques, the DKG published online a collection of rankings and top-10s on countries’ (and their regions’) safety, risk, treatment efficiency and governments’ responses; the rankings and top-10s were copyrighted and could not be reworked by users.Footnote 30 A few months later, the DKG released a new set of indicators, the most important of which is the ‘COVID-19 Regional Safety Assessment’, which ranks more than 250 countries and regions along six dimensions: quarantine efficiency, government efficiency of risk management, monitoring and detection, health readiness, regional resilience and emergency preparedness.Footnote 31 The DKG's ‘COVID-19 Regional Safety Assessment’ is based on health and health-related data extracted from a variety of publicly available sources, including the WHO, Worldometer and the OWiD, and then treated according to a proprietary metric that is disclosed for five dimensions but kept secret for the dimension on ‘emergency preparedness’ (DKG, 2020, pp. 3, 8–33). Notwithstanding such partial disclosure, the methodology underlying the ‘COVID-19 Regional Safety Assessment’ remains quite unclear (Lumley, 2020). Results are not updated; they are neither reworkable nor customisable by users. Nevertheless, the DKG's quantitative products have enjoyed considerable media visibility; the website carefully records uses and citations of the DKG's rankings and indicators by the media.Footnote 32
On 1 April 2020, the CCPR, a human rights non-governmental organisation (NGO) based in Geneva (Switzerland),Footnote 33 released its ‘State of Emergency’ dataset.Footnote 34 The CCPR team, mostly comprising lawyers, prepared the dataset relying upon information on governments’ legislation and policies gathered from national authorities and the media, and classified the world's countries according to eight variables.Footnote 35 The variables ask whether a country has declared a state of emergency and notified the UN about the declaration; has closed schools, restaurants and places of worship; has prohibited public gatherings; has blocked borders; and has suspended asylum claims. The variables are then combined to produce the ‘State of Emergency Index’, which is displayed on a non-interactive coloured map.Footnote 36 While the original data and their sources are not published, the dataset containing the team's answer for each country and variable is accessible and downloadable as a Google spreadsheet.Footnote 37 Since its initial release, the ‘State of Emergency Index’ has never been updated. The CCPR does not keep track of third-party use of its dataset and map.
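How exactly the eight variables are combined into the index is not spelled out. The sketch below shows one plausible reading, a simple additive count of ‘yes’ answers, purely for illustration: the variable names are paraphrased from the description above, and the additive rule is an assumption, not the CCPR's documented method.

```python
# Hypothetical reconstruction of an additive "state of emergency" score
# from eight yes/no variables; the CCPR's actual combination rule is not
# published, so this aggregation is an assumption for illustration only.

VARIABLES = [
    "declared_state_of_emergency",
    "notified_un_of_declaration",
    "closed_schools",
    "closed_restaurants",
    "closed_places_of_worship",
    "prohibited_public_gatherings",
    "blocked_borders",
    "suspended_asylum_claims",
]

def emergency_score(answers: dict) -> int:
    """Count how many of the eight restrictive measures a country adopted."""
    return sum(1 for v in VARIABLES if answers.get(v, False))

# Invented example country record: the first five measures adopted.
example = {v: True for v in VARIABLES[:5]}
print(emergency_score(example))  # -> 5
```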
On 7 May 2020, Simon Porcher, a Professor of Public Management at the IAE Paris-Sorbonne Business School,Footnote 38 published two papers presenting his ‘Rigidity of Governments’ Responses to COVID-19’ index (Porcher, 2020a; 2020b). On the basis of cross-country information reported by a few international organisations, including the ECDC, Porcher built a customisable table listing countries’ actions with regard to ten health-related policies: bans on mass gatherings, bans on sporting and recreational events, restaurant and bar closures, domestic lockdowns, school closures, travel restrictions, declarations of states of emergency, public testing, enhanced surveillance and the postponement of elections. The data were then reworked to produce the final index (Porcher, 2020b, pp. 3–4, 6–7). The resulting index scores are displayed on coloured maps available in the two papers (Porcher, 2020a, pp. 15–16; 2020b, pp. 12–14); all the underlying data are freely available for further use on GitHub.Footnote 39 Like the CCPR, Porcher does not update the index and does not keep track of third-party uses and citations of his data.
3 Commonalities and differences across global measurements
The initiatives just surveyed have many features in common as well as notable differences. Table 1 summarises their main points of convergence and divergence.
At the forefront of the production of global quantitative data are international organisations with health-related competency (see Table 1, column on ‘Identity of the provider’). This is not surprising, since the WHO is mandated by its Constitution ‘to establish and maintain … epidemiological and statistical services’ (Art. 2, letter (f), WHO Constitution (1946), as amended), and the ECDC's institutional mission is to ‘search for, collect, collate, evaluate and disseminate relevant scientific and technical data’ (Art. 3(2), letter (a), Regulation (EC) No. 851/2004 of the European Parliament and of the Council of 21 April 2004 establishing a European centre for disease prevention and control). But global data providers also include universities and scholars (JHU, OWiD, Ox-CGRT, IHME, Porcher), non-governmental organisations (CCPR) and private companies (Worldometer, DKG).
Almost all the actors involved in the production of global numbers about the pandemic and its effects – the DKG being the notable exception – are located in the Global North (see Table 1, column on ‘Seat of the provider’; see also Harrington, in this issue). As is common for expert-based tools of global governance (Kennedy, 2016; Jerven, 2013), the initiatives surveyed are to some extent at a distance from much of what they purport to measure.
The range of expertise represented in the surveyed initiatives is vast, including health, epidemiology, engineering, data analytics, economics, political science, law and public management (see Table 1, column on ‘Expertise of the provider’). The majority of data makers provide global numbers in areas of their expertise. However, solid knowledge of the subject matter does not seem to be a necessary requirement for launching a global data initiative. The JHU's dashboard and the OWiD's dataset, for instance, are led by scholars with backgrounds, respectively, in engineering and economics; the WoM is run by a company specialised in data analytics. For the production of global data, experience in data management might suffice.
All the initiatives surveyed were launched within a few days or months of the outbreak of the pandemic. This testifies that entry into the market of global data is nowadays simple for a wide range of potential data providers, no matter how little expertise they possess on the substance of what is measured and how little authority they enjoy in the field. The lack of substantial barriers to entry into the market of global data can be explained not only by the growing demand for such data, but also by the derivative nature of the information underlying global initiatives and by processes of learning and mimicry between them. This clearly applies to composite quantitative analysis – that is, to initiatives aggregating data collected by others (Rovan, 2011). Once somebody starts publishing their own global data, it becomes quite easy for others to step in and rework the original data in a new format, as shown by the fate of the WHO's and the ECDC's numbers (see Table 1, columns on ‘Data sources’ and ‘Uses by other providers’). Since the outbreak of the pandemic, the WHO and the ECDC (the latter relying upon the former) have striven to collect and harmonise data provided by national authorities. The WHO's and the ECDC's data were in turn relied upon by the JHU, the OWiD, the IHME, the DKG and Porcher, who combined them with additional sources and repackaged them into their projects. Conversely, the WHO and the ECDC learnt from their followers how to make their results attractive to a wider range of users. While, at the very beginning, both the WHO and the ECDC published their data in a simple format that made it difficult to extract the information and directly compare countries with one another, in the summer of 2020, the two institutions adopted more colourful and intuitive dashboards, introduced tables with rankings and rendered the extraction of their data much easier than it used to be.Footnote 40 But entry into the market of global data is now relatively easy even for those who rely upon information gathered through original searches, such as the WoM, the authors of the Ox-CGRT, the CCPR and (to some extent) Porcher (see Table 1, columns on ‘Data sources’ and ‘Uses by other providers’). The progressive digitalisation of public administration and journalism, the wider availability of documents and news in English (either directly or through machine-translation services) and the accessibility of algorithms and software for the automatic scanning of websites, tweets and blogs make data collection and treatment easier and cheaper than ever. The result is an ever-growing ecology of global data initiatives. Developments after the period herein considered show the increasing multiplication of such initiatives. For instance, in June 2020, a group of political science scholars published the ‘CoronaNet COVID-19 Government Response Event Dataset’ – a database on government responses to the coronavirus built through both the manual collection of national sources and machine-reading software (Cheng et al., 2020);Footnote 41 on 1 July 2020, an open-source software engineer based in Poland, Michal Biesiad, launched his Covid-19 World Data (mostly sourced from Wikipedia);Footnote 42 and in August 2020, Google started to mash up data from the ECDC, the Ox-CGRT, OWiD, Wikipedia, the New York Times and the World Bank (among others) to produce its own set of daily time-series data related to COVID-19.Footnote 43
As the above makes clear, global quantitative initiatives differ as to the data gathered (ranging from information on cases and deaths to more layered judgments about countries’ safety and governments’ responses) and the methods for gathering them (which might include manual or automated aggregation of sources, original searches or both). Yet, besides their technical differences, global data producers share the confidence that social problems are best tackled through quantitative means and the aim of contributing to problem-solving by collecting, generating and disseminating objective and impartial knowledge – a confidence and an aim that are typically associated with technocratic forms of governance (Nelken, in this issue; Esmark, 2020; Ladeur, 2004; Fisher, 1990).
With this confidence and aim comes another feature that the surveyed artefacts share as global exercises of technocratic governance. Despite their reliance on purportedly neutral and objective information and their purpose of serving as knowledge-based tools, all the surveyed initiatives suffer from countless problems affecting the accuracy and comparability of their numbers. Even data initiatives that focus on the health situation on the basis of national authorities’ reports, such as those led by the WHO and the ECDC, have to deal with deficiencies in the availability of local data and wide discrepancies in gathering practices and in the interpretation of the data at the local level. As the data are combined with other sources and autonomously reworked for other purposes (such as calculating countries’ mortality risks in the OWiD's graphs, forecasting trends in the IHME's projections and measuring countries’ safety in the DKG's assessments), issues of oversimplification, decontextualisation and misinterpretation of the original data multiply, and are often aggravated by the lack of transparency about the additional data sources and the method of aggregation. For initiatives focusing on countries’ law-and-policy responses that rely on autonomous data searches made by the project teams (such as the Ox-CGRT, the CCPR's dataset and Porcher's index), further difficulties arise from data providers’ discretionary and often unverifiable choices about the ways in which data are collected, interpreted and treated. The complexity of such choices also explains why quantitative representations of law-and-policy measures are typically updated at intervals longer than those applied to health measurements, or not updated at all (see Table 1, column on ‘Periodicity’). The problems affecting data collection, interpretation, aggregation and visualisation are implicitly confirmed by the caveats and disclaimers with which all the initiatives surveyed accompany their data. The WHO, the JHU, the WoM, the OWiD and the IHME all disclaim any warranty with respect to their data, including accuracy;Footnote 44 the ECDC advises users ‘to use all data with caution and awareness of their limitations’;Footnote 45 the Ox-CGRT stresses that ‘the indices should not be interpreted as a measure of the appropriateness or effectiveness of a government's response’ (Hale et al., 2020, p. 9); the DKG emphasises that the information it provides ‘is intended for indicative and informational purposes only’ (DKG, 2020, p. 36); and the CCPR and Porcher make it crystal clear that their datasets might contain ‘errors’.Footnote 46
In spite of these disclaimers and caveats, global numbers are published to serve as knowledge-based quantitative tools. All the global numerical initiatives herein examined make their results freely available online through dedicated websites in a user-friendly format. Results might be conveyed through tables, dashboards, graphs, maps, rankings and datasets, either alone or in combination (see Table 1, column on ‘Output form’). Results might be accompanied by more or less detailed qualitative information and methodological explanations, and their reproduction might be allowed to different extents. Access to the original data sources might be limited or open. Data visualisation might be more or less customisable, allowing or disallowing users to focus on specific countries and/or specific periods, to reorder the columns of tables and to create their own graphs. These choices, albeit technical, are important insofar as they affect the clarity, comprehensibility, usability and attractiveness of the results conveyed. But, whatever the method chosen for the visualisation and use of the data, all the initiatives surveyed aim to provide a variety of audiences (including peer data providers, the media and the general public) with allegedly technical and objective knowledge about the state of the pandemic and its consequences (Nelken, in this issue; Siems, in this issue). It is knowledge limited by insurmountable flaws and limitations. Yet it is also easy-to-get knowledge, vested with the legitimacy of (apparent) expertise and numbers – a form of knowledge that is particularly prone to informing people's decisions and actions, to nurturing untested explanations and correlations, and to providing a pseudo-scientific basis for legitimation for a variety of foreseen and unforeseen uses (Porter, 1995).
Many of the features just outlined of pandemic-related quantitative global knowledge – the dominance of the Global North in the groups working in the field, the faith in and emphasis on quantification as a form of legitimation by expertise, the self-reinforcing linkages between different initiatives, the methodological difficulties and intuitive layouts they share with one another, the informative power they enjoy, their potential for giving rise to more or less intended consequences – bear a strong resemblance to the features of global indicators, to which section 4 is devoted.
4 Global indicators’ features and dangerous promises
The standard definition of an indicator mentioned above (section 1) has the merit of capturing many typical features of global indicators. Indicators simplify complex information and express value judgments with numbers, enabling the comparison of the measured units through time and/or space. Indicators also have the inherent tendency to rank the performance of the measured units in light of a more or less explicit standard. Global indicators exert their informative/judgmental function over the entire world.
Other typical features of global indicators might be added to the above list. For example, global indicators are often produced by teams whose expertise is not clearly related to the substance of what is measured and who have only indirect access to the contexts and phenomena being measured (Merry, 2015, pp. 27–35; Davis et al., 2012, pp. 19–21). More often than not, indicators’ results are made freely available on the web and are packaged in a fancy, catchy and easy-to-use format, such as scores and rankings (Restrepo Amariles, 2017, pp. 472–474; Merry, 2015, pp. 13, 206–210). Indicator makers tend to repeat their measurements at short intervals and to keep track of the uses, citations and achievements of their creations, since both temporal recursivity and evidence of success are key to asserting the legitimacy of the measurement and unlocking its potential for nudging change (Infantino, 2019, pp. 72–73, 84–85, 222; Best, 2017, pp. 174–175; Cooley, 2015, p. 5).
All of the above makes it clear why indicators have in recent decades become an important technology for global governance (Kelley and Simmons, 2020; Bhuta et al., 2018; Broome and Quirk, 2015; Cooley, 2015; Merry, 2015; Davis et al., 2012). Global indicators count, classify and map geographically dispersed objects, transforming complex information into numbers and facilitating inter-temporal and inter-unit comparisons. As such, they provide apparently objective and impartial knowledge of social phenomena that might be helpful in highlighting problems and patterns for corrective action, reducing policy-makers’ burden of processing information and carrying out diagnostic analysis, and streamlining decision-making processes. Further, by identifying quantifiable targets, monitoring progress and singling out best performers, indicators might easily promote change in the behaviour of their users, orienting their agendas, priorities and lines of action. The potential of indicators for promoting change is strengthened by their capacity to capture media attention, mobilise disparate audiences and promote naming-and-shaming mechanisms against those who are measured (Espeland and Sauder, 2007).
All that glitters, however, is not gold. As a top-down global technology, indicators typically rely on oversimplified, heterogeneous and decontextualised data, and are affected by the biases of their designers, who might underestimate or neglect any feature that does not fit their assumptions. Additional uncertainty stems from the (limited or non-existent transparency of the) many discretionary choices required in making a global indicator, such as the selection of data providers and the gap-filling techniques (Restrepo Amariles and McLachlan, 2018; Merry, 2015, pp. 27–43, 212–216; Broome and Quirk, 2015; Jerven, 2013; Rittich, 2010). These weaknesses are important because, once information is translated into global numbers, numbers might easily enter public debates, replace other forms of judgment on complex social realities and validate specific visions about problems, their causes and their possible solutions. Further, thanks to the repetition of the measurements, indicators might easily become performative, pressuring targets to modify their behaviour and to conform to the one-size-fits-all standard expressly or impliedly embraced by indicators themselves. Bypassing political processes and conventional procedures, indicators might thus set up global standards of behaviour and spread them more effectively than traditional legal tools, in spite of their lack of authority, legitimacy and expertise in the concerned field (Broome et al., 2018; Kelley, 2017; Siems and Nelken, 2017, p. 443; Fioramonti, 2014; Espeland and Sauder, 2007). This might happen regardless of whether an indicator has an explicitly legal focus/purpose or is formally embedded in an official legal framework. The regulatory capacity of indicators is quite independent from their legal scope or status, and might be strong both in initiatives that are connected to or mandated by the law (Supiot, 2017; von Bogdandy and Goldmann, 2012) and in initiatives that are not. In light of such regulatory capacity, some commentators have suggested that indicators should be considered as a form of transnational ‘soft law’ (Merry, 2015, p. 11), treated as ‘unconventional transnational norms’ (Restrepo Amariles, 2015, p. 17) or equated to ‘regulatory devices’ (Cassese and Casini, 2012, p. 466). One may or may not agree with such proposals, but it is undeniable that the peculiar mode of intervention of indicators favours the spontaneous internalisation, by indicators’ targets and users, of the standards they propose. Equally well known is that this mode of intervention often spurs rank-seeking behaviour and gaming strategies by the targets of indicators – that is, the deployment of techniques for improving results and manipulating data in ways that are unconnected to, or even undermine, the motivation underlying the indicator (Espeland and Sauder, 2007).
The hazards and fallacies of global indicators just described are well documented. This is why it is important, in the context of global measurements of the current pandemic and its effects, to identify which features of global indicators make them more exposed to such hazards and fallacies, and then to check whether and to what extent these features are present in pandemic-related global measurements. The more of these features an initiative displays, the higher the risk that it becomes performative and produces unintended consequences, changing the world it aims to represent. ‘When a measure becomes a target, it ceases to be a good measure’ (Strathern, 1997, p. 4). Quantitative initiatives displaying many such features should be treated with the greatest caution.
Relying upon the literature on global indicators, the following elements might be said to raise a presumption that an initiative has indicator-like features, suggesting that its findings be handled sceptically. The first three elements denote a possible fallacy in the descriptive function of an indicator; the last five refer to the hazards associated with indicators’ prescriptive potential. They are as follows:
1 The high level of simplification to which data are subjected (Merry, 2015, pp. 27–43, 212–216; Rittich, 2010): The harder it is to quantify what is being measured (think, for instance, of notions such as ‘freedom’ or ‘security’), the more subjective, and consequently less reliable, its measure becomes.
2 The distance between providers and data sources (Jerven, 2013; Davis et al., 2012, pp. 8–9): Data sourcing can be more or less mediated. The less direct the data sourcing, the more likely it is that the data collected are misreported, misinterpreted and decontextualised.
3 The distance between the expertise of providers and what is measured (Merry, 2015, pp. 27–35; Davis et al., 2012, pp. 19–21): The further the expertise of data producers from the field concerned, the more likely it is that they misunderstand data or do not handle them correctly.
4 The availability of a ranking (Restrepo Amariles, 2017, pp. 472–474; Merry, 2015, pp. 13, 206–210; Jerven, 2013, p. 89): Indicators tend to explicitly or implicitly rank their measured units to allow easy inferences about who is doing better and who is doing worse. The presence of a ranking or, alternatively, the ease with which users can create one, is symptomatic of the measurement's intent and/or capability of stimulating the measured units to (allegedly) engage in a race to the top.
5 The user-friendly packaging of the information (Infantino, 2019, pp. 208–211; Restrepo Amariles, 2017, pp. 472–474; Merry, 2015, pp. 13, 206–210): The performative power of a global measurement tool depends upon its success. Indicator makers often lack both expertise and authority; they have to build their prestige otherwise in the crowded market of global data. Like mating birds, indicator makers make their output visible chiefly by adopting glamorous and easy-to-grasp formats devised to attract a broad range of users. Simple formats impoverish data accuracy but enhance both the accessibility of the measurement and its potential to spur behavioural change.
6 The customisability of such information (Infantino, 2019, p. 211): Another recurrent user-friendly aspect of global indicators is the customisability of data: successful indicators often allow their users to build their own data charts, graphs and maps. The more interactive and playable the data, the wider their diffusion and the more likely the indicator is to spread its power.
7 The presence of a system tracing the uses of such information (Infantino, 2019, pp. 216–230): Just as usability is crucial to the success of indicators, so is evidence of their use by third parties. Since indicator makers often lack both the expertise and the authority to produce their data, confirmation by third parties allows them to cement their influence.
8 The periodicity of the measurement (Infantino, 2019, pp. 72–73, 84–85, 222; Best, 2017, pp. 174–175; Cooley, 2015, p. 5): Much of the prescriptive/performative power of indicators comes from the repetitiveness and periodicity of the measurements they provide. Repetition works as a carrot-and-stick incentive for those who are measured to improve their performance at every round. By contrast, measurements that are not repeated at regular intervals are unlikely to nudge behavioural change.
Needless to say, the above list should be taken tentatively, insofar as it is open to challenge and refinement. One might, for instance, suppose that the identity of the data makers matters, in the sense that actors other than international organisations more often than not produce indicators. The reason for not including the identity of data makers is that experience from outside the realm of pandemic-related data suggests that the identity of the data provider is not conclusive as to the kind of effort it might engage in (Infantino, 2019, pp. 67–68). Similar considerations explain why other criteria – such as the source of funding, transparency about the methodology and the identity of contributors, openness to communications and feedback from the public, and the legal framework adopted for data sharing, ownership, licensing and access – were not taken into account. Yet nothing prevents dropping some of the criteria mentioned above or adding new ones. The main aim of this exercise is less to devise the ultimate test for spotting indicators than to stress the importance of devising such tests.
5 Ten global measurements under review
Reviewing the ten global measurements selected against the criteria just mentioned produces results that are apparently counter-intuitive. One might reasonably assume that there is a neat divide between health-focused initiatives and law-and-policy-oriented ones, and that the former, being grounded in data, are more easily quantifiable and somehow more reliable and less prone to subjective judgment than the latter. However, comparative analysis of the aforementioned initiatives shows that the boundary between descriptive numerical representations and more prescriptive indicators cuts across the health/policy divide and is much less neat than one might expect. As the review below shows, the boundary is better understood as a continuum between more descriptive/objective depictions and more prescriptive/subjective models.
What follows is a review of the ten global measurements of the pandemic and its effects against the criteria identified in section 4.
1 The high level of simplification to which data are subjected: Counting confirmed cases and deaths is one thing; assessing the effectiveness of governments’ responses to the pandemic is quite another. Although both kinds of measurement are open to substantial variability and interpretation, there is less discretion in aggregating national data about deaths than in assessing governments’ measures. Of the initiatives analysed, those led by the WHO, the ECDC, the JHU, the WoM, the OWiD and the IHME collect and repackage numbers gathered by others about cases, deaths and tests. By contrast, the Ox-CGRT, the DKG's rankings, and the CCPR's and Porcher's datasets more clearly depart from background data and provide numerical assessments of harder-to-quantify notions, such as governments’ responses and efficiency in treatment.
2 The distance between providers and data sources: All COVID-19 global data initiatives rely upon indirect sources, be they national authorities, news or other data providers. But data sourcing can involve more or fewer steps. Of the initiatives herein surveyed, five rely upon second-hand data (the WHO and the ECDC gather data collected by national authorities; the WoM directly collects data from national authorities and media news; the Ox-CGRT and the CCPR teams extract information from the media). The remaining five initiatives are based on third- or fourth-hand data, resting on the shoulders of others; their sourcing chains are sketched in code after this list. The JHU relies upon the WoM; the OWiD re-elaborates the ECDC's data; the IHME puts together the WHO's, the JHU's and the OWiD's results; the DKG amalgamates the WHO's, the JHU's and the WoM's data; and Porcher combines information from several international organisations, including the ECDC.
3 The distance between the expertise of providers and what is measured: Of the health-focused initiatives herein examined, the personnel of the WHO, the ECDC and the IHME are specifically trained to deal with health and epidemic data, while the expertise of the other providers stems from fields as diverse as engineering (JHU), analytics development (WoM) and economics (OWiD). By contrast, the expertise of the providers of policy-focused global data aligns more closely with the fields surveyed, since the competency of their indicator makers stems from political science (Ox-CGRT), health-care data analytics (DKG), law (CCPR) and public management (Porcher).
4 The availability of a ranking: Any numerical representation of the globe might be used for ranking purposes, but an initiative might do its best to avoid suggesting that performances can and should be compared with one another. For instance, the initial approach of the WHO and the ECDC was to publish their numbers in a way that did not allow an easy comparison of the countries surveyed. This approach was later abandoned, although the WHO in particular still appears reluctant to favour country-by-country comparisons. Other initiatives, by contrast, explicitly adopt a ranking. This is the case for the Ox-CGRT, the DKG's rankings and regional assessment, and the CCPR's and Porcher's indices. In still other cases, rankings might implicitly emerge from the ways in which results are or can be displayed, such as ordered tables and heat maps (JHU), reworkable tables (WoM) and interactive charts and maps (OWiD, IHME).
5 The user-friendly packaging of the information: A widely employed catchy format is the display of data in coloured maps. Maps are present in the WHO's, the ECDC's and the JHU's dashboards; in the OWiD's dataset; in the Ox-CGRT; in the IHME's projections; and in the CCPR's and Porcher's indices. Other user-friendly formats include interactive charts (the OWiD's dataset, the Ox-CGRT and the IHME's projections) and top-10s (the DKG's rankings). The more coloured and straightforward the layout, the more likely it is that the data collection is a nudging indicator. It should, however, be noted that, until the summer of 2020, both the WHO's and the ECDC's numbers were presented in a very simple and unimaginative manner; only later were fancier layouts adopted.
6 The customisability of such information: While data collected by the WHO, the JHU, the DKG, the CCPR and Porcher cannot be customised, the visualisation of data offered by the ECDC, the WoM, the OWiD, the Ox-CGRT and the IHME can be changed at the user's discretion, allowing users to make their own statistics for the countries and periods selected.
7 The presence of a system tracing the uses of such information: The WHO, the ECDC, the CCPR and Porcher do not monitor how their data are used. By contrast, the websites of the JHU, the WoM, the OWiD, the Ox-CGRT, the IHME and the DKG all keep track of their scientific and media uses.
8 The periodicity of the measurement: Data from the WHO, the ECDC, the JHU, the WoM, the OWiD and the IHME are updated with new entries daily. The Ox-CGRT and the DKG's Regional Assessment are updated irregularly. By contrast, the CCPR's and Porcher's indices are one-time exercises focusing on a given time period and carrying no promise of repetition.
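As announced under criterion 2, the sourcing chains described in this section can be made concrete by modelling them as a small dependency graph and counting the steps back to primary sources (national authorities and media reports). The mapping below paraphrases only the chains named in the text and is a simplification: most providers draw on several sources at once, so this is an illustrative sketch rather than an exhaustive map.

```python
# Sourcing chains of the ten initiatives, paraphrased from the text above.
# Each provider maps to the upstream sources named in this section;
# "primary" stands for national authorities and media reports. This is a
# simplification: several providers combine more sources than shown here.

SOURCES = {
    "WHO":     ["primary"],
    "ECDC":    ["primary", "WHO"],
    "WoM":     ["primary"],
    "Ox-CGRT": ["primary"],
    "CCPR":    ["primary"],
    "JHU":     ["WoM"],
    "OWiD":    ["ECDC"],
    "IHME":    ["WHO", "JHU", "OWiD"],
    "DKG":     ["WHO", "JHU", "WoM"],
    "Porcher": ["ECDC"],
}

def distance(node: str) -> int:
    """Shortest number of steps from a node back to primary data:
    1 = second-hand, 2 = third-hand, and so on."""
    if node == "primary":
        return 0
    return 1 + min(distance(source) for source in SOURCES[node])

for provider in SOURCES:
    print(f"{provider}: {distance(provider)} step(s) from primary data")
```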
Table 2 summarises the above results but, needless to say, it should be taken tentatively. As said above (section 4), the list of criteria identified is open to question, as are the results of the review carried out in this section.
One might nevertheless draw from Table 2 some answers about which initiatives display indicator-like features. The signals of indicator-like features are: ‘high’ under ‘Level of data simplification’ (LDS); ‘no’ under ‘Expertise matching’ (EM); ‘far’ under ‘Distance from sources’ (DS); ‘yes’ under ‘Ranking or rankability’ (RoR); ‘yes’ under ‘Fancy format’ (FF); ‘yes’ under ‘Customisability’ (C); ‘yes’ under ‘Success tracking’ (ST); and ‘yes’ under ‘Repetitiveness’ (R). The recurrence of many such signals suggests that a global data initiative behaves like an indicator and that its global measurements should therefore be taken with the greatest of caution. The final outcome of the survey is shown in Table 3, and the counting rule is made explicit in the sketch below.
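For concreteness, the counting rule can be written out in a few lines of Python. The profiles below cover only the two endpoints whose full coding can be reconstructed from the discussion in this section (the WHO, as it stood after its summer-2020 redesign, and the OWiD); the flag values are my reading of the prose, not a reproduction of Table 2.

```python
# Tally of indicator-like features per initiative, using the signal coding
# given in the text: LDS (high data simplification), EM_no (no expertise
# match), DS_far (far from sources), RoR (ranking or rankability),
# FF (fancy format), C (customisability), ST (success tracking) and
# R (repetitiveness). Profiles reconstructed from the prose; illustrative only.

CRITERIA = ["LDS", "EM_no", "DS_far", "RoR", "FF", "C", "ST", "R"]

PROFILES = {
    "WHO":  {"LDS": 0, "EM_no": 0, "DS_far": 0, "RoR": 0,
             "FF": 1, "C": 0, "ST": 0, "R": 1},
    "OWiD": {"LDS": 0, "EM_no": 1, "DS_far": 1, "RoR": 1,
             "FF": 1, "C": 1, "ST": 1, "R": 1},
}

def indicator_likeness(profile: dict) -> int:
    """Count how many of the eight indicator-like signals are present."""
    return sum(profile[criterion] for criterion in CRITERIA)

for name, profile in PROFILES.items():
    print(f"{name}: {indicator_likeness(profile)}/8 indicator-like features")
# -> WHO: 2/8 and OWiD: 7/8, matching the endpoints reported in Table 3.
```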
In Table 3, no global measurement of the pandemic and its effects scores either 0 or 8. Rather, all the initiatives sit on a nuanced continuum running from the least to the most dangerous exercises. At the bottom of the scale lies the WHO (two indicator-like features out of eight); at the other end is the OWiD, with seven out of eight indicator-like features. In the middle are all the others.
The final picture is quite astonishing. One might have expected health-focused initiatives to be somewhat more objective and less performance-oriented than law-and-policy-related ones. By contrast, among the initiatives for which more caution is suggested in Table 3 are many global measurements (such as those provided by the JHU, the WoM, the OWiD and the IHME) offering apparently basic data about deaths, infections and tests. Global quantitative representations of countries’ law-and-policy measures score comparatively low; this is especially the case for the CCPR and Porcher. This outcome might be explained by considering that the greater difficulties surrounding measurements of law and policy (see section 3) may make the providers of such information less able to compete with those offering more basic data.
6 Conclusions
The review just completed demonstrates that pandemic-related global measurements might be exposed to hazards and fallacies that suggest caution in using them. As repeatedly acknowledged, the outcome of the survey might be discussed and challenged from many points of view; one might agree or disagree with the criteria identified in section 4 and with the analysis in section 5. Yet, notwithstanding any possible disagreement and refinement, the above review shows that appearances might be deceiving and that, in the current emergency, as always, it is fundamental to distinguish between different forms of global numerical representation. It is one thing to attempt to visualise situations and trends. It is quite another to produce numbers that might orient choices and spur behavioural change.
It is true that numbers in themselves do not impinge on the social world. When they do so, it is because of a variety of factors that include not only the more or less conscious design and advertisement choices of their drafters, but also – and perhaps especially – the internalisation and appropriation of these numbers by wider communities of targets and users who cite, use and rely upon them. It is thanks to such processes of internalisation and appropriation that global numbers might solidify their informative and performative power.
Luckily, in the context of the pandemic, much of local decision-making is based on local rather than global data. Yet, this does not exclude the possibility that global numerical representations of the health situation and related national measures might have strings attached. Global measurements of the pandemic and its effects might form the basis for action by their targets or by third-party users. They might give rise to or support easy judgments about which country is safer or reacting ‘better’ to the pandemic. They might suggest more or less shaky correlations between given practices and results. They might provide an apparently scientific basis for further study, research, funding, aid and interventions in spite of the shortcomings affecting their reliability. As has been noted,
‘with the millions upon millions of individuals looking at maps, charts and other visualisations of the pandemic on a daily basis, the potential for such a massive amount of information to mislead is significant, even in the absence of an intentional effort to distract or discount.’ (Shelton, 2020, p. 4; see also Mooney and Juhàsz, 2020; Usher, 2020)
It is exactly because global quantitative measurements matter that it is important to be aware of their limitations. Users of global numbers, beware!
Conflicts of Interest
None
Acknowledgements
I wish to express my gratitude to David Nelken, Mathias Siems and to the anonymous reviewers for their valuable feedback, as well as to Maitreyi Misra for the language editing. All mistakes are my own.