This conversation addresses the impact of artificial intelligence and sustainability on corporate governance. The speakers explore how technological innovation and sustainability concerns will change the way companies and financial institutions are managed, controlled and regulated. By way of background, the discussion considers the past and recent history of crises, including financial crises and the more recent COVID-19 pandemic. Particular attention is given to the field of auditing, investigating the changing role of internal and external audits. This includes a discussion of the role of regulatory authorities and how their practices will be affected by technological change. Further attention is given to artificial intelligence in the context of businesses and company law. As regards digital transformation, five issues are considered: data, decentralisation, diversification, democratisation and disruption.
This conversation centres on innovation in the financial services sector and the related regulatory supervision. Three ‘Techs’ are especially relevant: FinTech, RegTech and SupTech. ‘FinTech’ combines the words ‘financial’ and ‘technology’ and refers to technological innovation in the delivery of financial services and products. ‘RegTech’ joins ‘regulatory’ and ‘technology’ and describes the use of technology by businesses to manage and comply with regulatory requirements. ‘SupTech’, finally, unites the words ‘supervisory’ and ‘technology’ and refers to the use of technology by supervisory authorities, such as financial services authorities, to perform their functions. Particular approaches presented in this session include regulatory sandboxes to promote innovative technology in the financial sector, automated data analysis, the collection and analysis of granular data, digital forensics and internet monitoring systems. The speakers also address collaboration between financial institutions and supervisory authorities, for example, in the creation of data collection formats and data sharing.
The European Food Safety Authority (EFSA) provides independent scientific advice to EU risk managers on a wide range of food safety issues and communicates on existing and emerging risks in the food chain. This advice helps to protect consumers, animals and the environment. Data are essential to EFSA’s scientific assessments. EFSA collects data from various sources including scientific literature, biological and chemical monitoring programmes, as well as food consumption and composition databases. EFSA also assesses data from authorisation dossiers for regulated products submitted by industry. To continue delivering the highest value for society, EFSA keeps abreast of new scientific, technological and societal developments. EFSA also engages in partnerships as an essential means to address the growing complexity in science and society, and to better connect and integrate knowledge, data and expertise across sectors. This paper provides insights into EFSA’s data-related activities and future perspectives in the following key areas of EFSA’s 2027 strategy: one substance-one assessment, combined exposure to multiple chemicals, environmental risk assessment, new approach methodologies, antimicrobial resistance and risk-benefit assessment. EFSA’s initiatives to integrate societal insights in its risk communication are also described.
In his 2019 essay, Arthur Kleinman laments that medicine has become ever more competent at managing illness, yet caring for those who are ill is increasingly out of practice. He opines that the language of ‘the soul’ is helpful to those practicing medicine, as it provides an important counterbalance to medicine’s technical rationality, which avoids the existential and spiritual domains of human life. His accusation that medicine has become soulless merits consideration, yet we believe his is the wrong description of contemporary medicine. Where medicine is disciplined by technological and informational rationalities that risk coercing attention away from corporealities and toward an impersonal, digital order, the resulting practices expose medicine to becoming not soulless but excarnated. Here we engage Kleinman in conversation with Franco Berardi, Charles Taylor, and others to ask: Have we left behind the body for senseless purposes? Perhaps medicine is not proving itself to be soulless, but rather senseless, bodyless – the any-occupation of excarnated souls. If so, the dissension of excarnation and the recovery of touching purpose seem to us to be apparent needs within the contemporary and increasingly digitally managed and informationally ordered medical milieu.
This book explores the intersection of data sonification (the systematic translation of data into sound) and musical composition. Section 1 engages with existing discourse and offers an original model (the sonification continuum) which provides perspectives on the practice of sonification for composers, science communicators and those interested in this rapidly emerging field. Section 2 engages with the sonification process itself, exploring techniques, models of translation, data fidelity, analogic and symbolic data mapping, temporality and the listener experience. In Section 3 these concepts and techniques are all made concrete in the context of a selection of the author's projects (2004–2023). Finally, some reflections are offered on how sonification as a practice might enrich composition, communication, collaboration, and a sense of connection.
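As an illustration only (this sketch is not drawn from the book, and every name in it is hypothetical), an analogic, parameter-mapping approach can be reduced to rescaling a numeric series onto a pitch range so that rising values are heard as rising pitch:

```python
# Minimal sketch of analogic (parameter-mapping) sonification: a numeric
# series is rescaled linearly onto a MIDI pitch range. All names and
# numbers are hypothetical placeholders.

def map_to_pitches(values, low_note=48, high_note=84):
    """Linearly rescale a data series onto a MIDI note range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against a flat series
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

# A small series becomes a sequence of MIDI note numbers.
print(map_to_pitches([2.1, 2.4, 3.0, 2.8, 3.9, 4.2]))  # [48, 53, 63, 60, 79, 84]
```

A symbolic mapping, by contrast, might assign categories in the data to discrete musical events rather than rescaling magnitudes; the book treats both strategies in Section 2.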
This chapter establishes what it means to do discourse analysis. This is done by defining discourse analysis and providing examples of discourse. The chapter offers a practical overview of how the discourse in discourse analysis fits within the research process. The examples of discourse that are introduced in this chapter are grammar, actions and practices, identities, places and spaces, stories, ideologies, and social structures. After reading the chapter, readers will know what discourse analysis is; understand that there are many types of discourse; know that discourse is an object of study; and understand how an object of study fits within a research project.
Chapter 5 addresses a major demographic puzzle concerning thousands of New York slaves who seem to have gone missing in the transition from slavery to freedom, asking how, and whether, these slaves were sold South. The keys to solving this puzzle include estimates of common death rates, census undercounting, changing gender ratios in the New York black population, and, most importantly, a proper interpretation of the 1799 emancipation law and its effects on how the children of slaves were counted in the census. Based on an extensive analysis of census data, using various demographic techniques for understanding how populations change over time, I conclude that a large number of New York slaves (between 1,000 and 5,000) were sold South, but not likely as many as some previous historians have suggested. A disproportionate number of these sold slaves came from Long Island and Manhattan.
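Purely to illustrate this style of demographic accounting (the figures and rates below are hypothetical placeholders, not the chapter's estimates), the logic of inferring a shortfall between censuses can be sketched as:

```python
# Hypothetical sketch of population accounting between two censuses:
# project the enslaved population forward, adjust for deaths, manumissions
# and undercounting, and treat the gap to the recorded count as unexplained.

def expected_next_census(enslaved_now, annual_death_rate, years,
                         manumissions, undercount_rate):
    """Project an enslaved population forward and adjust for undercounting."""
    surviving = enslaved_now * (1 - annual_death_rate) ** years
    projected = surviving - manumissions
    return projected * (1 - undercount_rate)  # what the next census should record

projected = expected_next_census(20_000, 0.02, 10, 3_000, 0.05)
recorded = 12_000                              # placeholder census figure
print(round(projected - recorded))             # unexplained shortfall, e.g. 674
```

The chapter's actual estimates rest on the census data and the interpretation of the 1799 law described above.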
The chapter demonstrates that selecting an object of study is a consequential part of doing discourse analysis. Selecting an object of study requires considering many planning and analytic issues that are often neglected in introductory books on discourse analysis. This chapter reviews many of these planning and analytic issues, including how to organize and present data. After reading the chapter, readers will know how to structure an analysis; understand what data excerpts are and how to introduce them in an analysis; be able to create and present an object of study as smaller data excerpts; and know how to sequence an analysis.
Chapter 3 establishes that the Dutch had economic incentives to continue holding slaves. Slavery in Dutch New York was not just a cultural choice, but was reinforced by economic considerations. From archival sources and published secondary sources, I have compiled a unique dataset of prices for over 3,350 slaves bought, sold, assessed for value, or advertised for sale in New York and New Jersey. These data have been coded by sex, age, county, price, and type of record, among other categories. As far as I know, it is the only database of slave prices in the Northern states yet assembled. Regression analysis allows us to compute the average price of Northern slaves over time, the relative price difference between male and female slaves, the price trend relative to known prices in the American South, and other variables such as the price differential between New York City slaves and slaves in other counties in the state. Slave prices in New York and New Jersey remained relatively stable over time but declined in the nineteenth century. The analysis shows that slaveholders in Dutch New York were motivated by profit, and they sought strength and youth in purchasing slaves.
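As an illustration of the kind of hedonic specification such a regression analysis might use (a generic formulation, not necessarily the author's exact model), the log of each recorded price can be regressed on the characteristics coded in the dataset:
$$ \ln(\text{price}_i) = \beta_0 + \beta_1\,\text{male}_i + \beta_2\,\text{age}_i + \beta_3\,\text{age}_i^2 + \sum_t \gamma_t\,\text{year}_{t(i)} + \sum_c \delta_c\,\text{county}_{c(i)} + \varepsilon_i, $$
where the year and county dummies trace the price trend over time and the differential between New York City and other counties, and $\beta_1$ captures the relative price of male and female slaves.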
Mapping of human rights abuses and international crimes is an increasingly common tool to evidence, preserve and visualise information. This paper asks: what does rights-informed mapping look like in the context of mass graves? What are the rights concerned and the allied goals, and how might these practicably apply during a pilot study? The study offers an analysis of the goals and benefits claimed to accrue from mapping and documentation efforts, as well as an explication of the rights arising when engaging with mass graves. Our findings underscore the imperative of understanding the full ramifications of the applicable context, in our case the life-cycle of mass graves. This will bring to the fore the rights engaged with the subject, as well as the challenges with data points, collation and reporting as experienced in a pilot (Ukraine) where realities on the ground are not static but remain in flux.
This article interrogates three claims made about the use of data in relation to peace: that more data, faster data, and impartial data will lead to better policy and practice outcomes. Taken together, this data myth relies on a lack of curiosity about the provenance of data and the infrastructure that produces it and asserts its legitimacy. Our discussion is concerned with issues of power, inclusion, and exclusion, and particularly with how knowledge hierarchies attend to the collection and use of data in conflict-affected contexts. We therefore question the axiomatic nature of these data myth claims and argue that the structure and dynamics of peacebuilding actors perpetuate the myth. We advocate fuller reflection on the data wave that has overtaken us and echo calls for an ethics of numbers. In other words, this article is concerned with the evidence base for evidence-based peacebuilding. Mindful of the policy implications of our concerns, the article puts forward five tenets of good practice in relation to data and the peacebuilding sector. The concluding discussion further considers the policy implications of the data myth in relation to peace, and particularly the consequences of casting peace and conflict as technical issues that can be “solved” without recourse to human and political factors.
Focusing on methods for data that are ordered in time, this textbook provides a comprehensive guide to analyzing time series data using modern techniques from data science. It is specifically tailored to economics and finance applications, aiming to provide students with rigorous training. Chapters cover Bayesian approaches, nonparametric smoothing methods, machine learning, and continuous time econometrics. Theoretical and empirical exercises, concise summaries, bolded key terms, and illustrative examples are included throughout to reinforce key concepts and bolster understanding. Ancillary materials include an instructor's manual with solutions and additional exercises, PowerPoint lecture slides, and datasets. With its clear and accessible style, this textbook is an essential tool for advanced undergraduate and graduate students in economics, finance, and statistics.
This chapter introduces what a time series is and defines the important decomposition into trend, seasonal, and cyclical components that guides our thinking. We introduce a number of datasets used in the book and plot them to show their key features in terms of these components.
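In a common additive form (the book's own notation may differ), this decomposition writes an observed series $y_t$ as
$$ y_t = T_t + S_t + C_t + \varepsilon_t, $$
where $T_t$ is the trend, $S_t$ the seasonal component, $C_t$ the cycle, and $\varepsilon_t$ an irregular remainder; a multiplicative variant replaces the sums with products.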
This chapter explores the knowledge creation aspect of contemporary tax reforms in Nigeria. It offers a historical perspective on this process which lets us see today’s reforms not only as the re-creation of long-retreated systems of state taxation-led ordering, but also against the backdrop of what intervened in the meantime: a four-decade, late-twentieth-century interregnum in which revenue reliance on oil profits created a very different distributive system of government-as-knowledge. Today’s system of tax-and-knowledge is not just reform but an inversion of what came before.
This chapter introduces the main research themes of this book, which explores two current global developments. The first concerns the increased use of algorithmic systems by public authorities in ways that raise significant ethical and legal challenges. The second concerns the erosion of the rule of law and the rise of authoritarian and illiberal tendencies in liberal democracies, including in Europe. While each of these developments is worrying in its own right, I argue in this book that the combination of their harms is currently underexamined. By analysing how the former development might reinforce the latter, the book seeks to provide a better understanding of how algorithmic regulation can erode the rule of law and lead to algorithmic rule by law instead. It also evaluates the current EU legal framework, argues that it is inadequate to counter this threat, and identifies new pathways forward.
The risks emanating from algorithmic rule by law lie at the intersection of two regulatory domains: regulation pertaining to the protection of the rule of law (the EU’s rule of law agenda), and regulation pertaining to the protection of individuals against the risks of algorithmic systems (the EU’s digital agenda). Each of these domains comprises a broad range of legislation, including not only primary and secondary EU law but also soft law. In what follows, I confine my investigation to the areas of legislation that are most relevant to the identified concerns. After addressing the EU’s competences to take legal action in this field (Section 5.1), I examine in turn the safeguards provided by regulation pertaining to the rule of law (Section 5.2), to personal data (Section 5.3) and to algorithmic systems (Section 5.4), before concluding (Section 5.5).
The WHO describes micronutrient deficiencies, or hidden hunger, as a form of malnutrition that occurs due to low intake and/or absorption of minerals and vitamins, putting human development and health at risk. In many cases, emphasis, effort and even policy revolve around the prevention of deficiency of one particular micronutrient in isolation. This is understandable, as that micronutrient may be among a group of nutrients of public health concern. Vitamin D is a good exemplar. This review will highlight how the actions taken to tackle low vitamin D status have been highly dependent on the generation of new data and/or new approaches to the analysis of existing data, to help develop the evidence base, inform advice and guidelines, and, in some cases, translate into policy. Beyond the focus on individual micronutrients, there has also been increasing international attention to hidden hunger, or deficiencies of a range of micronutrients, which can exist unaccompanied by obvious clinical signs but can adversely affect human development and health. A widely quoted estimate of the global prevalence of hidden hunger is a staggering two billion people, but this figure is now over 30 years old. This review will outline how strategic data sharing and generation are seeking to address this key knowledge gap regarding the true prevalence of hidden hunger in Europe, a key starting point towards defining sustainable, cost-effective, food-based strategies for its prevention. The availability of data on prevalence and food-based strategies can help inform public policy to eradicate micronutrient deficiency in Europe.
The Introduction sets out the central puzzle that the book seeks to solve. Descriptively, it asks whether primaries have transformed in the twenty-first century, using a series of case studies to illustrate the central descriptive argument of change. It then frames the importance of the second half of the book, justifying the focus on elite partisan positioning and ideological change in relation to recent primary elections as a (potential) mechanism. It then clarifies the data collection process and the sources used. Finally, it focuses on partisan differences between the Republican and Democratic parties before providing an outline of the book’s structure.
What is literary data? This chapter addresses this question by examining how the concept of data functioned during a formative moment in academic literary study around the turn of the twentieth century and again at the beginning of electronic literary computing. The chapter considers the following cases: Lucius Adelno Sherman’s Analytics of Literature (1893), the activities of the Concordance Society (c.1906–28), Lane Cooper’s A Concordance to the Poems of William Wordsworth (1911), and the work of Stephen M. Parrish c.1960. The chapter explains how the concept of literary data was used by literature scholars to signal a commitment to a certain epistemological framework that was opposed to other ways of knowing and reading in the disciplinary field.
While it is important to be able to read and interpret individual papers, the results of a single study are never going to provide the complete answer to a question. To move towards this, we need to review the literature more widely. There can be a number of reasons for doing this, some of which require a more comprehensive approach than others. If the aim is simply to increase our personal understanding of a new area, then a few papers might provide adequate background material. Traditional narrative reviews have value for exploring areas of uncertainty or novelty but give less emphasis to complete coverage of the literature and tend to be more qualitative, so it is harder to scrutinise them for flaws. Scoping reviews are more systematic but still exploratory. They are conducted to identify the breadth of evidence available on a particular topic, clarify key concepts and identify the knowledge gaps. In contrast, a major decision regarding policy or practice should be based on a systematic review and perhaps a meta-analysis of all the relevant literature, and it is this approach that we focus on here.
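As a standard illustration of what a meta-analysis adds (a generic formula, not one taken from this chapter), a fixed-effect analysis pools the estimates $\hat{\theta}_i$ from $k$ studies by inverse-variance weighting,
$$ \hat{\theta} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\widehat{\operatorname{Var}}(\hat{\theta}_i)}, $$
so that larger, more precise studies carry proportionately more weight in the pooled result.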