Measurable residual disease (MRD) is an established prognostic factor after induction chemotherapy in acute myeloid leukaemia patients. Over the past decades, molecular and flow cytometry-based assays have been optimized to provide highly specific and sensitive MRD assessment that is clinically validated. Flow cytometry is an accessible technique available in most clinical diagnostic laboratories worldwide and has the advantage of being applicable in approximately 90% of patients. Here, the essential aspects of flow cytometry-based MRD assessment are discussed, focusing on the identification of leukaemic cells using leukaemia-associated immunophenotypes. Analysis, detection limits of the assay, reporting of results and current clinical applications are also reviewed. Additionally, limitations of the assay are discussed, together with future perspectives for flow cytometry-based MRD assessment.
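To make the reporting arithmetic concrete, below is a minimal sketch of how an MRD fraction and a lower limit of detection can be derived from event counts. The 20-event cluster rule and all numbers are illustrative assumptions, not values taken from the chapter.

```python
# Minimal sketch: deriving a flow-MRD result from event counts.
# The 20-event minimum cluster and all figures below are illustrative
# assumptions, not parameters from the chapter.

def mrd_result(laip_events: int, total_leukocytes: int,
               min_cluster: int = 20) -> dict:
    """Return the MRD fraction and whether it clears the acquisition's
    lower limit of detection (LOD)."""
    fraction = laip_events / total_leukocytes
    # The LOD is taken here as the smallest fraction that still yields
    # a cluster of `min_cluster` LAIP-positive events.
    lod = min_cluster / total_leukocytes
    return {
        "mrd_percent": 100 * fraction,
        "lod_percent": 100 * lod,
        "detectable": laip_events >= min_cluster,
    }

# Example: 45 LAIP-positive events among 1,000,000 acquired leukocytes.
print(mrd_result(45, 1_000_000))
# {'mrd_percent': 0.0045, 'lod_percent': 0.002, 'detectable': True}
```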
Although the fundamental idea of focusing cells so that they are ‘seen’ one by one by a detection system remains unchanged, flow cytometry technologies continue to evolve. This chapter provides an overview of recent progress in this evolution. From a technical point of view, cameras can capture an image of each cell together with its fluorescent properties, or the whole spectrum of emitted light can be collected. Markers coupled to heavy metals allow each cell’s immunophenotype to be detected by mass spectrometry. On the analysis side, artificial intelligence and machine learning are being developed for unsupervised analysis, saving time while allowing much closer scrutiny of small populations.
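As a toy illustration of the unsupervised-analysis direction, the sketch below clusters synthetic marker-intensity data without predefined gates. The panel size, the two populations, and the use of plain k-means are assumptions made for illustration; dedicated cytometry tools (for example, self-organising-map approaches) would be used in practice.

```python
# Minimal sketch of unsupervised analysis: clustering cells by marker
# intensity with no predefined gates. All data here are synthetic.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 5,000 "cells" x 8 "markers": one major and one minor synthetic population.
cells = np.vstack([
    rng.normal(loc=1.0, scale=0.3, size=(4500, 8)),  # major population
    rng.normal(loc=3.0, scale=0.3, size=(500, 8)),   # minor population
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cells)
for k in np.unique(labels):
    print(f"cluster {k}: {np.sum(labels == k)} cells")
```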
This chapter investigates the impact of artificial intelligence on legal services. The questions addressed include: How will artificial intelligence change and improve the legal services offered by lawyers? How will the legal profession change as a result of the increased use of artificial intelligence? How will artificial intelligence change the way lawyers work and the way they organise, charge for and finance their work? A key insight discussed concerns the focus when thinking about the impact of artificial intelligence on the work of lawyers: concentrating on the ‘tasks’ that lawyers perform reveals more insights than asking whether artificial intelligence will destroy ‘jobs’. Exploring the impact on ‘tasks’ of lawyers shows that they are both consumers and producers of services augmented by artificial intelligence. Focusing on ‘tasks’ also helps in understanding what kinds of activities are affected by artificial intelligence and which activities will be performed, at least for the foreseeable future, by human lawyers. The discussion also deals with the emergence of multidisciplinary teams and the success indicators for LawTech start-ups.
This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so we suggest that “AI philosophy” provides a new method for philosophy.
This conversation addresses the impact of artificial intelligence and sustainability aspects on corporate governance. The speakers explore how technological innovation and sustainability concerns will change the way companies and financial institutions are managed, controlled and regulated. By way of background, the discussion considers the past and recent history of crises, including financial crises and the more recent COVID-19 pandemic. Particular attention is given to the field of auditing, investigating the changing role of internal and external audits. This includes a discussion of the role of regulatory authorities and how their practices will be affected by technological change. Further attention is given to artificial intelligence in the context of businesses and company law. As regards digital transformation, five issues are reflected upon, namely data, decentralisation, diversification, democratisation and disruption.
Artificial intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility, and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but often fail to suffice due to the context-sensitivity of ethical challenges. Second, this chapter discusses methods to tackle these challenges. Main ethical theories (such as virtue ethics, consequentialism, and deontology) are shown to provide a starting point, but often lack the details needed for actionable AI ethics. Instead, we argue that mid-level philosophical theories coupled to design-approaches such as “design for values”, together with interdisciplinary working methods, offer the best way forward. The chapter aims to show how these approaches can lead to an ethics of AI that is actionable and that can be proactively integrated in the design of AI systems.
Can we develop machines that exhibit intelligent behavior? And how can we build machines that perform a task without being explicitly programmed but by learning from examples or experience? Those are central questions for the domain of artificial intelligence. In this chapter, we introduce this domain from a technical perspective and dive deeper into machine learning and reasoning, which are essential for the development of AI.
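As a minimal illustration of learning from examples rather than explicit programming, the sketch below fits a classifier to toy labelled data; the data and the model choice are assumptions made for illustration, not material from the chapter.

```python
# Illustrative only: "learning from examples" in its simplest form.
# The classifier is never told the rule; it infers one from labelled data.

from sklearn.linear_model import LogisticRegression

# Toy examples: hours of study -> pass (1) or fail (0).
X = [[1], [2], [3], [4], [5], [6], [7], [8]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

model = LogisticRegression().fit(X, y)
print(model.predict([[2], [7]]))  # -> [0 1], learned from the examples alone
```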
This conversation explores how technology changes the way disputes are solved. The focus is on the impact of artificial intelligence. After reporting on a competition, in which lawyers and an artificial intelligence competed to accurately predict the outcome of disputes before the UK Financial Ombudsman, the speaker explains how artificial intelligence is practically used in dispute resolution. Such use cases include the production of information, the creation of focused analyses, the finding of decisions and the generation of communication. The speaker then presents research projects using artificial intelligence to predict dispute outcomes in the courts of different countries. The conversation also addresses the ethical questions arising from different use cases of artificial intelligence in conflict resolution. In conclusion, the potential of artificial intelligence to improve access to justice is identified together with the ethical challenges that need to be addressed.
There are several reasons to be ethically concerned about the development and use of AI. In this contribution, we focus on one specific theme of concern: moral responsibility. In particular, we consider whether the use of autonomous AI causes a responsibility gap and put forward the thesis that this is not the case. Our argument proceeds as follows. First, we provide some conceptual background by discussing respectively what autonomous systems are, how the notion of responsibility can be understood, and what the responsibility gap is about. Second, we explore to what extent it could make sense to assign responsibility to artificial systems. Third, we argue that the use of autonomous systems does not necessarily lead to a responsibility gap. In the fourth and last section of this chapter, we set out why the responsibility gap – even if it were to exist – is not necessarily problematic.
There is no doubt that AI systems, and the large-scale processing of personal data that often accompanies their development and use, have put a strain on individuals’ fundamental rights and freedoms. Against that background, this chapter aims to walk the reader through a selection of key concerns arising from the application of the GDPR to the training and use of such systems. First, it clarifies the position and role of the GDPR within the broader European data protection regulatory framework. Next, it delineates its scope of application by delving into the pivotal notions of “personal data,” “controller,” and “processor.” Lastly, it highlights some friction points between the characteristics inherent to most AI systems and the general principles outlined in Article 5 GDPR, including lawfulness, transparency, purpose limitation, data minimization, and accountability.
AI has the potential to support many of the proposed solutions to sustainability concerns. However, AI itself is also unsustainable in many ways, as its development and use are linked, for example, with high carbon emissions, discrimination based on biased training data, surveillance practices, and influence on elections through microtargeting. Addressing the long-term sustainability of AI is crucial, as it impacts social, personal, and natural environments for future generations. The “sustainable” approach is one that is inclusive in both time and space, where the past, present, and future of human societies, the planet, and environment are considered equally important to protect and secure, including the integration of all countries in economic and social change. Furthermore, our use of the concept “sustainable” demands that we ask which practices in the current development and use of AI we want to maintain, and which we want to repair and/or change. This chapter explores the ethical dilemma of AI for sustainability, balancing its potential to address many sustainable development challenges against the harm it may cause to the environment and society.
This chapter discusses the interface of artificial intelligence (AI) and intellectual property (IP) law. It focuses on the protection of AI technology, the contentious qualification of AI systems as authors and/or inventors, and the question of ownership of AI-assisted and AI-generated output. The chapter also treats a number of miscellaneous topics, including the question of liability for IP infringement that takes place by or through the intervention of an AI system. More generally, it notes the ambivalent relationship between AI and the IP community, which appears to drift between apparent enthusiasm for the use of AI in IP practice and a clear hesitancy toward catering for additional incentive creation in the AI sphere by amending existing IP laws.
The integration of AI into business models and workplaces has a profound impact on society, legal systems, and organizational structures. It has also become intrinsically intertwined with the concepts of work and worker, with the assignment of jobs, the measurement of performance and the evaluation of tasks, and with decisions related to disciplinary measures or dismissals. The objective of this chapter is to provide an overview of the multifaceted aspects of AI and labor law, focusing on the profound legal questions arising from this intersection, including its implications for employment relationships, the exercise of labor rights, and social dialogue.
In virtually all societal domains, algorithmic systems, and AI more particularly, have made a grand entrance. Their growing impact renders it increasingly important to understand and assess the challenges and opportunities they raise – an endeavor to which this book aims to contribute. In this chapter, I start by putting the current “AI hype” into context. I emphasize the long history of human fascination with artificial beings; the fact that AI is but one of many powerful technologies that humanity has grappled with over time; and the fact that its uptake is inherently enabled by our societal condition. Subsequently, I introduce the chapters of this book, dealing with AI, ethics and philosophy (Part I); AI, law and policy (Part II); and AI across sectors (Part III). Finally, I discuss some conundrums faced by all scholars in this field, concerning the relationship between law, ethics and policy and their roles in AI governance; the juxtaposition between protection and innovation; and law’s (in)ability to regulate a continuously evolving technology. While their solutions are far from simple, I conclude there is great value in acknowledging the complexity of what is at stake and the need for more nuance in the AI governance debate.
Public administrations are increasingly deploying algorithmic systems to facilitate the application, execution, and enforcement of regulation, a practice that can be denoted as algorithmic regulation. While their reliance on digital technology is not new, both the scale at which they automate administrative acts and the importance of the decisions they delegate to algorithmic tools is on the rise. In this chapter, I contextualize this phenomenon and discuss the implementation of algorithmic regulation across several public sector domains. I then assess some of the ethical and legal conundrums that public administrations face when outsourcing their tasks to such systems and provide an overview of the legal framework that governs this practice, with a particular focus on the European Union. This framework encompasses not only constitutional and administrative law but also data protection law and AI-specific law. Finally, I offer some take-aways for public administrations to consider when seeking to deploy algorithmic regulation.
Firms use algorithms for important decisions in areas from pricing strategy to product design. Increased price transparency and availability of personal data, combined with ever more sophisticated machine learning algorithms, have turbocharged their use. Algorithms can be a procompetitive force, such as when used to undercut competitors or to improve recommendations. But algorithms can also distort competition, as when firms use them to collude or to exclude competitors. EU competition law, in particular its provisions on restrictive agreements and abuse of dominance (Articles 101–102 TFEU), prohibits such practices, but novel anticompetitive practices – when algorithms collude autonomously for example – may escape its grasp. This chapter assesses to what extent anticompetitive algorithmic practices are covered by EU competition law, examining horizontal agreements (collusion), vertical agreements (resale price maintenance), exclusionary conduct (ranking), and exploitative conduct (personalized pricing).
Digital twins are a new paradigm for our time, offering the possibility of interconnected virtual representations of the real world. The concept is very versatile and has been adopted by multiple communities of practice, policymakers, researchers, and innovators. A significant part of the digital twin paradigm is about interconnecting digital objects, many of which have previously not been combined. As a result, members of the newly forming digital twin community are often talking at cross-purposes, based on different starting points, assumptions, and cultural practices. These differences are due to the philosophical world-view adopted within specific communities. In this paper, we explore the philosophical context which underpins the digital twin concept. We offer the building blocks for a philosophical framework for digital twins, consisting of 21 principles that are intended to help facilitate their further development. Specifically, we argue that the philosophy of digital twins is fundamentally holistic and emergentist. We further argue that in order to enable emergent behaviors, digital twins should be designed to reconstruct the behavior of a physical twin by “dynamically assembling” multiple digital “components”. We also argue that digital twins naturally include aspects relating to the philosophy of artificial intelligence, including learning and exploitation of knowledge. We discuss the following four questions: (i) What is the distinction between a model and a digital twin? (ii) What previously unseen results can we expect from a digital twin? (iii) How can emergent behaviors be predicted? (iv) How can we assess the existence and uniqueness of digital twin outputs?
This article establishes a data-driven modeling framework for lean hydrogen ($ {\mathrm{H}}_2 $)-air reaction rates for the Large Eddy Simulation (LES) of turbulent reactive flows. This is particularly challenging since $ {\mathrm{H}}_2 $ molecules diffuse much faster than heat, leading to large variations in burning rates, thermodiffusive instabilities at the subfilter scale, and complex turbulence-chemistry interactions. Our data-driven approach leverages a Convolutional Neural Network (CNN), trained to approximate filtered burning rates from emulated LES data. First, five different lean premixed turbulent $ {\mathrm{H}}_2 $-air flame Direct Numerical Simulations (DNSs) are computed, each with a unique global equivalence ratio. Second, DNS snapshots are filtered and downsampled to emulate LES data. Third, a CNN is trained to approximate the filtered burning rates as a function of LES scalar quantities: progress variable, local equivalence ratio, and flame thickening due to filtering. Finally, the performance of the CNN model is assessed on test solutions never seen during training. The model retrieves burning rates with very high accuracy. It is also tested on two filter-and-downsampling parameter sets and on two global equivalence ratios lying between those used during training. For these interpolation cases, the model approximates burning rates with low error even though the cases were not included in the training dataset. This a priori study shows that the proposed data-driven machine learning framework is able to address the challenge of modeling lean premixed $ {\mathrm{H}}_2 $-air burning rates. It paves the way for a new modeling paradigm for the simulation of carbon-free hydrogen combustion systems.
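To suggest the flavour of such a model, the sketch below wires up a small 3D convolutional network mapping the three filtered LES scalars named above to a filtered burning-rate field. The architecture, layer sizes, grid resolution, and random inputs are illustrative assumptions, not the authors' network.

```python
# Sketch of the kind of CNN described: three filtered LES scalar fields
# (progress variable, local equivalence ratio, flame thickening) in, one
# filtered burning-rate field out. Everything here is illustrative.

import torch
import torch.nn as nn

class BurningRateCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=3, padding=1),  # burning rate
        )

    def forward(self, x):  # x: (batch, 3, nx, ny, nz) filtered fields
        return self.net(x)

# One emulated-LES sample: 3 input scalars on a 16^3 grid.
fields = torch.randn(1, 3, 16, 16, 16)
print(BurningRateCNN()(fields).shape)  # -> torch.Size([1, 1, 16, 16, 16])
```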
Since early 2021, food prices in Britain have increased by 30%. Using monthly microdata, researchers have found that frictions in the UK’s new trade relationship with the European Union (EU) play an important part in this inflation. The trade relationship is evolving, with further changes expected in 2024. This article establishes a framework for identifying trade-related inflation in close to real time. Using web-scraping techniques, we collect daily prices of over 100,000 supermarket items, covering 80% of the UK grocery market. We identify 1,200 products from 12 countries with a protected designation of origin (PDO). This allows us to link price changes to individual EU economies. Because PDOs predominantly cover EU products, we employ a large language model to discern product origins from additional web-scraped data, thus broadening our analysis to over 67,000 products. Since August 2023, we find that prices for EU-originating food products have increased at a rate 50% higher than that of domestically sourced products. This study presents a unique methodological approach to dissecting food sector inflation, one that is well positioned for use in a policy setting, allowing assessment of the possible impact of impending nontariff barriers at the GB-EU border in 2024.
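As an illustration of the origin-classification step, the sketch below builds a prompt from scraped listing text and delegates to a placeholder LLM client. The prompt wording and the `call_llm` hook are hypothetical stand-ins for whichever model a real pipeline would use.

```python
# Sketch of the origin-classification step: asking an LLM to infer a
# product's country of origin from its web-scraped listing text.
# `call_llm` is a placeholder, not a real client library.

def build_prompt(product_name: str, description: str) -> str:
    return (
        "From the product listing below, answer with the single most "
        "likely country of origin, or 'unknown'.\n"
        f"Name: {product_name}\nDescription: {description}"
    )

def classify_origin(product_name: str, description: str,
                    call_llm=lambda p: "unknown") -> str:
    """`call_llm` stands in for a real LLM client; the default stub
    returns 'unknown' so the sketch runs offline."""
    return call_llm(build_prompt(product_name, description)).strip().lower()

print(classify_origin("Brie de Meaux 200g",
                      "Soft cheese with PDO status, made in France"))
```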
Imagine that you are given access to an AI chatbot that compellingly mimics the personality and speech of a deceased loved one. If you start having regular interactions with this “thanabot,” could this new relationship be a continuation of the relationship you had with your loved one? And could a relationship with a thanabot preserve or replicate the value of a close human relationship? To the first question, we argue that a relationship with a thanabot cannot be a true continuation of your relationship with a deceased loved one, though it might support one’s continuing bonds with the dead. To the second question, we argue that, in and of themselves, relationships with thanabots cannot benefit us as much as rewarding and healthy intimate relationships with other humans, though we explain why it is difficult to make reliable comparative generalizations about the instrumental value of these relationships.