Measurable residual disease (MRD) is an established prognostic factor after induction chemotherapy in acute myeloid leukaemia patients. Over the past decades, molecular and flow cytometry-based assays have been optimized to provide highly specific and sensitive MRD assessment that is clinically validated. Flow cytometry is an accessible technique available in most clinical diagnostic laboratories worldwide and has the advantage of being applicable in approximately 90% of patients. Here, the essential aspects of flow cytometry-based MRD assessment are discussed, focusing on the identification of leukaemic cells using leukaemia-associated immunophenotypes. Analysis, detection limits of the assay, reporting of results and current clinical applications are also reviewed. Additionally, limitations of the assay are discussed, along with future perspectives for flow cytometry-based MRD assessment.
Although the fundamental idea of focusing cells so that they are ‘seen’ one by one by a detection system remains unchanged, flow cytometry technologies continue to evolve. This chapter provides an overview of recent progress in this evolution. From a technical point of view, cameras can provide an image of each cell together with its fluorescent properties, or the whole spectrum of emitted light can be collected. Markers coupled to heavy metals allow the immunophenotype of each cell to be detected by mass spectrometry. On the analysis side, artificial intelligence and machine learning are being developed for unsupervised analysis, saving time ahead of closer, supervised scrutiny of small populations.
Global platforms present novel challenges. They serve as powerful conduits of commerce and global community. Yet their power to influence political and consumer behavior is enormous. Their responsibility for the use of this power – for their content – is statutorily limited by national laws such as Section 230 of the Communications Decency Act in the US. National efforts to demand and guide appropriate content moderation, and to avoid private abuse of this power, are in tension with the concern in liberal states to avoid excessive government regulation, especially of speech. Diverse and sometimes contradictory national rules responding to these tensions threaten to splinter platforms and reduce their utility to both wealthy and poor countries. This edited volume sets out to respond to the question of whether a global approach can be developed to address these tensions while maintaining or even enhancing the social contribution of platforms.
In this chapter, we argue that it is highly beneficial for the contemporary construction grammarian to have a thorough understanding of the strong relationship between the research fields of Construction Grammar and artificial intelligence. We start by unraveling the historical links between the two fields, showing that their relationship is rooted in a common attitude towards human communication and language. We then discuss the first direction of influence, focusing on how insights and techniques from the field of artificial intelligence play an important role in operationalizing, validating, and scaling constructionist approaches to language. We then proceed to the second direction of influence, highlighting the relevance of Construction Grammar insights and analyses to the artificial intelligence endeavor of building truly intelligent agents. We support our case with a variety of illustrative examples and conclude that further elaboration of this relationship will play a key role in shaping the future of the field of Construction Grammar.
Artificial intelligence (AI) is increasingly being integrated into sentencing within the criminal justice system. This research examines the impact of AI on sentencing, addressing the challenges and opportunities for fairness and justice. The main problem explored is AI’s potential to perpetuate biases, undermining fair-trial principles. This study intends to assess AI’s influence on sentencing, identify legal and ethical challenges, and propose a framework for equitable AI use in judicial decisions. Key research questions include: (1) How does AI influence sentencing decisions? (2) What concerns arise from AI in sentencing? (3) What safeguards can mitigate those concerns and prejudices? Utilizing qualitative methodology, including doctrinal analysis and comparative studies, the research reveals AI’s potential to enhance sentencing efficiency but also to risk reinforcing biases. The study recommends robust regulatory frameworks, transparency in AI algorithms, and judicial oversight to ensure AI supports justice rather than impedes it, advocating for a balanced integration that prioritizes human rights and fairness.
The structure of society is heavily dependent upon its means of producing and distributing information. As its methods of communication change, so does a society. In Europe, for example, the invention of the printing press created what we now call the public sphere. The public sphere, in turn, facilitated the appearance of ‘public opinion’, which made possible wholly new forms of politics and governance, including the democracies we treasure today. Society is presently in the midst of an information revolution. It is shifting from analogue to digital information, and it has invented the Internet as a nearly universal means for distributing digital information. Taken together, these two changes are profoundly affecting the organization of our society. With frightening rapidity, these innovations have created a wholly new digital public sphere that is both virtual and pervasive.
Narrative creativity is a new, neuroscience-based approach to innovation, problem solving, and resilience that has proved effective for business executives, scientists, engineers, doctors, and students as young as eight. This Element offers a concise introduction to narrative creativity's theory and practice. It distinguishes narrative creativity from ideation, divergent thinking, design thinking, brainstorming, and other current approaches to cultivating creativity. It traces the biological origins of narrative creativity and explains why narrative creativity will always be mechanically impossible for computer artificial intelligences. It provides practical exercises, developed and tested in hundreds of classrooms and businesses, and validated independently by the US Army, for improving narrative creativity. It explains how narrative creativity contributes to technological innovation, scientific progress, cultural growth, and psychological well-being, and it describes how narrative creativity can be assessed. This title is also available as Open Access on Cambridge Core.
This paper focuses on the epistemic situation one faces when using a Large Language Model-based chatbot like ChatGPT: when reading the output of the chatbot, how should one decide whether or not to believe it? By surveying strategies we use with other, more familiar sources of information, I argue that chatbots present a novel challenge. This makes the question of how one could trust a chatbot especially vexing.
This paper traces the legislative process of the EU Artificial Intelligence Act (AI Act) to provide an empirical and critical account of the choices made in its formation. It specifically focuses on the dynamics that led to increasing or lowering fundamental rights protection in the final text and their implications for fundamental rights. Adopting process-tracing methods, the paper sheds light on the institutional differences and agreements behind this landmark legislation. It then analyses the implications of political compromise for fundamental rights protection. The core message it aims to convey is to read the AI Act with its institutional setting and political context in mind. As this paper shows, the different policy aims and mandates of the three EU institutions, compounded by the unprecedented level of redrafting and the short time needed to reach a political agreement, influenced the formulation of the AI Act. Looking forward, the paper points to the role of implementation, enforcement and judicial interpretation in enhancing the protection of fundamental rights in the age of AI.
Environmental data science for spatial extremes has traditionally relied heavily on max-stable processes. Even though the popularity of these models has perhaps peaked with statisticians, they are still perceived and considered as the “state of the art” in many applied fields. However, while the asymptotic theory supporting the use of max-stable processes is mathematically rigorous and comprehensive, we think that it has also been overused, if not misused, in environmental applications, to the detriment of more purposeful and meticulously validated models. In this article, we review the main limitations of max-stable process models, and strongly argue against their systematic use in environmental studies. Alternative solutions based on more flexible frameworks using the exceedances of variables above appropriately chosen high thresholds are discussed, and an outlook on future research is given. We consider the opportunities offered by hybridizing machine learning with extreme-value statistics, highlighting seven key recommendations moving forward.
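To make the threshold-exceedance alternative concrete, here is a minimal peaks-over-threshold sketch in Python. It assumes SciPy and synthetic Gumbel data; the threshold choice, parameters, and return-level arithmetic are illustrative assumptions, not the article's analysis.

```python
# A minimal peaks-over-threshold sketch with synthetic data; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.gumbel(loc=20.0, scale=5.0, size=10_000)  # stand-in for an environmental series

threshold = np.quantile(data, 0.95)        # an "appropriately chosen high threshold"
exceedances = data[data > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (floc=0 pins the lower bound).
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

# Estimate a high return level, e.g. the 99.9th percentile of the original variable:
# P(X > x) = p_exceed * P(Y > x - u), with Y ~ GPD above threshold u.
p_exceed = exceedances.size / data.size
return_level = threshold + stats.genpareto.ppf(
    1 - (1 - 0.999) / p_exceed, shape, loc=0, scale=scale
)
print(f"xi={shape:.3f}, sigma={scale:.3f}, 99.9% return level={return_level:.2f}")
```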
This study aimed to create a risk prediction model with artificial intelligence (AI) to identify patients at higher risk of postpartum hemorrhage (PPH), using perinatal characteristics that may be associated with later PPH in twin pregnancies delivered by cesarean section. The study was planned as a retrospective cohort study at University Hospital. All twin cesarean deliveries were categorized into two groups: those with and without PPH. Using the perinatal characteristics of the cases, four different machine learning classifiers were created: logistic regression (LR), support vector machine (SVM), random forest (RF), and multilayer perceptron (MLP). The LR, RF, and SVM models were created a second time with class weights included to manage the underlying imbalance in the data. A total of 615 twin pregnancies were included in the study. There were 150 twin pregnancies with PPH and 465 without PPH. Dichorionicity, PAS, and placenta previa were significantly higher in the PPH-positive group (p = .045, p = .004, and p = .001, respectively). Among our models, LR with class weights was the best, with the highest negative predictive value. The AUC of our class-weighted LR model was 75.12%, with an accuracy of 70.73%, a PPV of 47.92%, and an NPV of 85.33% on our data. Although the application of machine learning to create predictive models using clinical risk factors and our model's 70% accuracy rate are encouraging, they are not yet sufficient. Machine learning modeling needs further study and validation before being incorporated into clinical use.
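As an illustration of the class-weighting idea behind the study's best model, here is a minimal scikit-learn sketch; the features, outcome, and split are synthetic stand-ins, not the study's perinatal data, so the metrics it prints are meaningless beyond demonstrating the mechanics.

```python
# A minimal class-weighted logistic regression sketch; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(615, 10))                  # stand-in for perinatal characteristics
y = (rng.random(615) < 0.24).astype(int)        # ~24% positive rate, as in 150/615

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights samples inversely to class frequency,
# the same idea as the study's "LR with class weight" model.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)

# With pure-noise features the AUC will hover near 0.5; real predictors are
# needed to approach the study's reported performance.
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```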
Information is provided to navigators through advanced onboard navigation equipment such as the electronic chart display and information system (ECDIS), radar, and the automatic identification system (AIS). However, maritime accidents still occur, especially in coastal and inland waters where many navigational dangers exist. Recent artificial intelligence (AI) technology is being actively applied in navigation fields such as collision avoidance and ship detection. However, utilising the aids to navigation (AtoN) system requires more engagement and further exploration. The AtoN system provides critical navigation information by marking navigational hazards, such as shallow-water areas and wrecks, and by visually marking narrow passageways. The prime function of AtoN can be enhanced by applying AI technology, particularly deep learning. With the help of this technology, an algorithm could be constructed to detect AtoN in coastal and inland waters and to use the detected AtoN in a safety function that supplements watchkeepers alongside recent navigation equipment.
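As a rough sketch of what such a detection pipeline could look like, the snippet below runs a COCO-pretrained Faster R-CNN from torchvision on a single frame. The image path is hypothetical, and a real AtoN detector would be fine-tuned on labelled buoy and beacon imagery, which is not shown here.

```python
# A minimal object-detection sketch; the image path and AtoN classes are
# hypothetical, and the pretrained model is a generic stand-in.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

img = read_image("coastal_scene.jpg")           # hypothetical camera frame
batch = [weights.transforms()(img)]             # preprocessing preset for these weights

with torch.no_grad():
    detections = model(batch)[0]

# Keep confident detections; a fine-tuned model would emit AtoN classes here.
keep = detections["scores"] > 0.7
print(detections["boxes"][keep], detections["labels"][keep])
```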
This interview with Peter Singer AI serves a dual purpose. It is an exploration of certain—utilitarian and related—views on sentience and its ethical implications. It is also an exercise in the emerging interaction between natural and artificial intelligence, presented not just as ethics of AI but, perhaps more importantly, as ethics with AI. The one asking the questions—Matti Häyry—is a person, in the contemporary sense of the word, sentient and self-aware, whereas Peter Singer AI is an artificial intelligence persona, created by Sankalpa Ghose, a person, through dialogue with Peter Singer, a person, to programmatically model and incorporate the latter’s writings, presentations, recipes, and character qualities as a renowned philosopher. The interview indicates some subtle differences between natural perspectives and artificial representation, suggesting directions for further development. PSai, as the project is also known, is available to anyone to chat with, anywhere in the world, on almost any topic, in almost any language, at www.petersinger.ai
Cardiovascular disease (CVD) is twice as prevalent among individuals with mental illness as in the general population. Prevention strategies exist but require accurate risk prediction. This study aimed to develop and validate a machine learning model for predicting incident CVD among patients with mental illness using routine clinical data from electronic health records.
Methods
A cohort study was conducted using data from 74,880 patients with 1.6 million psychiatric service contacts in the Central Denmark Region from 2013 to 2021. Two machine learning models (XGBoost and regularised logistic regression) were trained on 85% of the data from six hospitals using 234 potential predictors. The best-performing model was externally validated on the remaining 15% of patients from another three hospitals. CVD was defined as myocardial infarction, stroke, or peripheral arterial disease.
Results
The best-performing model (hyperparameter-tuned XGBoost) demonstrated acceptable discrimination, with an area under the receiver operating characteristic curve of 0.84 on the training set and 0.74 on the validation set. It identified high-risk individuals 2.5 years before CVD events. For the psychiatric service contacts in the top 5% of predicted risk, the positive predictive value was 5%, and the negative predictive value was 99%. The model issued at least one positive prediction for 39% of patients who developed CVD.
Conclusions
A machine learning model can accurately predict CVD risk among patients with mental illness using routinely collected electronic health record data. A decision support system building on this approach may aid primary CVD prevention in this high-risk population.
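A minimal sketch of this kind of setup appears below, assuming the xgboost and scikit-learn packages; the synthetic data, split, and hyperparameters are illustrative stand-ins, not the study's actual cohort, external validation by hospital, or tuned model.

```python
# A minimal gradient-boosting risk-model sketch; data are synthetic noise,
# so the printed metrics will not match the study's results.
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, d = 20_000, 234                               # 234 mirrors the predictor count
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.03).astype(int)           # rare CVD-like outcome

split = int(n * 0.85)                            # 85% train / 15% holdout, by analogy
model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="auc",
).fit(X[:split], y[:split])

scores = model.predict_proba(X[split:])[:, 1]
print("holdout AUROC:", roc_auc_score(y[split:], scores))

# Flag the top 5% of predicted risk, as in the reported PPV/NPV analysis.
threshold = np.quantile(scores, 0.95)
flagged = scores >= threshold
ppv = y[split:][flagged].mean()
npv = 1 - y[split:][~flagged].mean()
print(f"PPV={ppv:.3f}, NPV={npv:.3f}")
```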
A reflective analysis is presented on the potential added value that actuarial science can contribute to the field of health technology assessment. This topic is discussed based on the experience of several experts in health actuarial science and health economics. Different points are addressed, such as the role of actuarial science in health, actuarial judgment, data inputs and their quality, modeling methodologies and the use of decision-analytic models in the age of artificial intelligence, and the development of innovative pricing and payment models.
Large Language Models (LLMs) raise challenges that can be examined according to a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. Regarding LLMs, the main pros concern technological innovation, economic development and the achievement of social goals and values. The disadvantages mainly concern cases of risks and harms generated by means of LLMs. The epistemological approach examines how LLMs produce outputs, information, knowledge, and a representation of reality in ways that differ from those followed by human beings. To face the impact of LLMs, our paper contends that the epistemological approach should be examined as a priority: identifying the risks and opportunities of LLMs also depends on considering how this form of artificial intelligence works from an epistemological point of view. To this end, our analysis compares the epistemology of LLMs with that of law, in order to highlight at least five issues in terms of: (i) qualification; (ii) reliability; (iii) pluralism and novelty; (iv) technological dependence; and (v) relation to truth and accuracy. The epistemological analysis of these issues, preliminary to the normative one, lays the foundations to better frame the challenges and opportunities arising from the use of LLMs.
The paper examines the legal regulation and governance of “generative artificial intelligence” (AI), “foundation AI,” “large language models” (LLMs), and the “general-purpose” AI models of the AI Act. Attention is drawn to two potential sorcerer’s apprentices, namely, in the spirit of J. W. Goethe’s poem, people who were unable to control a situation they created. The focus is on developers and producers of technologies, such as LLMs, that bring about risks of discrimination and information hazards, malicious uses, and environmental harms; furthermore, the analysis dwells on the normative attempt of European Union legislators to govern misuses and overuses of LLMs with the AI Act. Scholars, private companies, and organisations have stressed the limits of this normative attempt. In addition to issues of competitiveness and legal certainty, bureaucratic burdens and standard development, the threat is the over-frequent revision of the law to keep pace with technological advancement. The paper illustrates how this threat has been present since the inception of the AI Act and recommends ways in which the law can avoid continuous amendment to address the challenges of technological innovation.
Recent advancements in Earth system science have been marked by the exponential increase in the availability of diverse, multivariate datasets characterised by moderate to high spatio-temporal resolutions. Earth System Data Cubes (ESDCs) have emerged as one suitable solution for transforming this flood of data into a simple yet robust data structure. ESDCs achieve this by organising data into an analysis-ready format aligned with a spatio-temporal grid, facilitating user-friendly analysis and diminishing the need for extensive technical data processing knowledge. Despite these significant benefits, the completion of the entire ESDC life cycle remains a challenging task. Obstacles are not only of a technical nature but also relate to domain-specific problems in Earth system research. There exist barriers to realising the full potential of data collections in light of novel cloud-based technologies, particularly in curating data tailored for specific application domains. These include transforming data to conform to a spatio-temporal grid with minimum distortions and managing complexities such as spatio-temporal autocorrelation issues. Addressing these challenges is pivotal for the effective application of Artificial Intelligence (AI) approaches. Furthermore, adhering to open science principles for data dissemination, reproducibility, visualisation, and reuse is crucial for fostering sustainable research. Overcoming these challenges offers a substantial opportunity to advance data-driven Earth system research, unlocking the full potential of an integrated, multidimensional view of Earth system processes. This is particularly true when such research is coupled with innovative research paradigms and technological progress.
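As a small illustration of the analysis-ready idea, the sketch below builds a toy data cube with xarray, selects a region by label, and writes it as chunked Zarr. The variables, grid, and store path are assumptions rather than a specific published ESDC, and dask and zarr are assumed to be installed alongside xarray.

```python
# A minimal analysis-ready data cube sketch on a regular spatio-temporal grid.
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2020-01-01", periods=24, freq="MS")
lat = np.arange(-89.875, 90, 0.25)
lon = np.arange(-179.875, 180, 0.25)

cube = xr.Dataset(
    {
        "lst": (("time", "lat", "lon"),
                np.random.rand(time.size, lat.size, lon.size).astype("float32")),
    },
    coords={"time": time, "lat": lat, "lon": lon},
)

# Analysis-ready: label-based selection and aggregation along the shared grid,
# with no bespoke preprocessing by the user.
subset = cube["lst"].sel(lat=slice(35, 70), lon=slice(-10, 40))  # a European box
print(subset.mean(dim="time"))

# Cloud-friendly dissemination as chunked Zarr (store path is hypothetical).
cube.chunk({"time": 12, "lat": 180, "lon": 180}).to_zarr("esdc_demo.zarr", mode="w")
```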
Edge AI is the fusion of edge computing and artificial intelligence (AI). It promises responsiveness, privacy preservation, and fault tolerance by moving parts of the AI workflow from centralized cloud data centers to geographically dispersed edge servers located at the source of the data. The scale of edge AI can vary from simple data preprocessing tasks to the whole machine learning stack. However, most edge AI implementations so far are limited to urban areas, where the infrastructure is highly dependable. This work instead focuses on a class of applications involved in environmental monitoring in remote, rural areas such as forests and rivers. Such applications face additional challenges, including failure proneness and limited access to the electricity grid and communication networks. We propose neuromorphic computing as a promising solution to the energy, communication, and computation constraints in such scenarios and identify directions for future research in neuromorphic edge AI for rural environmental monitoring. Proposed directions are distributed model synchronization, edge-only learning, aerial networks, spiking neural networks, and sensor integration.
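To give a flavour of the spiking approach mentioned above, here is a minimal leaky integrate-and-fire neuron in plain NumPy; all parameters are illustrative choices, not values from the paper, and real neuromorphic deployments would run on dedicated hardware rather than a CPU loop.

```python
# A minimal leaky integrate-and-fire (LIF) neuron sketch; parameters illustrative.
import numpy as np

def lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns the membrane trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for t, i_t in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt / tau * (v_rest - v) + i_t * dt
        if v >= v_thresh:           # a threshold crossing emits a spike...
            spikes.append(t)
            v = v_reset             # ...and resets the membrane potential
        trace.append(v)
    return np.array(trace), spikes

rng = np.random.default_rng(7)
current = rng.random(200) * 0.15    # noisy, sensor-like drive
_, spike_times = lif(current)
print(f"{len(spike_times)} spikes, first at steps {spike_times[:5]}")
```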
With recent leaps in large language model technology, conversational AIs offer increasingly sophisticated interactions. But is it fair to say that they can offer authentic relationships, perhaps even assuage the loneliness epidemic? In answering this question, this essay traces the history of AI authenticity, historically shaped by cultural imaginations of intelligent machines and human communication. The illusion of human-like interaction with AI has existed since at least the 1960s, when the term “Eliza effect” was named after the first chatbot, Eliza. Termed a “crisis of authenticity” by sociologist Sherry Turkle, the Eliza effect has stood for fears that AI interactions can undermine real human connections and leave users vulnerable to manipulation. More recently, however, researchers have begun investigating less anthropomorphic definitions of authenticity. The expectation, and perhaps fantasy, of authenticity stems, in turn, from a much longer history of technologically mediated communications, dating back to the invention of the telegraph in the nineteenth century. Read through this history, the essay concludes that AI relationships need not mimic human interactions but must instead acknowledge the artifice of AI, offering a new form of companionship in our mediated, often lonely, times.