This chapter addresses some of the classic problems of historical analysis, focusing on the ways in which the intellectual options generated by the discipline's complex history can help historians address the challenges those problems pose. It presents a discussion of the problems of objectivity, bias, and judgment in history. It focuses on historians' necessarily paradoxical yet coherent conception of their own relationship to history – of which they are, according to the logic of the discipline itself, both students and products. It suggests that postmodern theory about the nature of historical knowledge both recapitulates and deepens this fundamental historicist position. It discusses the standards of evidentiary support and of logical argumentation that historians use to evaluate the plausibility and productivity of historical interpretations. Finally, this chapter explores once again the unique pedagogical usefulness of History as a discipline that is irreducibly and necessarily perspectival, interpretive, and focused on standards of inquiry rather than on the production of actionable outcomes.
This scoping review of conceptualizations of fundamentalism scrutinizes the concept's domain of application, defining characteristics, and liability to bias. We find fundamentalism in four domains of application: Christianity, other Abrahamic religions, non-Abrahamic religions, and non-religious phenomena. The defining characteristics which we identify are organized into five categories: belief, behavior, emotion, goal, and structure. We find that different kinds of fundamentalisms are defined by different characteristics, with violent and oppressive behaviors, and political beliefs and goals being emphasized for non-Christian fundamentalisms. Additionally, we find that the locus of fundamentalism studies is the Global North. Based on these findings, we conclude that the concept is prone to bias. When conceptualizing fundamentalism, three considerations deserve attention: the mutual dependency between the domain of application and the specification of defining characteristics; the question of usefulness of scientific concepts; and the connection between conceptual ambiguity and the risk of bias in the study of fundamentalism.
There is a large literature evaluating the dual process model of cognition, including the biases and heuristics it implies. However, our understanding of what causes effortful thinking remains incomplete. To advance this literature, we focus on what triggers decision-makers to switch from the intuitive process (System 1) to the more deliberative process (System 2). We examine how the framing of incentives (gains versus losses) influences decision processing. To evaluate this, we design experiments based on a task developed to distinguish between intuitive and deliberative thinking. Replicating previous research, we find that losses elicit more cognitive effort. Most importantly, we also find that losses differentially reduce the incidence of intuitive answers, consistent with triggering a shift between these modes of cognition. We find substantial heterogeneity in these effects, with young men being much more responsive to the loss framing. To complement these findings, we provide robustness tests of our results using aggregated data, the imposition of a constraint to hinder the activation of System 2, and an analysis of incorrect, but unintuitive, answers to inform hybrid models of choice.
The main principles underpinning measurement for healthcare improvement are outlined in this Element. Although there is no single formula for achieving optimal measurement to support improvement, a fundamental principle is the importance of using multiple measures and approaches to gathering data. Using a single measure falls short in capturing the multifaceted aspects of care across diverse patient populations, as well as all the intended and unintended consequences of improvement interventions within various quality domains. Even within a single domain, improvement efforts can succeed in several ways and go wrong in others. Therefore, a family of measures is usually necessary. Clearly communicating a plausible theory outlining how an intervention will lead to desired outcomes informs decisions about the scope and types of measurement used. Improvement teams must tread carefully to avoid imposing undue burdens on patients, clinicians, or organisations. This title is also available as Open Access on Cambridge Core.
Chapter 3 presents localized peace enforcement theory. It first discusses the challenges facing individuals involved in a communal dispute. Reflecting on these obstacles to peaceful dispute resolution, the chapter outlines a formal micro-level theory of dispute escalation between two individuals from different social groups who live in the same community. It explains how international intervention shapes escalation dynamics. The chapter then shifts the focus to local perceptions of intervener impartiality, which the theory posits are a key determinant of whether a UN intervention succeeds in preventing the onset of violence. The theory identifies the importance of multilateralism, diversity, and the nonuse of force as critical factors shaping local perceptions and, as a result, UN peacekeeping effectiveness. Critically, the theory does not suggest that UN peacekeepers will always succeed, or that all kinds of UN peacekeepers will succeed. Indeed, perceptions of UN peacekeepers vary depending on the troop-contributing country and the identity of the civilians involved in the dispute. The chapter closes with a discussion of the most important hypotheses derived from the theory.
This paper provides results on a form of adaptive testing that is used frequently in intelligence testing. In these tests, items are presented in order of increasing difficulty. The presentation of items is adaptive in the sense that a session is discontinued once a test taker produces a certain number of incorrect responses in sequence, with subsequent (not observed) responses commonly scored as wrong. The Stanford-Binet Intelligence Scales (SB5; Riverside Publishing Company, 2003), the Kaufman Assessment Battery for Children (KABC-II; Kaufman and Kaufman, 2004), the Kaufman Adolescent and Adult Intelligence Test (Kaufman and Kaufman 2014), and the Universal Nonverbal Intelligence Test (2nd ed.) (Bracken and McCallum 2015) are among the many examples using this rule. He and Wolfe (Educ Psychol Meas 72(5):808–826, 2012. https://doi.org/10.1177/0013164412441937) compared different ability estimation methods in a simulation study for this discontinue-rule adaptation of test length. However, to our knowledge, there has been no study, based on analytic arguments drawing on probability theory, of the underlying distributional properties of what these authors call stochastic censoring of responses. The study results obtained by He and Wolfe (Educ Psychol Meas 72(5):808–826, 2012. https://doi.org/10.1177/0013164412441937) agree with results presented by DeAyala et al. (J Educ Meas 38:213–234, 2001) as well as Rose et al. (Modeling non-ignorable missing data with item response theory (IRT; ETS RR-10-11), Educational Testing Service, Princeton, 2010) and Rose et al. (Psychometrika 82:795–819, 2017. https://doi.org/10.1007/s11336-016-9544-7) in that ability estimates are biased most when the not observed responses are scored as wrong. Because this scoring is used operationally, more research is needed to improve practice in this field. The paper extends existing research on adaptivity by discontinue rules in intelligence tests in two ways: first, an analytical study of the distributional properties of discontinue-rule-scored items is presented; second, a simulation is presented that includes additional scoring rules and uses ability estimators that may be suitable for reducing bias in discontinue-rule-scored intelligence tests.
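To make the discontinue rule concrete, the following minimal simulation sketch administers Rasch-model items in increasing order of difficulty and stops after a run of consecutive errors; the three-in-a-row criterion, the item parameters, and all function names are illustrative assumptions, not the operational procedure of any of the tests cited above.

```python
import numpy as np

rng = np.random.default_rng(42)

def administer(theta, difficulties, stop_after=3):
    """Present items in order of increasing difficulty under a Rasch model;
    discontinue once `stop_after` consecutive incorrect responses occur."""
    responses, streak = [], 0
    for b in np.sort(difficulties):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))   # P(correct | theta, b)
        x = int(rng.random() < p)
        responses.append(x)
        streak = 0 if x else streak + 1
        if streak == stop_after:
            break
    # Operational scoring: items never reached are scored as wrong (0),
    # the scoring the paper identifies as most bias-prone.
    return responses + [0] * (len(difficulties) - len(responses))

print(administer(theta=0.0, difficulties=np.linspace(-2.0, 3.0, 20)))
```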
Lord developed an approximation for the bias function of the maximum likelihood estimate in the context of the three-parameter logistic model. Using a Taylor expansion of the likelihood equation, he obtained an equation that includes the conditional expectation, given true ability, of the discrepancy between the maximum likelihood estimate and true ability. All terms of order higher than n^{-1} are ignored, where n indicates the number of items. Lord assumed that all item and individual parameters are bounded, all item parameters are known or well-estimated, and the number of items is reasonably large. In the present paper, an approximation for the bias function of the maximum likelihood estimate of the latent trait, or ability, will be developed using the same assumptions for the more general case where item responses are discrete. This will include the dichotomous response level, for which the three-parameter logistic model has been discussed, the graded response level, and the nominal response level. Some observations will be made for both dichotomous and graded response levels.
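As a reading aid, the structure of such an approximation can be sketched in generic notation (the symbols below are mine and do not reproduce Lord's exact terms): the likelihood equation is expanded around the true ability, and terms of order higher than n^{-1} are dropped.

```latex
% Expand the likelihood equation around the true ability \theta:
0 = \ell'(\hat\theta)
  \approx \ell'(\theta) + (\hat\theta - \theta)\,\ell''(\theta)
  + \tfrac{1}{2}(\hat\theta - \theta)^2\,\ell'''(\theta)
% Taking conditional expectations given \theta yields a bias function
% of order n^{-1}, with higher-order terms ignored:
\mathbb{E}(\hat\theta - \theta \mid \theta) = \frac{B(\theta)}{n} + o(n^{-1})
```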
The rationale and actual procedures of two nonparametric approaches, called the Bivariate P.D.F. Approach and the Conditional P.D.F. Approach, for estimating the operating characteristic of a discrete item response (that is, the conditional probability, given the latent trait, that the examinee's response is that specific response) are introduced and discussed. These methods have two distinguishing features: (a) estimation is made without assuming any mathematical form, and (b) it is based upon a relatively small sample of several hundred to a few thousand examinees.
Some examples of the results obtained by the Simple Sum Procedure and the Differential Weight Procedure of the Conditional P.D.F. Approach are given, using simulated data. The usefulness of these nonparametric methods is also discussed.
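The Simple Sum and Differential Weight Procedures themselves are not reproduced here; as a generic illustration of the same nonparametric idea (estimating an operating characteristic without assuming a mathematical form, from a sample of the size the abstract mentions), the sketch below uses a Nadaraya-Watson kernel regression. The function names, the bandwidth, and the simulated data are all my assumptions.

```python
import numpy as np

def kernel_icc(theta_hat, responses, grid, bandwidth=0.4):
    """Nonparametric estimate of P(correct | theta): a Nadaraya-Watson
    kernel regression of 0/1 responses on estimated trait values."""
    w = np.exp(-0.5 * ((grid[:, None] - theta_hat[None, :]) / bandwidth) ** 2)
    return (w * responses).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
theta = rng.normal(size=1500)          # sample on the "few thousand examinees" scale
p_true = 1.0 / (1.0 + np.exp(-1.3 * (theta - 0.2)))
x = (rng.random(theta.size) < p_true).astype(float)

grid = np.linspace(-3, 3, 61)
print(kernel_icc(theta, x, grid)[::10])   # estimated curve at a few grid points
```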
Samejima has recently given an approximation for the bias function for the maximum likelihood estimate of the latent trait in the general case where item responses are discrete, generalizing Lord's bias function in the three-parameter logistic model for the dichotomous response level. In the present paper, observations are made about the behavior of this bias function for the dichotomous response level in general, and also with respect to several widely used mathematical models. Some empirical examples are given.
The purpose of this paper is to present a hypothesis testing and estimation procedure, Crossing SIBTEST, for detecting crossing DIF. Crossing DIF exists when the difference in the probabilities of a correct answer for the two examinee groups changes signs as ability level is varied. In item response theory terms, crossing DIF is indicated by two crossing item characteristic curves. Our new procedure, denoted as Crossing SIBTEST, first estimates the matching subtest score at which crossing occurs using least squares regression analysis. A Crossing SIBTEST statistic then is used to test the hypothesis of crossing DIF. The performance of Crossing SIBTEST is evaluated in this study.
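To see what crossing DIF looks like, the sketch below computes two hypothetical 2PL item characteristic curves whose between-group difference changes sign along the ability scale; the parameter values are illustrative assumptions, and this illustrates crossing item characteristic curves, not the Crossing SIBTEST statistic itself.

```python
import numpy as np

def icc(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 801)
ref = icc(theta, a=1.2, b=0.0)    # reference group
foc = icc(theta, a=0.7, b=-0.3)   # focal group: flatter curve, easier at low ability

# Crossing DIF: the sign of the between-group difference flips with ability.
diff = ref - foc
cross = theta[np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]]
print("curves cross near theta =", cross)
```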
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee’s ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators. A numerical example is presented to illustrate how to apply the formulae to evaluate the impact of uncertainty about item parameters on ability estimation and the appropriateness of estimating ability using the regular MLE or WLE method.
For analyses with missing data, some popular procedures delete cases with missing values, perform analysis with “missing value” correlation or covariance matrices, or estimate missing values by sample means. There are objections to each of these procedures. Several procedures are outlined here for replacing missing values by regression values obtained in various ways, and for adjusting coefficients (such as factor score coefficients) when data are missing. None of the procedures are complex or expensive.
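A minimal sketch of one such procedure, replacing the missing values of one variable with least-squares regression predictions from the remaining variables (assumed complete here); the function and the toy data are mine, not the paper's notation.

```python
import numpy as np

def regression_impute(X, target):
    """Fill NaNs in column `target` with least-squares regression
    predictions from the other columns, fit on complete cases."""
    X = X.copy()
    miss = np.isnan(X[:, target])
    preds = [j for j in range(X.shape[1]) if j != target]
    A = np.column_stack([np.ones(X.shape[0]), X[:, preds]])
    coef, *_ = np.linalg.lstsq(A[~miss], X[~miss, target], rcond=None)
    X[miss, target] = A[miss] @ coef
    return X

X = np.array([[1.0, 2.0], [2.0, np.nan], [3.0, 6.1], [4.0, 8.2]])
print(regression_impute(X, target=1))   # NaN replaced by its regression value
```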
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular textbooks are consistent only when the population value of the regression coefficient is zero. The sample standardized regression coefficients are also biased in general, although this should not be a concern in practice when the sample size is not too small. Monte Carlo results imply that, for both standardized and unstandardized sample regression coefficients, SE estimates based on asymptotics tend to under-predict the empirical ones at smaller sample sizes.
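The contrast the paper draws can be illustrated numerically. In a simple regression the standardized slope equals the sample correlation, so a textbook-style SE can be compared against a bootstrap SE; the snippet below is my sketch of that comparison, not the paper's formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)   # population standardized slope about 0.29

def std_beta(x, y):
    """Standardized slope of a simple regression = sample correlation."""
    return np.corrcoef(x, y)[0, 1]

b = std_beta(x, y)
# Textbook-style SE, which treats the standardized coefficient like an
# ordinary slope; per the paper, consistent only when the population
# coefficient is zero:
se_textbook = np.sqrt((1.0 - b**2) / (n - 2))

# Bootstrap SE: re-estimate the coefficient on resampled data.
boot = np.array([std_beta(x[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(2000))])
print(b, se_textbook, boot.std(ddof=1))
```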
The test information function serves important roles in latent trait models and in their applications. Among others, it has been used as the measure of accuracy in ability estimation. A question arises, however, if the test information function is accurate enough for all meaningful levels of ability relative to the test, especially when the number of test items is relatively small (e.g., less than 50). In the present paper, using the constant information model and constant amounts of test information for a finite interval of ability, simulated data were produced for eight different levels of ability and for twenty different numbers of test items ranging between 10 and 200. Analyses of these data suggest that it is desirable to consider some modification of the test information function when it is used as the measure of accuracy in ability estimation.
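For reference, the standard definitions behind this discussion, in generic notation (the 2PL item information is shown as a common special case, not as the constant information model used in the paper):

```latex
I(\theta) = \sum_{j=1}^{n} I_j(\theta), \qquad
\operatorname{SE}(\hat\theta) \approx \frac{1}{\sqrt{I(\theta)}},
\qquad \text{e.g., for the 2PL: } I_j(\theta) = a_j^{2}\, P_j(\theta)\,\bigl(1 - P_j(\theta)\bigr)
```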
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
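One way to write the contrast between the two models (the symbols are mine; the paper's own parameterization may differ):

```latex
% Moderated multiple regression (MMR), estimated by least squares:
y = \beta_0 + \beta_1 x + \beta_2 z + \beta_3 x z + e
% Two-level regression: the coefficients of y on x are themselves
% regressed on the moderator z, with random components u_0 and u_1:
y = b_0 + b_1 x + e, \qquad
b_0 = \gamma_{00} + \gamma_{01} z + u_0, \qquad
b_1 = \gamma_{10} + \gamma_{11} z + u_1
```

On this reading, random coefficient components induce heteroscedasticity in the composite MMR error, which is consistent with the abstract's finding that the two-level NML analysis gains efficiency exactly when heteroscedasticity exists.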
A universal basic income is widely endorsed as a critical feature of effective governance. It is also growing in popularity in an era of substantial collective wealth alongside growing inequality. But how could it work? Current economic policies necessarily influence wealth distributions, but they are often sufficiently complicated that they hide their inefficiencies. Simplifications based on network science can offer plausible solutions and even ways to base a universal basic income on merit. Here we will examine a case study based on a universal basic income for researchers. This is an important case because numerous funding agencies currently require proposal processes with high administrative costs. These are costly for the proposal writers, for their evaluators, and for the progress of science itself. Moreover, the outcomes are known to be biased and inefficiently managed. Network science can help us redesign funding allocations in a less costly and potentially more equitable way.
As its name indicates, algorithmic regulation relies on the automation of regulatory processes through algorithms. Examining the impact of algorithmic regulation on the rule of law hence first requires an understanding of how algorithms work. In this chapter, I therefore start by focusing on the technical aspects of algorithmic systems (Section 2.1), and complement this discussion with an overview of their societal impact, emphasising their societal embeddedness and the consequences thereof (Section 2.2). Next, I examine how and why public authorities rely on algorithmic systems to inform and take administrative acts, with special attention to the historical adoption of such systems, and their impact on the role of discretion (Section 2.3). Finally, I draw some conclusions for subsequent chapters (Section 2.4).
This Element engages with the epistemic significance of disagreement, focusing on its skeptical implications. It examines various types of disagreement-motivated skepticism in ancient philosophy, ethics, philosophy of religion, and general epistemology. In each case, it favors suspension of judgment as the seemingly appropriate response to the realization of disagreement. One main line of argument pursued in the Element is that, since in real-life disputes we have limited or inaccurate information about both our own epistemic standing and the epistemic standing of our dissenters, personal information and self-trust can rarely function as symmetry breakers in favor of our own views.
A core normative assumption of welfare economics is that people ought to maximise utility and, as a corollary of that, they should be consistent in their choices. Behavioural economists have observed that people demonstrate systematic choice inconsistencies, but rather than relaxing the normative assumption of utility maximisation they tend to attribute these behaviours to individual error. I argue in this article that this, in itself, is an error – an ‘error error’. In reality, a planner cannot hope to understand the multifarious desires that drive a person’s choices. Consequently, she is not able to discern which choice in an inconsistent set is erroneous. Moreover, those who are inconsistent may view neither of their choices as erroneous if the context interacts meaningfully with their valuation of outcomes. Others are similarly opposed to planners paternalistically intervening in the market mechanism to correct for behavioural inconsistencies, and advocate that the free market is the best means by which people can settle on mutually agreeable exchanges. However, I maintain that policymakers have a legitimate role in also enhancing people’s agentic capabilities. The most important way in which to achieve this is to invest in aspects of human capital and to create institutions that are broadly considered foundational to a person’s agency. However, there is also a role for so-called boosts to help to correct basic characterisation errors. I further contend that government regulations against self-interested acts of behaviourally informed manipulation by one party over another are legitimate, to protect the manipulated party from undesired inconsistency in their choices.
The identified victim effect is the phenomenon in which people tend to contribute more to identified than to unidentified victims. Kogut and Ritov (Journal of Behavioral Decision Making, 18(3), 157–167, 2005) found that the identified victim effect was limited to a single victim and driven by empathic emotions. In a pre-registered experiment with an online U.S. American MTurk sample on CloudResearch (N = 2003), we conducted a close replication and extension of Experiment 2 from Kogut and Ritov (Journal of Behavioral Decision Making, 18(3), 157–167, 2005). The replication findings failed to provide empirical support for the identified single victim effect hypothesis, since we found no evidence of differences in willingness to contribute when comparing a single identified victim to a single unidentified victim (ηp² = .00, 90% CI [0.00, 0.00]), and no indication of the target article’s interaction between singularity and identifiability (original: ηp² = .062, 90% CI [0.01, 0.15]; replication: ηp² = .00, 90% CI [0.00, 0.00]). Extending the replication to conduct a conceptual replication of Kogut and Ritov (Organizational Behavior and Human Decision Processes, 104(2), 150–157, 2007), we investigated a boundary condition of the effect: group belonging. We found support for an ingroup bias in helping behaviors and indications that empathic emotions and perceived responsibility contribute to this effect. We discuss differences between our study and the target article and implications for the literature on the identified victim effect.
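For readers unfamiliar with the effect-size metric reported above, partial eta squared is conventionally defined as the ratio of the effect sum of squares to the sum of the effect and error sums of squares:

```latex
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
```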