The Vale–Maurelli (VM) approach to generating non-normal multivariate data involves the use of Fleishman polynomials applied to an underlying Gaussian random vector. This method has been extensively used in Monte Carlo studies during the last three decades to investigate the finite-sample performance of estimators under non-Gaussian conditions. The validity of conclusions drawn from these studies clearly depends on the range of distributions obtainable with the VM method. We deduce the distribution and the copula for a vector generated by a generalized VM transformation, and show that it is fundamentally linked to the underlying Gaussian distribution and copula. In the process we derive the distribution of the Fleishman polynomial in full generality. While data generated with the VM approach appears to be highly non-normal, its truly multivariate properties are close to the Gaussian case. A Monte Carlo study illustrates that generating data with a different copula than that implied by the VM approach severely weakens the performance of normal-theory based ML estimates.
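The transformation described above admits a compact sketch: correlated standard normals are pushed coordinate-wise through a Fleishman cubic Y = a + b·Z + c·Z² + d·Z³. The coefficients below are illustrative placeholders, not values solved from target moments; in practice they are obtained numerically from the target skewness and kurtosis, and the intermediate Gaussian correlations are adjusted so the transformed data match the target correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def vale_maurelli(n, corr, coeffs):
    """Generate n draws of a non-normal vector via a VM-type transform.

    corr   -- correlation matrix of the underlying Gaussian vector
    coeffs -- per-coordinate Fleishman coefficients (a, b, c, d)
    """
    L = np.linalg.cholesky(corr)                      # correlated Gaussian base
    Z = rng.standard_normal((n, corr.shape[0])) @ L.T
    out = np.empty_like(Z)
    for j, (a, b, c, d) in enumerate(coeffs):
        z = Z[:, j]
        out[:, j] = a + b * z + c * z**2 + d * z**3   # Fleishman polynomial
    return out

corr = np.array([[1.0, 0.5], [0.5, 1.0]])
# Placeholder coefficients giving mild skew (NOT solved from target moments).
coeffs = [(-0.1, 0.95, 0.1, 0.01), (-0.1, 0.95, 0.1, 0.01)]
X = vale_maurelli(100_000, corr, coeffs)
```

Note that, as the abstract emphasizes, the copula of `X` remains essentially Gaussian: the cubic acts on each margin separately and cannot change the underlying dependence structure.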
Most item response theory models are not robust to violations of conditional independence. Several modeling approaches (e.g., conditioning on other responses, additional random effects) exist that try to incorporate local item dependencies, but they have drawbacks such as the nonreproducibility of marginal probabilities and the resulting interpretation problems. In this paper, a new class of models making use of copulas to deal with local item dependencies is introduced. These models belong to the broader class of marginal models, in which margins and association structure are modeled separately. It is shown how this approach overcomes some of the problems associated with other local item dependency models.
It is widely believed that a joint factor analysis of item responses and response time (RT) may yield more precise ability scores than those conventionally predicted from responses only. For this purpose, a simple-structure factor model is often preferred, as it only requires specifying an additional measurement model for item-level RT while leaving the original item response theory (IRT) model for responses intact. The added speed factor indicated by item-level RT correlates with the ability factor in the IRT model, allowing RT data to carry additional information about respondents’ ability. However, parametric simple-structure factor models are often restrictive and fit poorly to empirical data, which prompts under-confidence in the suitability of a simple factor structure. In the present paper, we analyze the 2015 Programme for International Student Assessment mathematics data using a semiparametric simple-structure model. We conclude that a simple factor structure attains a decent fit after further parametric assumptions in the measurement model are sufficiently relaxed. Furthermore, our semiparametric model implies that the association between latent ability and speed/slowness is strong in the population, but the form of association is nonlinear. It follows that scoring based on the fitted model can substantially improve the precision of ability scores.
In this work, we consider extensions of the dual risk model with proportional gains by introducing dependence structures among gain sizes and gain interarrival times. Among other settings, we consider the case where the proportionality parameter is randomly chosen, the case where it is a uniformly distributed random variable, and the case where both upward and downward jumps may occur. Moreover, we consider the case with a causal dependence structure, as well as the case where the dependence is based on the generalized Farlie–Gumbel–Morgenstern copula. The ruin probability and the distribution of the time to ruin are investigated.
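As an illustration of the kind of dependence structure involved, a sample from the plain (non-generalized) FGM copula C(u, v) = uv·(1 + θ(1−u)(1−v)), |θ| ≤ 1, can be drawn by conditional inversion: given U = u, the conditional CDF of V is a quadratic in v that is solved in closed form. The generalized variant used in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def fgm_sample(n, theta):
    """Draw n pairs from the FGM copula with parameter theta in [-1, 1]."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)                 # conditional probability level
    a = theta * (1.0 - 2.0 * u)             # quadratic coefficient in v
    b = 1.0 + a
    # Solve a*v^2 - b*v + w = 0 for the root in [0, 1];
    # fall back to v = w (independence) when a is numerically zero.
    disc = np.sqrt(b * b - 4.0 * a * w)
    v = np.where(np.abs(a) < 1e-12, w, (b - disc) / (2.0 * a))
    return u, v

u, v = fgm_sample(100_000, theta=0.8)
```

For the FGM copula the Pearson correlation of (U, V) is θ/3, which caps the strength of dependence this family can express; this weak-dependence ceiling is one reason generalized variants are used.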
Preservation of stochastic orders through the system signature has captured the attention of researchers in recent years. Signature-based comparisons have been made for the usual stochastic order, hazard rate order, and likelihood ratio orders. However, for the mean residual life (MRL) order, it has recently been proved that the preservation result does not hold true in general, but rather holds for a particular class of distributions. In this paper, we study whether or not a similar preservation result holds for the mean inactivity time (MIT) order. We prove that the MIT order is not preserved from signatures to system lifetimes with independent and identically distributed (i.i.d.) components, but holds for special classes of distributions. The relationship between these classes and the order statistics is also highlighted. Furthermore, the distribution-free comparison of the performance of coherent systems with dependent and identically distributed (d.i.d.) components is studied under the MIT ordering, using diagonal-dependent copulas and distorted distributions.
Recently, the relevation transformation has received further attention from researchers, and some interesting results have been developed. It is well known that active redundancy at the component level results in a more reliable coherent system than active redundancy at the system level. However, the lack of study of this problem with relevation redundancy has prevented a full understanding of this generalization of active redundancy. In this note we deal with relevation redundancy in coherent systems of homogeneous components. In particular, for a series system of independent components, we prove that the lifetime of a system with relevation redundancy at the component level is larger than that with relevation redundancy at the system level in the sense of the usual stochastic order and the likelihood ratio order, respectively. For a coherent system with dependent components, we develop a sufficient condition, in terms of the domination function, for the usual stochastic order between the system lifetime with redundancy at the component level and that at the system level.
Response inhibition refers to an organism’s ability to suppress unwanted impulses, or actions and responses that are no longer required or have become inappropriate. In the stop-signal task, participants perform a response time task (go task), and occasionally the go stimulus is followed by a stop signal after a variable delay, signalling participants to withhold their response (stop task). The main interest of modeling lies in estimating the unobservable latency of the stopping process as a characterization of the response inhibition mechanism. Here we analyze and compare the underlying assumptions of different models, including parametric and non-parametric versions of the race model. New model classes based on the concept of copulas are introduced, and a number of unsolved problems facing all existing models are pointed out.
As indicated in Chapter Two, during its history Chinese underwent a dramatic typological change from being a rigid wh-movement language to being a complete wh-in-situ language. Roughly speaking, in the period of Old Chinese, wh-words used as objects needed to be fronted to the preverbal position, even though Chinese was an SVO language, like English. However, during the first half of Middle Chinese, wh-movement was gradually replaced by wh-in-situ.
This chapter discusses a fundamental change in copula structures from Old to Middle Chinese, which in turn caused a series of consequent developments that significantly altered the texture of Chinese grammar at the time. Old Chinese lacked a copula verb: unlike other predicates, the copula construction had no linking verb between the subject and the complement, and was instead obligatorily marked by a sentence-final particle yě following the complement.
This article investigates the development of wh-in-situ questions in French by examining a three-year kindergarten dataset of spontaneous productions with 16 children between 2;5 and 5;11. The distribution of the wh-phrases is statistically examined in relation to age, verb form (Fixed be form c’est ‘it is’ vs. Free be forms vs. Free lexical verbs), and grammatical category of the wh-word (Pronoun vs. Adverb). Results show that wh-in-situ remains prevalent throughout the period despite a steady increase in wh-ex-situ. Verb form (Fixed vs. All free forms) is a discriminating variable for the wh-position in all three years, and it interacts with the category of the wh-word. The Fixed be form c’est favours in-situ wh-pronouns (c’est qui Taz ?), whereas the Free forms favour wh-ex-situ questions, and massively co-occur with wh-adverbs (combien ça coûte ?). The emergence of the ex-situ qu’est-ce que ‘what is it that’, as opposed to the in-situ quoi ‘what’, is identified as a factor accounting for the gradual increase in wh-ex-situ. Finally, most outliers (wh-in-situ with Free forms) are shown to belong to the same paradigm as c’est in-situ questions: non-presuppositional questions, as evidenced by the frequent use of là ‘there’, which, like c’est, is a deictic item.
This chapter considers various models that focus largely on serially dependent variables and the respective methodologies developed with a COM–Poisson underpinning. This chapter first introduces the reader to the various stochastic processes that have been established, including a homogeneous COM–Poisson process, a copula-based COM–Poisson Markov model, and a COM–Poisson hidden Markov model. Meanwhile, there are two approaches for conducting time series analysis on time-dependent count data. One approach assumes that the time dependence occurs with respect to the intensity vector. Under this framework, the usual time series models that assume a continuous variable can be applied. Alternatively, the time series model can be applied directly to the outcomes themselves. Maintaining the discrete nature of the observations, however, requires a different approach referred to as a thinning-based method. Different thinning-based operators can be considered for such models. The chapter then broadens the discussion of dependence to consider COM–Poisson-based spatio-temporal models, thus allowing both for serial and spatial dependence among variables.
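The thinning-based approach mentioned above can be sketched with the simplest such model, an INAR(1) recursion X_t = α ∘ X_{t−1} + ε_t, where α ∘ X is binomial thinning (each of the X_{t−1} counts survives independently with probability α). The Poisson innovation below is a simplifying stand-in; the chapter's models would use COM–Poisson innovations instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def inar1(n, alpha, lam, x0=0):
    """Simulate an INAR(1) count series with binomial thinning.

    alpha -- survival probability in the thinning operator
    lam   -- mean of the Poisson innovation (simplifying assumption)
    """
    x = np.empty(n, dtype=int)
    prev = x0
    for t in range(n):
        survivors = rng.binomial(prev, alpha)   # alpha ∘ X_{t-1}
        prev = survivors + rng.poisson(lam)     # add the innovation
        x[t] = prev
    return x

series = inar1(5_000, alpha=0.6, lam=2.0)
```

Thinning keeps the outcomes integer-valued, which is the point of the construction: the stationary mean is λ/(1−α) and the lag-1 autocorrelation equals α, so here the series fluctuates around 5 with autocorrelation about 0.6.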
Quantifying tail dependence is an important issue in insurance and risk management. The prevalent tail dependence coefficient (TDC), however, is known to underestimate the degree of tail dependence and it does not capture non-exchangeable tail dependence since it evaluates the limiting tail probability only along the main diagonal. To overcome these issues, two novel tail dependence measures called the maximal tail concordance measure (MTCM) and the average tail concordance measure (ATCM) are proposed. Both measures are constructed based on tail copulas and possess clear probabilistic interpretations in that the MTCM evaluates the largest limiting probability among all comparable rectangles in the tail, and the ATCM is a normalized average of these limiting probabilities. In contrast to the TDC, the proposed measures can capture non-exchangeable tail dependence. Analytical forms of the proposed measures are also derived for various copulas. A real data analysis reveals striking tail dependence and tail non-exchangeability of the return series of stock indices, particularly in periods of financial distress.
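For contrast with the proposed measures, the diagonal-only TDC that the abstract criticises can be estimated nonparametrically by evaluating the joint tail probability at a small threshold u on rank-based pseudo-observations. The threshold value below is an arbitrary choice, and this finite-u estimate only approximates the limiting coefficient.

```python
import numpy as np

def empirical_tdc(x, y, u=0.05):
    """Empirical lower tail dependence lambda_L(u) = P(U<=u, V<=u) / u.

    Evaluates the joint tail probability only along the main diagonal,
    which is why the TDC misses non-exchangeable tail behaviour.
    """
    n = len(x)
    ux = np.argsort(np.argsort(x)) / n   # pseudo-observations (ranks / n)
    uy = np.argsort(np.argsort(y)) / n
    return np.mean((ux <= u) & (uy <= u)) / u

rng = np.random.default_rng(2)
x, y = rng.standard_normal((2, 50_000))  # independent data: true TDC is 0
lam_hat = empirical_tdc(x, y)
```

For comonotone data the estimator returns a value near 1, while for independent data it returns roughly u rather than exactly 0, a finite-sample bias that also motivates measures with clearer probabilistic interpretations.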
In this paper, we study the estimation of a scale parameter from a sample of lifetimes of coherent systems with a fixed structure. We assume that the components are independent and identically distributed having a common distribution which belongs to a scale parameter family. Some results are obtained as well for dependent (exchangeable) components. To this end, we will use the representations for the distribution function of a coherent system based on signatures. We prove that the efficiency of the estimators depends on the structure of the system and on the scale parameter family. In the dependence case, it also depends on the baseline copula function.
The distribution of human leukocyte antigens in the population assists in matching solid organ donors and recipients when the typing methods used do not provide sufficiently precise information. This is made possible by linkage disequilibrium (LD), where alleles co-occur more often than random chance would suggest. There is a trade-off between the high bias and low variance of a broad sample from the population and the low bias but high variance of a focused sample. Some of this trade-off could be alleviated if sub-populations shared LD despite having different allele frequencies. These experiments show that Bayesian estimation can balance bias and variance by tuning the effective sample size of the reference panel, but the LD as represented by an additive or multiplicative copula is not shared.
Traditional logic dominates Western thinking by centering thinking on propositions and thereby restricting the meaning of "being" to its derivative, categorial meaning. In Heidegger’s view, it fails in this way to realize the promise of a philosophical logic, one that is capable of tracing traditional logic and thinking generally back to their foundation, i.e., the being/unconcealment of the logos from which they are derived. This chapter examines how, as a first step toward realizing that promise, Heidegger questions the supremacy of logic in Western thinking through a “critical deconstruction” of four theses underlying it: the thesis that judgment is the place of truth rather than vice versa, that the copula exhausts the meaning of "being," that nothingness originates from negation rather than vice versa, and that the predicative structure of propositions constitutes the essence of language. In conclusion, the chapter suggests that the construction ultimately accompanying Heidegger’s deconstruction is to be found, not in language as Dasein’s comportment, but in the revealing capacity of tautology to which he appeals in his final seminar (1973).
The choice of a copula model from limited data is a hard but important task. Motivated by the visual patterns that different copula models produce in smoothed density heatmaps, we consider copula model selection as an image recognition problem. We extract image features from heatmaps using the pre-trained AlexNet and present workflows for model selection that combine image features with statistical information. We employ dimension reduction via Principal Component and Linear Discriminant Analyses and use a Support Vector Machine classifier. Simulation studies show that the use of image data improves the accuracy of the copula model selection task, particularly in scenarios where sample sizes and correlations are low. This finding indicates that transfer learning can support statistical procedures of model selection. We demonstrate application of the proposed approach to the joint modelling of weekly returns of the MSCI and RISX indices.
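The first step of the workflow above, turning bivariate copula samples into a smoothed density heatmap "image", might look as follows. The bin count and the crude box-smoothing kernel are arbitrary stand-ins for the paper's actual preprocessing, and the AlexNet feature extraction, dimension reduction, and SVM classifier are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def copula_heatmap(u, v, bins=32):
    """Build a normalised, smoothed 2D density image from copula samples."""
    img, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]])
    # Crude 3x3 box smoothing in lieu of a proper kernel density estimate.
    kernel = np.ones((3, 3)) / 9.0
    padded = np.pad(img, 1, mode="edge")
    smooth = sum(
        kernel[i, j] * padded[i:i + bins, j:j + bins]
        for i in range(3) for j in range(3)
    )
    return smooth / smooth.sum()          # normalised density image

# Example: a heatmap for an independence-copula sample.
u, v = rng.uniform(size=(2, 10_000))
img = copula_heatmap(u, v)
```

Images like `img` are what distinguish copula families visually, e.g. the corner mass of a Clayton copula versus the symmetric ellipse of a Gaussian one, which is what makes pre-trained image networks plausible feature extractors for this selection task.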
The framework of this paper is a defence of Burnet’s construal of Apology 30b2–4. Socrates does not claim, as he is standardly translated, that virtue makes you rich, but that virtue makes money and everything else good for you. This view of the relation between virtue and wealth is paralleled in dialogues of every period, and a sophisticated development of it appears in Aristotle. My philological defence of the philosophically preferable translation extends recent scholarly work on εἶναι in Plato and Aristotle to γίγνεσθαι, which is the main verb in the disputed sentence. When attached to a subject, both verbs make a complete statement on their own, but a statement that is further completable by adding a complement. The important point is that the addition of a complement does not change the meaning of the verb from existence to the copula. Proving this is a lengthy task which takes me into some of the deeper reaches of Platonic and Aristotelian ontology, and into discussion of whether Greek ever acquired a verb that corresponds to modern verbs of existence. I conclude that even when later authors such as Philo Judaeus, Sextus Empiricus and Plotinus debate what we naturally translate as issues of existence, none of the verbs they use (εἶναι, ὑπάρχειν, ὑφεστηκέναι) can be said to have existential meaning.
Chapter 4 provides examples of reanalyses, rather than different choices, that are prompted by the Principle of Determinacy. The first change involves the reanalysis of a loosely adjoined phrase as a subject argument because a topicalized subject does not result in an optimal computation. The principle also accounts for changes involving copulas, both the change from demonstrative to copula and from topic to subject. Auxiliaries and quantifiers in English provide fertile ground to investigate determinacy, because these move from lower to higher heads and specifiers, respectively. It is shown that auxiliary movement indeed violates determinacy and that options exist to circumvent it, e.g. skipping T and reanalyzing as a higher functional head. Floating quantifiers do not violate determinacy because they first move as QPs and subsequent moves are of DPs.
The study provides comparative risk analyses of Australia’s three Victorian dairy regions. Historical data were used to identify business risk and financial viability. Multivariate distributions were fitted to the historical price, production, and input costs using copula models, capturing non-linear dependence among the variables. Monte Carlo simulation methods were then used to generate cash flows for a decade. Factors that influenced profitability the most were identified using sensitivity analysis. The dairies in the Northern region have faced water reductions, whereas those of Gippsland and South West have more positive indicators. Our analysis summarizes long-term risks and net farm profits by utilizing survey data in a probabilistic manner.
This paper investigates the issue of stochastic comparison of multi-active redundancies at the component level versus the system level. Under the assumption that all components are statistically dependent, we present comparison results for both complete matching and nonmatching spares, in the sense of the hazard rate, reversed hazard rate, and likelihood ratio orders, respectively. We also obtain two comparison results between the relative agings of the resulting systems at the component level and the system level. Several numerical examples are provided to illustrate the theoretical results.