
Prospects for Analogue Confirmation

Published online by Cambridge University Press:  09 June 2022

Paul Bartha*
Affiliation:
University of British Columbia, Department of Philosophy, Vancouver, British Columbia, Canada

Abstract

In analogical reasoning, observations about one or more source domains provide varying degrees of support for a conjecture about a target domain. Norton (2021) challenges the usefulness of formal models of analogical inference. Other philosophers (Dardashti et al. 2019) develop just such formal models in order to show how analogue experiments can confirm a hypothesis, even when the target domain is inaccessible. This paper defends the value of quasi-formal models of analogical reasoning. Such models are broadly compatible with Norton’s position, but help to clarify the structure of analogical reasoning and to identify basic requirements for a good analogical inference.

Type
Symposia Paper
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

In broad terms, analogical arguments use observations about one or more source domains to provide inductive support for a hypothesis about a target domain (Bartha 2010). Let S and T stand for source and target domains, and let P(S) and P*(T) represent the known positive analogy (analogous properties). Let Q(S) represent a further property of the source and let Q*(T) be the conjectured analogy. The positive analogy, together with Q(S), is meant to provide inductive support for Q*(T).

Some such arguments are very strong. Tests conducted on model ships (the source) provide highly reliable information about the stability of full-size vessels (the target) (Sterrett 2017a). Some analogical arguments have intermediate strength. Neuroscientists interested in the genetic mechanisms that lead to neurodegenerative disorders in humans employ animal models, typically mice, to generate, support or refine hypotheses about how these diseases may be caused or treated in humans (Ahmad-Annuar et al. 2003; Fisher and Bannerman 2019). Finally, modest or even weak analogical arguments are useful in showing that a hypothesis is somewhat plausible. Anthropologists use knowledge about an object produced by a familiar culture to motivate potential explanations about a similar artefact from a vanished culture (Chapman and Wylie 2016).

For the purposes of this paper, we may suppose that the above distinction lines up with a familiar Bayesian distinction between three grades of inductive support. Let E represent observations about the source domain and H a hypothesis about the target. Roughly speaking, strong analogical confirmation corresponds to Pr(H/E) > r for some threshold r, incremental (intermediate) analogical confirmation corresponds to Pr(H/E) > Pr(H), and analogical plausibility may be interpreted as establishing non-negligible prior probability Pr(H).
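To keep these grades straight in what follows, here is a minimal sketch (my own toy illustration in Python, with a hypothetical threshold r and a hypothetical cut-off for a “non-negligible” prior, neither of which is specified in the text) of how a prior and a posterior probability sort into the three categories:

```python
def grades_of_support(prior, posterior, threshold=0.9, negligible=0.05):
    """Classify inductive support in the three Bayesian grades used in the text.
    The threshold and negligibility cut-off are illustrative placeholders."""
    grades = []
    if posterior > threshold:
        grades.append("strong confirmation: Pr(H/E) > r")
    if posterior > prior:
        grades.append("incremental confirmation: Pr(H/E) > Pr(H)")
    if prior > negligible:
        grades.append("plausibility: non-negligible prior Pr(H)")
    return grades or ["no support on any of the three grades"]

# A hypothesis with a modest prior that the evidence raises, but not past the threshold:
print(grades_of_support(prior=0.2, posterior=0.35))
# ['incremental confirmation: Pr(H/E) > Pr(H)', 'plausibility: non-negligible prior Pr(H)']
```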

This article argues that we can and should develop models for evaluating these three classes of analogical argument. Any attempt to develop such models must contend with two current discussions about analogical reasoning. The first is a challenge from Norton, who suggests that formal models of analogical reasoning are fruitless. The second, a debate about analogical inferences with inaccessible target domains, provides impetus for developing just such models.

For Norton (2021), a formal model is an abstract, universal schema that sets standards for a good argument. Norton’s material theory rejects universal models for any form of inductive inference. Instead, inductive inferences are warranted by “local” material facts. Inductive schemas, to the extent that they are valid, derive their legitimacy from these local facts (2021, 270). In the case of analogical reasoning, however, Norton rejects formal schemas altogether. He argues that in evaluating an analogical inference, scientists can and should abandon abstract rubrics and focus on empirical investigation of the source and target domains. In a similar spirit, Currie writes that a formal approach can “obscure the local warrants” of analogical inferences and “misses where the action is” (2018, 197). I agree with Norton and Currie that local facts do the heavy lifting in assessing analogical arguments. I argue, however, that this orientation towards local facts still leaves room for what I call quasi-formal models or guidelines.

Consider the problem of inaccessible target domains. Many physicists and philosophers believe that experiments on black hole analogues can confirm the existence of Hawking radiation in real black holes (Dardashti et al. 2017, 2019). The target domain is inaccessible because actual black holes, in relevant respects, are astronomically remote. Hawking radiation cannot be detected from earth. In historical sciences, such as archaeology and evolutionary biology, scientists use analogies to make inferences about the distant past. The target domains are partially inaccessible because they are historically remote. Currie (2018, 27) stresses that analogy is one of the primary resources for investigating such “distant targets.”

The key challenge for analogical reasoning about inaccessible target domains pertains to incremental confirmation. There is room for philosophical debate about whether the analogue gravity experiments should increase our degree of belief in the existence of Hawking radiation (footnote 1). Similarly, there are optimists and pessimists about analogue confirmation in archaeology (footnote 2). I suggest that Norton’s material theory of analogy cannot resolve these debates. If the target domain is inaccessible, then empirical investigation, almost by definition, cannot settle disagreements about whether an analogical argument provides incremental confirmation, plausibility, or neither. Consequently, the debate about inaccessible targets motivates us to reconsider formal approaches to analogical inference.

I shall steer a middle course between pessimism and optimism by arguing for the importance of quasi-formal models of analogical reasoning. The starting point is the thesis that good analogical arguments are closely related to background generalizations or uniformities, but this relationship is different for the three types. Strong analogical arguments are “powered” by a pre-established generalization that creates a reliable correlation between features of the source and target domains. Weak analogical arguments proceed in the opposite direction: they argue, tentatively, from observed similarities to a conjectured generalization. Finally, intermediate analogical arguments rely on partially articulated generalizations that generate correlations of intermediate strength. In all three cases, the “action” is local, but the quasi-formal models reveal argument structures with different roles for the background generalizations. These models can be helpful in assessing the prospects for analogue confirmation, even when the target domains are inaccessible or partially inaccessible.

Sections 2 and 3 provide examples accompanied by quasi-formal models for strong and weak analogical arguments. Section 4 uses additional examples to motivate the need for distinctive forms of intermediate analogical confirmation. Section 5 explores the value of Bayesian models, which illuminate the prospects for analogical confirmation but also imply some basic limitations on analogical reasoning about inaccessible targets.

2. Strong analogical confirmation

Experiments and observations on a source domain sometimes lead to highly reliable predictions about the target domain. This type of analogical reasoning, important in practical settings, draws upon empirical observation and deep theoretical understanding of both domains.

As an illustration, Sterrett (2017a, 2017b) provides a historical and philosophical examination of the method of physically similar systems. One of her examples is the work of William Froude, a 19th-century English engineer, on model ships. Model ships had long been used in the design of full-size vessels, but predictions were unreliable. Froude (1874) determined that the key concept was a dimensionless characteristic now known as the Froude number,

$$F = \frac{v}{\sqrt{lg}},$$

where v represents ship speed, l is the characteristic length of the hull, and g is the acceleration due to gravity. A model ship with the same Froude number as a full-sized vessel can be used to predict the residual resistance due to the waves and eddies that the hull creates in the water. Froude’s Law of Comparisons states that if S and T are ships with the same Froude number (F_S = F_T), then their residual resistance is in the ratio of the cubes of their lengths:

$$\frac{R_T}{R_S} = \frac{l_T^{3}}{l_S^{3}}.$$

Froude’s results allowed for “the estimation, with reasonable accuracy, of the resistance and horse-power of full-sized ships from experiments with small and inexpensive models” (Taylor 1907, 418).
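To make the scaling arithmetic concrete, here is a minimal sketch in Python (with illustrative numbers of my own choosing, not Froude’s data): the model is run at the speed that matches the full-scale Froude number, and its measured residual resistance is then scaled up by the cube of the length ratio.

```python
import math

G = 9.81  # acceleration due to gravity (m/s^2)

def froude_number(speed, length):
    """Dimensionless Froude number F = v / sqrt(l * g)."""
    return speed / math.sqrt(length * G)

def corresponding_model_speed(full_speed, full_length, model_length):
    """Model speed that matches the full-scale Froude number:
    equal Froude numbers imply v_model = v_full * sqrt(l_model / l_full)."""
    return full_speed * math.sqrt(model_length / full_length)

def predicted_full_scale_resistance(model_resistance, full_length, model_length):
    """Froude's Law of Comparisons: residual resistances scale as the cube of the lengths."""
    return model_resistance * (full_length / model_length) ** 3

# Illustrative (hypothetical) numbers: a 100 m ship at 10 m/s, tested with a 4 m model.
full_length, model_length, full_speed = 100.0, 4.0, 10.0
model_speed = corresponding_model_speed(full_speed, full_length, model_length)  # 2.0 m/s
assert abs(froude_number(full_speed, full_length)
           - froude_number(model_speed, model_length)) < 1e-12

measured_model_resistance = 12.0  # N, a made-up tank measurement at the matched speed
print(predicted_full_scale_resistance(measured_model_resistance, full_length, model_length))
# 12 * (100/4)^3 = 187,500 N of predicted residual resistance
```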

As Sterrett explains, efforts to analyze the structure of such arguments culminated in papers by Edgar Buckingham (1914a, 1914b). Buckingham begins with a characterization of physically similar systems S and S′: “If the relation in S′ is of the same form as the relation in S and is describable by the same equation, then the two systems are physically similar as regards this relation” (1914b, 353). In general, we don’t need to know the underlying physical laws. It is enough to know that certain dimensionless quantities (such as the Froude number) determine the feature of interest, and that these dimensionless quantities are identical in the two systems. Buckingham writes Π₁, …, Πₚ for the dimensionless parameters and Ψ(Π₁, …, Πₚ) = 0 for the reduced equation that indicates the dependence relation. He states: “If the values of the dimensionless parameters… are the same for S and S′, then we can determine the values of any [physical variable] Qᵢ in S′ given the others, and given values of Qᵢ in S” (1914a).
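For concreteness, here is one way (my own reconstruction using standard dimensional analysis, not an example taken from Buckingham) in which the reduced-equation pattern delivers Froude’s Law of Comparisons: assume that the dimensionless residual-resistance coefficient depends only on the Froude number; then, for geometrically similar ships in the same fluid with equal Froude numbers,

$$\frac{R}{\rho v^{2} l^{2}} = \Psi\!\left(\frac{v}{\sqrt{lg}}\right) \quad\Longrightarrow\quad \frac{R_T}{R_S} = \frac{v_T^{2}\, l_T^{2}}{v_S^{2}\, l_S^{2}} = \frac{l_T^{3}}{l_S^{3}}.$$

Here the common factors ρ and Ψ(F) cancel because the fluid and the Froude number are the same, and equal Froude numbers give v² proportional to l.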

Based both on Buckingham’s characterization and the analysis of Sterrett (2017b), the “reduced equation” pattern is exhibited in the following quasi-formal model (Figure 1):

Figure 1. Reduced equation model.

The model is quasi-formal because correct application requires expertise about “local facts,” and in particular a grasp of the range of applicability for the reduced equation. The template is nevertheless useful because it illustrates the structure and requirements for this type of strong analogue confirmation. In particular:

  • Background knowledge should include a well-confirmed generalization that lets us move reliably between features of the source and target domains;

  • The analogical inference is predictive: the inferred conclusion, Q(T), is a particular feature of the target rather than an explanatory hypothesis.

3. Weak analogical arguments (plausibility)

Analogical reasoning is commonly used to show that a conjecture is plausible, i.e., worthy of serious consideration. Bartha (2010) proposes that such arguments are successful if they establish the potential truth of a generalization that covers both source and target domains. Bartha’s articulation model is based on a two-step evaluation. The first step is to articulate the prior association, a causal or logical relationship among the properties of the source domain. The second step is to assess the potential for generalization by verifying that no crucial element of the prior association lacks an analogue in the target domain.

As an illustration, consider the acoustical analogy, employed by some 19th-century physicists seeking to explain the discrete lines in the visible spectrum of hydrogen (footnote 3). Around 1870, Stokes suggested that the lines might be explained using a model analogous to a vibrating string or tuning fork. If such a model were correct, then we could identify the frequencies with some type of oscillation. We should expect to find that the frequencies fₙ of the spectral lines are integral multiples of a fundamental frequency f₁, and therefore that the frequencies should be related by simple whole-number ratios. Although Stoney (in 1871) found that some of the frequencies could be related in this way, there were many missing spectral lines (see Maier 1981). Furthermore, the whole-number ratios that he discovered involved the numbers 20, 27, and 32—hardly simple ratios. As Bartha (2010) suggests, the acoustical analogy has an initial measure of plausibility but, on close scrutiny, fails to satisfy the criterion of potential for generalization.
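As a sanity check on the arithmetic (my own sketch, using standard wavelengths for the hydrogen lines Hα, Hβ, and Hδ, which I take to be the lines behind Stoney’s ratios), the corresponding frequencies come out close to the proportion 20 : 27 : 32 rather than as low multiples of a common fundamental:

```python
# Wavelengths in nanometres for three visible hydrogen lines (standard textbook values).
wavelengths_nm = {"H-alpha": 656.3, "H-beta": 486.1, "H-delta": 410.2}

# Frequency is proportional to 1/wavelength, so ratios of 1/lambda give frequency ratios.
inverse = {name: 1.0 / wl for name, wl in wavelengths_nm.items()}
base = inverse["H-alpha"]

# Scale so that H-alpha corresponds to 20, as in Stoney's 20 : 27 : 32.
for name, value in inverse.items():
    print(f"{name}: {20 * value / base:.2f}")
# H-alpha: 20.00, H-beta: 27.00, H-delta: 32.00 (to two decimal places)
```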

Consider the structural difference between the acoustical analogy and Froude’s use of analogical reasoning based on model ships. In the latter case, the background generalization, Froude’s Law, is well understood in advance and drives the analogical reasoning. In the acoustical analogy, the analogical inference is powered in the reverse direction: from observed features of the two domains (discrete frequencies) towards a possible generalization that is not fully articulated. This is an abductive analogical inference rather than a predictive one: its purpose is to suggest the kind of hypothesis that might explain the spectral lines of hydrogen. We can represent the inference with the following diagram:

Q represents an explanatory feature of the source whose analogue is projected to hold for the target. In this case, the positive analogy, E and E*, is the observed evidence of discrete frequencies in whole-number ratios. The dashed arrows point towards a tentative background generalization. The argument fails for lack of evidence that spectral line frequencies fₙ occur in the right ratios. The logical structure is captured in the quasi-formal model of Figure 2.

Figure 2. Acoustical analogy.

Weak analogical arguments derive their cogency from the possibility of a background generalization, in sharp contrast to strong analogical arguments. Quasi-formal models such as Figure 2 indicate the overall structure and provide guidance. For instance, we can test such arguments by looking for target analogues of certain key observable features of the source domain.

4. Incremental (intermediate) analogical confirmation

Incremental analogical confirmation is widely accepted in many contexts and controversial in others. Given appropriate protocols, a positive result for a medical treatment tested on an animal model counts as evidence (incremental confirmation) for its effectiveness in humans. Currie (2018) maintains that analogies in the historical sciences serve a similar function. He identifies a crucial role for background uniformities in such arguments (the “analogue” is the source domain): “One does not move from analogue-features to the target having features without mediation. The mediation in historical science is often via some process type that is taken to have been active in both analogue and target…” (2018, 197). The pattern clearly differs from the models of sections 2 and 3. We require prior knowledge of a background uniformity, in contrast to mere plausibility arguments. But it may be a “process type” or broad uniformity rather than a precise one, in contrast to strong analogical arguments. Particular facts about the two domains are used to refine the uniformity.

Intermediate analogue confirmation, it seems, is “powered” in both directions. This section develops this idea informally, using two examples.

4.1 Mochica pots: refining a background uniformity

Donnan (1971) uses ethnographic analogy to explain the significance of odd markings on the necks of Moche clay pots found in the Peruvian Andes. Donnan learned that contemporary Peruvian potters in the region employ similar markings, known as signáles, to indicate ownership when multiple potters fire their pots in a common kiln. Analogical reasoning suggests that the marks served the same purpose for the Mochica (100 BCE – 800 CE). The conclusion is strengthened by direct historical analogy: the present-day population is linked to Mochica ancestors.

The pattern of inference may be characterized as follows. Donnan starts with a broad background uniformity X: production processes operate in the same way in historically and culturally linked groups. There is a strong positive analogy: P(S) and P*(T) represent similar markings (signáles) in the two domains. Q(S) represents the known explanation for signáles for contemporary Peruvian potters. Q*(T) is the conjecture that Mochica potters also used signáles to indicate ownership. In order to make this inference, Donnan refines X to a more specific uniformity X′: “ceramic technologies… are maintained over long periods of time” (Donnan 1971, 466). This refined uniformity supports the analogical conclusion.

It might seem that this analogy provides incremental confirmation. Donnan makes no such claim, contenting himself with an assertion of plausibility: “The ethnographic analogy does offer a possible explanation for the marks… and provides an interesting hypothesis which could be tested when more data are made available” (1971, 466). I return to this point shortly.

4.2 Neurodegenerative disease: engineering the source domain

Researchers studying neurodegenerative diseases rely on animal models, typically mice, to understand how the diseases work in humans. The reasoning fits within a broad conception of analogical reasoning. Consider SMA, spinal muscular atrophy, a disease caused by defective motor neurons. Humans have one copy of the SMN1 (survival motor neuron 1) gene and up to four copies of SMN2, a “backup” gene that imperfectly duplicates the protein-producing function of SMN1. Mutations in SMN1 result in SMA, a disease in which motor neurons in the brain stem and spinal cord gradually die. The rate at which motor neurons die is inversely related to the amount of functional SMN2.

Mice are used to study SMA, but the genetic mechanism is different. Mice have a single SMN gene (Fisher and Bannerman 2019). If one or both alleles are normal, the mouse is viable and does not develop SMA; if both alleles are mutated, the mouse dies as an embryo. SMA never occurs in naturally born mice. For this reason, engineered mouse models are used. One SMN allele is deleted and the other is modified to resemble various mutations of human SMN2, the backup gene. The mouse is viable and develops SMA. Researchers can study the rate of neuron loss in relation to the amount of functional SMN, and they can test gene therapies. This research has had considerable success in developing treatments for humans.

Do the experiments on mice provide incremental confirmation for hypotheses about neurodegenerative diseases in humans? Arguably, yes. The reasoning begins with a broad background uniformity about genetic overlap: “99% of human genes have a mouse homolog and more than 90% of the genes that have been implicated in human disease are present in the mouse genome” (Ahmad-Annuar et al. 2003, 451). This provides a preliminary basis for using mouse models to explore genetic mechanisms and treatments for disease (footnote 4). In the case of SMA, where the causal gene is known, researchers adopt a more precise approach by modifying the gene to create the mouse model. They can then rely on a more specific uniformity: modified SMN, SMN1, and SMN2 genes determine the rate of neuron loss.

In both Mochica pots and neurodegenerative disease, analogical reasoning begins with an independently established background uniformity which is then refined using known facts about the two domains. However, there is an important difference between the two examples that mirrors the earlier distinction (sections 2 and 3) between strong and weak analogical arguments. In both model ships and neurodegenerative disease, we have a predictive analogical argument. An accepted uniformity provides the logical basis for transferring properties from source to target (footnote 5). By contrast, in acoustical analogy and Mochica pots, we have an abductive analogical argument. Observable similarities provide the logical basis for inferring the possibility of a partially articulated generalization that would explain the features of both domains. Such inferences are more tentative than predictive analogical arguments because it is difficult to exclude alternative explanatory hypotheses. This may account for Donnan’s caution in Mochica pots: the ethnographic analogy demonstrates plausibility rather than confirmation.

To summarize: the two examples suggest preliminary conclusions about incremental analogical confirmation. First: prior knowledge of a relevant background uniformity appears to be a sine qua non for this type of confirmation, and greater precision in the background uniformity increases the strength of the argument. Second: predictive analogical arguments are much better candidates for incremental confirmation than abductive analogical arguments.

5. Bayesian analysis of analogical confirmation; inaccessible domains

The provisional conclusions above might motivate us to seek formal or quasi-formal models for incremental analogical confirmation. Would such models inevitably reinforce those conclusions? Dardashti et al. (2019) (henceforth DHTW 2019) propose a Bayesian analysis of incremental “analogue confirmation” (footnote 6) and argue for its potential applicability to black holes and other inaccessible target domains.

In this section, I first review why Dardashti et al. turn to a Bayesian analysis. I then present the elements of their Bayesian model (DHTW 2019), an excellent framework for thinking about incremental analogical confirmation. In the end, I believe that there is a significant problem with its application to the black hole example. Nevertheless, Bayesian models can significantly advance the discussion of analogical reasoning about inaccessible target domains, in some cases by showing how analogies provide confirmation and in other cases by establishing limitations.

The motivation for developing a Bayesian analysis of analogue confirmation comes from an earlier paper (Dardashti et al. 2017). There, the authors adopt a version of the sine qua non assumption above. Analogue confirmation for a hypothesis about an inaccessible target domain depends upon a prior assumption of universality: variation in physical type between source and target is irrelevant to the physical properties of interest. This type of background knowledge seems feasible if the target domain is partially accessible to observation, but the authors suggest that even if the target is inaccessible, “model-external and empirically grounded arguments”, or MEEGA, might still be given for universality (2017, 73). In the black hole example, the challenge is to justify an assumption, call it X, that laboratory analogues of black holes belong to the same universality class as actual black holes. Given X, the observation of phenomena analogous to Hawking radiation in the analogue experiments should provide incremental confirmation for Hawking radiation in actual black holes.

The Bayesian analysis of Dardashti et al. (2019) appears to make two improvements on the 2017 paper. First, it provides a clear and very general framework that specifies the link between universality (X) and incremental analogical confirmation. Second, it replaces the assumption X with the seemingly weaker assumption 0 < Pr(X) < 1. We need only assume non-zero prior probability for universality, which should be acceptable to any open-minded Bayesian.

In their general analysis, Dardashti et al. begin with an analogy between models A and M of the source and target domains, respectively. They introduce four binary variables:

  • X: Universality assumptions hold. (X = 1 iff domains are in same universality class.)

  • M: The model M of the target is empirically adequate.

  • A: The model A of the source is empirically adequate.

  • E: Empirical evidence is observed for the model A.

They make four assumptions:

  (1) 1 > Pr(X) > 0: Universality has intermediate prior probability.

  (2) Pr(M/X) > Pr(M/¬X): Universality supports M.

  (3) Pr(A/X) > Pr(A/¬X): Universality supports A.

  (4) Pr(E/A) > Pr(E/¬A): E is supported by A.

From these assumptions, one can prove that the observation of evidence E in the source domain provides incremental confirmation for the adequacy of the target model:

$$\Pr(M/E) > \Pr(M).$$

This constitutes analogue confirmation. Notably, the derivation requires significant assumptions about independence. For instance, M and A are assumed to be independent conditional on the value of X.
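To see the derivation work numerically, here is a minimal sketch in Python (with toy conditional probabilities of my own choosing that satisfy assumptions (1)–(4), not values defended by DHTW) of the network X → M, X → A, A → E, with M and A independent given X; it computes Pr(M) and Pr(M/E) directly from the joint distribution.

```python
from itertools import product

# Toy conditional probabilities satisfying assumptions (1)-(4); purely illustrative.
p_x = 0.5                        # (1) 0 < Pr(X) < 1
p_m_given_x = {1: 0.8, 0: 0.3}   # (2) Pr(M/X) > Pr(M/not-X)
p_a_given_x = {1: 0.9, 0: 0.4}   # (3) Pr(A/X) > Pr(A/not-X)
p_e_given_a = {1: 0.7, 0: 0.1}   # (4) Pr(E/A) > Pr(E/not-A)

def joint(x, m, a, e):
    """Joint probability under the network X -> M, X -> A, A -> E (M, A independent given X)."""
    px = p_x if x else 1 - p_x
    pm = p_m_given_x[x] if m else 1 - p_m_given_x[x]
    pa = p_a_given_x[x] if a else 1 - p_a_given_x[x]
    pe = p_e_given_a[a] if e else 1 - p_e_given_a[a]
    return px * pm * pa * pe

def prob(condition):
    """Probability that `condition` holds, summing the joint over all assignments."""
    return sum(joint(x, m, a, e)
               for x, m, a, e in product([0, 1], repeat=4)
               if condition(x, m, a, e))

pr_m = prob(lambda x, m, a, e: m == 1)                      # Pr(M)   = 0.55
pr_m_given_e = (prob(lambda x, m, a, e: m == 1 and e == 1)
                / prob(lambda x, m, a, e: e == 1))          # Pr(M/E) ~ 0.627
print(f"Pr(M/E) = {pr_m_given_e:.3f} > Pr(M) = {pr_m:.3f}")  # incremental confirmation
```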

The analysis allows for analogue confirmation even if the target domain is inaccessible, and as already noted, it substitutes the weak assumption (1) for the much more demanding requirement of model-independent justification for the universality assumption, X. Applied to the black hole example, A stands for a model of a black hole analogue (footnote 7), M for a model of a black hole, and E for the observation in the laboratory of an analogue to Hawking radiation. After arguing that conditions (1)–(4) are satisfied, the authors conclude that the observation of phenomena analogous to Hawking radiation can incrementally confirm the reality of Hawking radiation in actual black holes.

My major objection to this analysis is that we achieve no gain in generality by replacing the assumption X with the assumption 1 > Pr(X) > 0. A statistical version of universality is still required for the derivation to work. As acknowledged by Dardashti et al. (2019), the Bayesian network assumptions and (1)–(4) imply, with probability 1, that A and M are biased in the same way: Pr(M/A) > Pr(M) and Pr(M/¬A) < Pr(M) regardless of the value of Pr(X), so long as 0 < Pr(X) < 1. The physical differences between source and target make no difference; correlation (with probability 1) is guaranteed by the introduction of the binary variable X and the other assumptions. It seems that the requirement for a universality assumption is no weaker than in the 2017 paper, and we face the same basic difficulty: how to provide independent justification when the target domain is entirely inaccessible. My concern is similar to one expressed by Crowther et al. (2021), who insist that analogue confirmation depends on prior confirmation that source and target belong to a common universality class.
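The same toy network makes the built-in correlation vivid: sweeping the prior Pr(X) over strictly intermediate values (again a sketch with numbers of my own choosing, reusing the conditional probabilities from the previous snippet), A and M come out correlated in every case.

```python
from itertools import product

p_m_given_x = {1: 0.8, 0: 0.3}   # same illustrative conditionals as in the previous sketch
p_a_given_x = {1: 0.9, 0: 0.4}

def prob(p_x, condition):
    """Probability of `condition` over X, M, A (E plays no role in this check)."""
    total = 0.0
    for x, m, a in product([0, 1], repeat=3):
        px = p_x if x else 1 - p_x
        pm = p_m_given_x[x] if m else 1 - p_m_given_x[x]
        pa = p_a_given_x[x] if a else 1 - p_a_given_x[x]
        if condition(x, m, a):
            total += px * pm * pa
    return total

for prior in [0.01, 0.1, 0.5, 0.9, 0.99]:
    pr_m = prob(prior, lambda x, m, a: m == 1)
    pr_m_a = prob(prior, lambda x, m, a: m == 1 and a == 1) / prob(prior, lambda x, m, a: a == 1)
    pr_m_not_a = prob(prior, lambda x, m, a: m == 1 and a == 0) / prob(prior, lambda x, m, a: a == 0)
    assert pr_m_a > pr_m > pr_m_not_a   # A and M are correlated for every intermediate Pr(X)
    print(f"Pr(X)={prior:.2f}: Pr(M/A)={pr_m_a:.3f} > Pr(M)={pr_m:.3f} > Pr(M/not-A)={pr_m_not_a:.3f}")
```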

In short, we do not yet have a convincing Bayesian analysis demonstrating the possibility of incremental analogical confirmation when the target domain is inaccessible in all relevant respects (footnote 8). Instead, the Bayesian analysis appears to point to the conclusion suggested by the examples of section 4: there can be no incremental confirmation without a relevant background generalization that generates a correlation between the relevant variables in the source and target domains. Still, as noted earlier, even this limitative result demonstrates the value of the Bayesian approach in understanding the structure of analogical confirmation.

6. Conclusion

I close with two optimistic comments about analogies and inductive inference. First: although I remain skeptical about incremental analogical confirmation for totally inaccessible targets, there are ways to justify the relevant background assumptions for partially inaccessible targets. We saw this for neurodegenerative diseases. There may be an indirect route to incremental analogical confirmation for Hawking radiation, based either on general theoretical considerations or on accessible knowledge about black holes.

Second: the distinction between confirmation and plausibility arguments is not always critical. In the Bayesian framework, plausibility arguments count towards overall probability and are part of the logic of confirmation. As Reiss (2019) suggests, there is much to be said for broadening our understanding of model-based reasoning (and by extension, analogical reasoning) beyond a narrow focus on confirmation.

Acknowledgments

I would like to acknowledge extremely valuable questions, comments and suggestions from Karim Thébault, Peter Evans, Patricia Palacios, Jennifer Jhun, Doreen Fraser, and other participants in the PSA session and an earlier online workshop on analogies. Doreen Fraser merits special thanks for organizing the workshop and PSA session. I also acknowledge helpful comments from two anonymous reviewers.

Footnotes

1 Crowther et al. (2021), in particular, reject claims of confirmation. They readily accept, however, a role for the analogue experiments in plausibility arguments.

2 Chapman and Wylie (2016) review decades-long debates.

3 This example expands on the discussion in Bartha (2010). For a historical account, see Maier (1981).

4 Ahmad-Annuar et al. (2003) distinguish between the “genotype driven” approach, in which the causal gene for the disease is known, and the “phenotype driven” approach, in which the causal gene is unknown and mouse models are used to explore potential genetic pathways. The appeal to the broad background uniformity provides a general justification for pursuing both approaches.

5 The uniformity in neurodegenerative disease lacks the precision of model ships. The genetic pathway in mice is not an exact replica of what it is in humans, and there are individual variations in both species. The uniformity might be analyzed in statistical terms, which yields incremental rather than strong confirmation (see note 8).

6 Their terminology.

7 An important part of the arguments in both Dardashti et al. (2017) and DHTW (2019) is the existence of multiple black hole analogues, but I am unable to address this point here.

8 It is not difficult to construct alternative Bayesian models for incremental analogical confirmation. One simple idea (which I am unable to discuss here) is to generalize the analysis of strong confirmation from section 2, using statistical relations in place of Buckingham’s deterministic equations. In all such models, however, there appears to be unavoidable reliance upon an independently established background uniformity.

References

Ahmad-Annuar, Azlina, Tabrizi, Sarah J., and Fisher, Elizabeth M. C. 2003. “Mouse Models as a Tool for Understanding Neurodegenerative Diseases.” Current Opinion in Neurology 16:451–58. https://doi.org/10.1097/01.wco.0000084221.82329.29.
Bartha, Paul F. A. 2010. By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments. New York: Oxford University Press.
Buckingham, Edgar. 1914a. “Physically Similar Systems.” Journal of the Washington Academy of Sciences 93:347–53.
Buckingham, Edgar. 1914b. “On Physically Similar Systems: Illustrations of the Use of Dimensional Equations.” Physical Review 4:345–76. https://doi.org/10.1103/PhysRev.4.345.
Chapman, Robert, and Wylie, Alison. 2016. Evidential Reasoning in Archaeology. London: Bloomsbury Academic.
Crowther, Karen, Linnemann, Niels S., and Wüthrich, Christian. 2021. “What We Cannot Learn from Analogue Experiments.” Synthese 198 (Suppl 16):3701–26. https://doi.org/10.1007/s11229-019-02190-0.
Currie, Adrian. 2018. Rock, Bone, and Ruin: An Optimist’s Guide to the Historical Sciences. Cambridge: The MIT Press. https://doi.org/10.7551/mitpress/11421.001.0001.
Dardashti, Radin, Thébault, Karim P. Y., and Winsberg, Eric. 2017. “Confirmation via Analogue Simulation: What Dumb Holes Could Tell Us About Gravity.” British Journal for the Philosophy of Science 68:55–89.
Dardashti, Radin, Hartmann, Stephan, Thébault, Karim, and Winsberg, Eric. 2019. “Hawking Radiation and Analogue Experiments: A Bayesian Analysis.” Studies in History and Philosophy of Modern Physics 67:1–11.
Donnan, Christopher B. 1971. “Ancient Peruvian Potters’ Marks and Their Interpretation Through Ethnographic Analogy.” American Antiquity 36 (4):460–66.
Fisher, Elizabeth M. C., and Bannerman, David M. 2019. “Mouse Models of Neurodegeneration: Know Your Question, Know Your Mouse.” Science Translational Medicine 11 (493):eaaq1818. https://doi.org/10.1126/scitranslmed.aaq1818.
Froude, William. 1874. “On Experiments with HMS Greyhound.” Transactions of the Royal Institution of Naval Architects 15:36–73.
Maier, Clifford L. 1981. The Role of Spectroscopy in the Acceptance of the Internally Structured Atom, 1860–1920. New York: Arno Press.
Norton, John. 2021. The Material Theory of Induction, posted version of March 14, 2021. https://sites.pitt.edu/~jdnorton/papers/material_theory/Material_Induction_March_14_2021.pdf.
Reiss, Julian. 2019. “Against External Validity.” Synthese 196:3103–21. https://doi.org/10.1007/s11229-018-1796-6.
Sterrett, Susan. 2017a. “Physically Similar Systems – A History of the Concept.” In Springer Handbook of Model-Based Science, edited by Magnani, Lorenzo and Bertolotti, Tommaso, 377–411. Dordrecht: Springer. https://doi.org/10.1007/978-3-319-30526-4_18.
Sterrett, Susan. 2017b. “Experimentation on Analogue Models.” In Springer Handbook of Model-Based Science, edited by Magnani, Lorenzo and Bertolotti, Tommaso, 857–78. Dordrecht: Springer.
Taylor, David W. 1907. “Simple Explanation of Model Basin Methods.” Scientific American 97 (23):418–35.