
Biological Information as Choice and Construction

Published online by Cambridge University Press:  01 January 2022


Abstract

A causal approach to biological information is outlined. There are two aspects to this approach: information as determining a choice between alternative objects and information as determining the construction of a single object. The first aspect has been developed in earlier work to yield a quantitative measure of biological information that can be used to analyze biological networks. This article explores the prospects for a measure based on the second aspect and suggests some applications for such a measure. These two aspects are not suggested to exhaust all the facets of biological information.

Research Article

Copyright © The Philosophy of Science Association

1. Introduction

Biological development is classically assumed to reflect the expression of information accumulated in the genome during evolution (Mayr 1961; Jacob 1970). Major textbooks and popular science presentations of biology rely on this picture (e.g., Alberts et al. 2013). Leading biologists are also attracted to this view (Williams 1992, 10; Maynard Smith and Szathmary 1995, 2000; Jablonka 2002). On closer scrutiny, however, the role of information in biology seems purely instrumental: it serves either as a metaphor or as a tool for big data analyses; biology does not yet have a theory of life as an information-processing phenomenon (Sarkar 1996; Godfrey-Smith 2000). The aim of this article is to offer some scientific substance to such a theory.

Several theoretical and philosophical approaches have interpreted living systems as information-processing systems. One tradition identifies information with meaning, interpretation, and intentionality (Barbieri 2007; Shea 2007). A second tradition, which I espouse here, identifies information with patterns of association between objects (Dretske 1981).

I start from the sense of information introduced by Crick in his sequence hypothesis and central dogma of molecular biology, which was to become massively influential in biology: “Information … means the precise determination of sequence” (Crick 1958, 153; see Kay 2000). Information here is causal (Šustar 2007). Crick introduced this conception in an attempt to understand how DNA and RNA carry biological specificity for the synthesis of proteins, an idea that parallels the modern contrast philosophers draw between specific causes and the other necessary, background factors required to obtain an effect (Woodward 2010). Griffiths and Stotz (2013, chap. 4) have argued that Crick’s sense of information vindicates the idea that factors other than DNA are also sources of information for biomolecules, a phenomenon they called ‘distributed specificity’. This idea needs substantiation.

I explore this idea here and develop an approach to biological information as a measurable and distinctive aspect of biological systems. This approach has two facets, inspired, respectively, by Shannon’s and Kolmogorov’s approaches in information theory. On the one hand, we have a measure of the relative, complementary influence of several causes of the same event (sec. 2). This concerns the choice between alternative objects and is blind to the information content of each object. This approach has been extensively discussed and applied elsewhere. On the other hand, we have measures of the complexity of a single object, independently of any particular set of alternatives (sec. 3). These can measure the information inherent in a biomolecule and the quantity of information in a molecule that can be attributed to a particular source. The computability of these latter measures, however, is problematic; in practice, tentative measures ought to be used. The role of randomness in creating information is outlined (sec. 4). I sketch potential developments for a Kolmogorov-inspired approach (sec. 5) and argue that it is a potentially fruitful yet challenging biological research program (sec. 6). The two approaches are not straightforwardly reducible to one another and are not suggested to exhaust all aspects of biological information.

2. Causal Specificity: Information as Choice

Recent work has defined an information-theoretic measure of the ‘specificity’ of a cause for an effect, the extent to which a cause precisely determines an effect, and applied this measure to biological problems (Griffiths et al. 2015; Pocheville, Griffiths, and Stotz 2017; Weber 2017; Calcott, Pocheville, and Griffiths 2018). This work develops earlier, qualitative discussions of ‘causal specificity’ in philosophy (Woodward 2010) and converges with formal work on causation in complex systems theory (see n. 1).

Causal specificity is measured using Shannon information theory, which conceives information as a reduction in uncertainty (Shannon 1948; Cover and Thomas 2006). Uncertainty, measured in bits, can be understood as the average number of binary (yes/no) questions required to determine the value of an unknown variable. A variable is said to share mutual information with another variable when it reduces our uncertainty about that variable. Mutual information measures the association between two variables: the more two variables are associated, the more each of them answers questions about the value of the other. Causal specificity can be measured by the mutual information between values of a cause variable set by an intervention and the value of a putative effect variable (Griffiths et al. 2015, 538). Formally, the causal specificity of C for E when controlling for a putative background B is given by the following formula (Pocheville et al. 2017; see n. 1):

$$I(\hat{C}; E \mid \hat{B}) = \sum_{b} p(\hat{b}) \sum_{c} p(\hat{c} \mid \hat{b}) \sum_{e} p(e \mid \hat{c}, \hat{b}) \log_2 \frac{p(e \mid \hat{c}, \hat{b})}{p(e \mid \hat{b})}.$$

The ^ (hat) on a variable is an operator indicating that its value is set by an intervention rather than observed (Pearl 2009; see n. 2). This operator transforms the symmetrical mutual information, representing observed association, into an asymmetric measure of causal influence, representing how much experimentally intervening on C while controlling for B affects E. If C is not a cause of E, then $I(\hat{C}; E \mid \hat{B}) = 0$. Reciprocally, if C is a cause of E, then there exists at least one set of background variables B (which can be empty) such that $I(\hat{C}; E \mid \hat{B}) > 0$ (Pocheville 2018a).
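To make the formula concrete, here is a minimal computational sketch (in Python; the helper function and the toy distributions are illustrative assumptions, not part of the original analysis). It takes an interventional distribution given as lookup tables and returns the causal specificity in bits; note that when the cause variable can take only one value under intervention, the measure is 0 bits, however complex the object that value stands for.

```python
from math import log2

def causal_specificity(p_b, p_c_given_b, p_e_given_cb):
    """I(C-hat; E | B-hat) in bits, computed from an interventional distribution.

    p_b[b]                  = p(b-hat)
    p_c_given_b[(c, b)]     = p(c-hat | b-hat)
    p_e_given_cb[(e, c, b)] = p(e | c-hat, b-hat)
    """
    total = 0.0
    for b, pb in p_b.items():
        # p(e | b-hat): marginalize the effect over the interventions on C.
        p_e_given_b = {}
        for (c, bb), pc in p_c_given_b.items():
            if bb != b:
                continue
            for (e, cc, bbb), pe in p_e_given_cb.items():
                if cc == c and bbb == b:
                    p_e_given_b[e] = p_e_given_b.get(e, 0.0) + pc * pe
        # Accumulate the double sum over c and e.
        for (c, bb), pc in p_c_given_b.items():
            if bb != b:
                continue
            for (e, cc, bbb), pe in p_e_given_cb.items():
                if cc == c and bbb == b and pe > 0:
                    total += pb * pc * pe * log2(pe / p_e_given_b[e])
    return total

# Toy case: empty background, two equiprobable interventions on C, and a
# deterministic mapping from C to E. The specificity is 1 bit.
p_b = {"b0": 1.0}
p_c = {("c0", "b0"): 0.5, ("c1", "b0"): 0.5}
p_e = {("e0", "c0", "b0"): 1.0, ("e1", "c1", "b0"): 1.0}
print(causal_specificity(p_b, p_c, p_e))  # 1.0

# Degenerate case: a single possible value of C (no alternatives) yields 0 bits,
# whatever the complexity of the object that value stands for.
print(causal_specificity(p_b, {("c0", "b0"): 1.0}, {("e0", "c0", "b0"): 1.0}))  # 0.0
```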

This measure of causal specificity seems to capture one aspect of Crick’s, and the above-cited biologists’, conception of information as ‘precise determination’. It can be used to compare the causal contributions of genetic and epigenetic causes to the production of biomolecules (Griffiths et al. 2015). It can be applied to objects other than biomolecules and is a practical tool for the analysis of biological networks (Tononi et al. 1999; Calcott et al. 2018; Pocheville 2018b).

There is, however, a blind spot in Shannon information theory: it is silent about the information content of the objects themselves. For example, it makes no difference to the amount of information that DNA carries about RNA whether the DNA strands are three or 1,416 nucleotides long. What matters is only the number of values that the variable ‘DNA’ can take and the probability distribution over those values. Arguably, the longer the sequences, the greater the number of possible alternatives, and thus the greater the potential causal specificity of these alternatives (see n. 3). Still, in an actual case, the number of alternatives can be zero, and a very long DNA sequence can therefore have null causal specificity for its own transcript. Causal specificity represents a sense of information that enables us (or causes the system) to choose between a set of well-defined alternatives with a well-defined probability distribution. This is ‘information as choice’. If what we are interested in is the information content of a single object, another branch of information theory, Kolmogorov complexity, is more appropriate. It is to this second aspect of information that I now turn.

3. Kolmogorov Meets Crick: Information as Construction

Kolmogorov complexity can measure the complexity of a single object (Grünwald and Vitányi 2003; Li and Vitányi 2008). The intuitive idea is that the more complex the object, the longer its description needs to be. The Kolmogorov complexity is the length of the shortest description enabling one to reconstruct the object using a computer (or, more precisely, a universal Turing machine; see n. 4). Kolmogorov complexity also provides a measure of the amount of information in an object about another object. This is measured by the algorithmic mutual information: the amount of program length that one saves when describing one object given a description of the other object for free. Algorithmic mutual information is symmetrical.

Obviously, the length of the shortest description will depend not only on the object at stake but also on the language (the description method) used to write the program generating the object. However, the lengths of the shortest descriptions in two different languages will be the same up to a translation constant that is independent of the object itself. The reason is that the translation from one language to another can itself be described by a program of fixed length (this fixed length gives the translation constant). In this sense, the Kolmogorov complexity is an objective property of the object.

A drawback of Kolmogorov complexity is that it is provably uncomputable: there is no computer program that, given any string as an input, returns its Kolmogorov complexity as an output. In itself, this is an interesting negative result: if what we are interested in is complexity in this sense, then what we want to know is simply uncomputable. In practice, one can bound the complexity of binary objects using diverse lossless compression methods (e.g., those used in the zip file format). Indeed, the compressed object is a (hopefully shorter) description enabling one, together with a decompression program, to reconstruct the initial object. The length of the description is then the length of the compressed file plus a constant, the length of the decompression program. This measurement is tentative, not definitive, as other, potentially unknown compression methods might compress the object more. For the sake of the argument, we assume for the moment that we are given a reasonable compression method.
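As an illustration of this practical strategy, the following sketch (in Python; the choice of zlib, the sequences, and the lengths are conveniences of presentation, not anything prescribed here) uses a general-purpose lossless compressor to bound the complexity of a string from above.

```python
import random
import zlib

def complexity_upper_bound(sequence: str) -> int:
    """Tentative upper bound on Kolmogorov complexity, in bits: the length of the
    zlib-compressed string. The (fixed) length of the decompression program is
    left out, and a better compressor might still compress further, so this only
    bounds the uncomputable 'true' value from above."""
    return 8 * len(zlib.compress(sequence.encode(), level=9))

structured = "GATTACA" * 400                                        # highly regular
random_like = "".join(random.choice("ACGT") for _ in range(2800))   # same length
print(complexity_upper_bound(structured))    # small: the regularity is exploited
print(complexity_upper_bound(random_like))   # much larger: little regularity to exploit
```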

Kolmogorov complexity can be used to explore what Crick meant when he described the determination of proteins by nucleic acids as the “detailed residue-by-residue transfer of sequential information” (Crick 1970, 561), where nucleotides would form a quaternary alphabet and amino acids a vigesimal one. Two kinds of questions can be addressed: how much information there is in a given biological object and, closer to Crick’s thinking, how much information in an object comes from another (see n. 5).

The complexity of a strand of DNA, for instance, can be approached by measuring the length of the compressed sequence. Telomeres provide an interesting limit case. They are nucleotide sequences at the ends of chromosomes, consisting of a repetitive pattern (e.g., TTAGGG in humans and many other species). Telomeres are elongated by an enzyme, called telomerase, which embeds an RNA sequence as a template (Hiyama, Hiyama, and Shay 2009). It is not difficult to come up with a program describing a given telomeric sequence in a compact way. Whatever the length of a telomere, it can be described by a template for the repeated pattern and the number of repeats (fig. 1, algorithm 1). A naive observer would surely think that telomeres do not contain much information and, in particular, not much sequential information. This intuition coincides with the low Kolmogorov complexity of these sequences.
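In the spirit of algorithm 1, and with hypothetical numbers, the following sketch contrasts the spelled-out telomeric sequence with a compact description whose length barely depends on the number of repeats.

```python
def telomere(template: str = "TTAGGG", repeats: int = 2000) -> str:
    """Compact description of a telomeric stretch: a template plus a repeat count."""
    return template * repeats

spelled_out = telomere()                # the explicit sequence: 12,000 characters
compact = 'telomere("TTAGGG", 2000)'    # a description a few dozen characters long
print(len(spelled_out), len(compact))
# Doubling the number of repeats barely changes the compact description (one
# extra digit), matching the intuition that telomeres carry little sequential
# information whatever their length.
```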

Figure 1. Four algorithms illustrating an algorithmic approach to biological functioning.

The situation looks quite different for coding sequences. There does not seem to be, at first sight, as easy a way to compress these sequences as we did with telomeres, and their Kolmogorov complexity is probably substantially higher: a program to reconstruct a coding sequence may have to spell it out explicitly—or at least to spell out significant aspects of the sequence (fig. 1, algorithm 2). This lower compressibility coincides with the intuition that coding sequences carry sequential information—and even that it is their function to carry sequential information. However, coding sequences do not carry a maximal amount of sequential information: as an anonymous reviewer noticed, coding sequences are structured and are thus expected to be compressible to some extent—as are noncoding, so-called ‘junk’ DNA sequences containing a significant number of repetitive elements and duplications (see n. 6). Note that the intrinsic amount of information in a sequence is independent of whether the sequence is inserted in a region that will actually undergo transcription. Arguably, even a coding sequence carries no information about any transcript if it is not transcribed, but it nevertheless carries sequential information tout court.

I now turn to the second question, asking how much information there is in an object about another object. For the sake of the argument, suppose that the world is as Crick supposed in 1958: the accuracy of information transfers is high, which we idealize by assuming that transcription (of DNA into RNA) and translation (of RNA into polypeptides) are error-free, deterministic processes. I ignore splicing and other posttranscriptional processes, which will be treated elsewhere. As described above, one can estimate the amount of information in DNA about RNA by their algorithmic mutual information, that is, $I(\text{DNA}:\text{RNA}) = K(\text{RNA}) - K(\text{RNA} \mid \text{DNA}^*)$ (see n. 7). The shared information between DNA and RNA is substantial: transcription amounts to replacing each nucleotide of the template DNA strand by its complementary one, with the proviso that A’s in the DNA sequence pair with U’s (not T’s) in the RNA sequence. To see this sharing of information, compare the lengths of an algorithm spelling out the RNA explicitly (similar to algorithm 2) and one treating transcription generically (fig. 1, algorithm 3). The difference in length would increase with sequence length. This corresponds to the fact that sequential information is transferred from DNA to RNA through transcription. If transcription is errorless, the sequential information in RNA that does not come from DNA, measured by the remainder complexity $K(\text{RNA} \mid \text{DNA}^*)$, is a constant, independent of the sequence. Algorithmic mutual information between biological sequences has been used in the past decade with various aims, such as building phylogenetic trees according to the amount of information needed to transform one DNA sequence into another (see, e.g., Chen, Kwong, and Li 2000; Li et al. 2001; Chen et al. 2002; and Vinga 2014 for a review).
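To make this comparison concrete, here is a minimal sketch of the two kinds of description (in Python; the sequence is a made-up stand-in, string length serves as a crude proxy for program length, and transcription is idealized as errorless and deterministic, as above).

```python
# A made-up stand-in for a coding sequence (not a real gene). Here 'dna' stands
# for the coding (sense) strand, so transcription reduces to substituting U for T.
dna = "ATGGCGTACGATCGATTACAGGCATCGATCGGATCTAAGCTT" * 30

# In the spirit of algorithm 2: spell the RNA out explicitly.
# The length of this description grows with the length of the sequence.
explicit_description = 'rna = "' + dna.replace("T", "U") + '"'

# In the spirit of algorithm 3: given the DNA for free, a generic, constant-length
# program suffices to reconstruct the RNA, whatever the length of the sequence.
generic_description = 'rna = dna.replace("T", "U")'

print(len(explicit_description), len(generic_description))
# The gap between the two lengths grows with the sequence and estimates the
# information about RNA that comes from DNA: their algorithmic mutual information.
```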

Since the ‘true’ Kolmogorov complexity is uncomputable, an algorithmic approach relies on a bet: that the language of description and the compression methods capture interesting and relevant aspects of the object at stake. This is not to say that the approach is necessarily entirely arbitrary: once these methods are agreed on, researchers can agree on the measures obtained for finite sequences. If a particular language gives particularly interesting results (e.g., saving biological appearances, leading to new questions, predictions, and generalizations), then this language becomes a theoretical entity worth discussing in its own right. In the remainder of the article, I outline the features I deem desirable for such a language and substantially develop the algorithmic approach to take into account the fact that biological systems are not, strictly speaking, deterministic, universal Turing machines. What I aim at is not an application of conventional algorithmic information theory to biology, but a specifically biological approach to information inspired by the Kolmogorov branch of information theory.

4. Randomness as the Source of Information

I made several idealizing assumptions in the previous sections. Let us now relax the assumption that cellular processes are deterministic. The argument here will remain theoretical: there is no room to take sides on whether, and how, randomness is actually realized in biology (see n. 8).

Random events, by definition, cannot be determined in advance by an algorithm. This means that randomness in the generation of a sequence creates information de novo. In biological terms, this means that any random point mutation, any error of transcription, and so forth, if they are genuinely random, can create information in the Kolmogorov sense. As seen in the previous section, this also means that randomly generated sequences contain more information than highly structured sequences. From an algorithmic point of view, randomness is, ultimately, the only way to create information.
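A small sketch of this point (in Python, reusing the compression bound of section 3; the sequence and the number of mutations are illustrative): scattering genuinely random substitutions over a highly structured sequence makes it less compressible, that is, it creates information in the (approximated) Kolmogorov sense.

```python
import random
import zlib

def compressed_bits(s: str) -> int:
    """Tentative upper bound on Kolmogorov complexity (cf. sec. 3)."""
    return 8 * len(zlib.compress(s.encode(), level=9))

random.seed(0)
structured = "TTAGGG" * 500                     # highly compressible
print(compressed_bits(structured))

# Random point 'mutations': unpredictable substitutions at random positions.
mutated = list(structured)
for position in random.sample(range(len(mutated)), k=300):
    mutated[position] = random.choice("ACGT")
print(compressed_bits("".join(mutated)))        # noticeably larger than before
```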

This information need not always be functional, that is, of any use to the cell. That it may sometimes be so, however, is a reasonable assumption. Several biological examples suggest that randomness plays a key role in biological functioning (Kupiec 1983; Heams 2014). Gene shuffling in the immune system of jawed vertebrates provides one such example regarding biological sequences. It enables a great variety of antibodies to be produced, orders of magnitude more numerous than the genes producing them, increasing the chance of matching potentially threatening antigens (Cooper and Alder 2006).

This tension between information and function is why it is crucial to distinguish them. One might be interested in how information flows in biological systems without committing oneself to a particular account of biological function. More importantly, if one is interested in whether and how information leads to function, a concept of biological information as necessarily biologically functional will beg the question.

5. A Language for the Cell

Kolmogorov complexity allowed us to flesh out the idea of information as construction. Now we need to kick that ladder away and ask what information as construction actually looks like in living systems. I suggest that it ought to be measured using a particular programming language: the language of the cell itself, in which available programming functions mimic actual operations by which molecules are produced. It goes without saying that what I evoke here is not the ‘true’ language, but a model of a language of the cell.

The idea of a language of the cell takes us away from treating cells as universal Turing machines and from the genuine Kolmogorov complexity K, to consider a more biological algorithmic complexity: the Kolmogorov complexity in the chosen biological language (hereafter denoted $K_B$). For instance, algorithmic mutual information is symmetric: there is as much information in DNA about RNA as there is in RNA about DNA, that is, $I(\text{RNA}:\text{DNA}) = I(\text{DNA}:\text{RNA})$. But not all operations are possible in a cell. A central feature of molecular biology is that flows of information are asymmetrical. Crick’s ‘central dogma’ (still widely held today) states which flows of information between biomolecules are possible and which are not. If no reverse transcriptase is present, for instance, no information can flow from RNA to DNA. In ‘biologically’ algorithmic terms, this means that a biological program aiming to reconstruct a DNA sequence given an RNA sequence as input would fare no better than a program given no input, and we would obtain $K_B(\text{DNA} \mid \text{RNA}^*) = K_B(\text{DNA})$. This means that we would get, for a biological analogue of algorithmic mutual information,

$$I_B(\text{RNA} \to \text{DNA}) = K_B(\text{DNA}) - K_B(\text{DNA} \mid \text{RNA}^*) = 0.$$

(The subscript B again denotes that the measure is defined using the chosen biological language, and the arrow now reflects that it can be asymmetric; see n. 9.) The reciprocal, as we have seen above, is very different: when DNA is transcribed into RNA, $K_B(\text{RNA} \mid \text{DNA}^*) = C$, where C is a constant not depending on the sequences. Assuming, for the sake of presentation, that $K_B(\text{RNA}) = K_B(\text{DNA})$, we would obtain

$$I_B(\text{DNA} \to \text{RNA}) = K_B(\text{RNA}) - K_B(\text{RNA} \mid \text{DNA}^*) = K_B(\text{DNA}) - C.$$

Thus, contrary to its genuine counterpart, biological algorithmic mutual information would not be expected to be always symmetrical, reflecting the directionality of possible information flows.
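The following toy sketch (in Python; the ‘language’, its single operation, and the crude length measure are illustrative assumptions, not the author’s formalism) shows how such an asymmetry arises once description lengths are computed only over the operations a cell actually has at its disposal.

```python
from typing import Optional

# Toy 'biological language': the only sequence-producing operation this cell has.
def transcribe(dna: str) -> str:
    return dna.replace("T", "U")    # 'dna' stands for the coding strand, idealized

def kb(target: str, given: Optional[str] = None) -> int:
    """Crude description length (in characters) of the shortest 'biological
    program' available to produce the target. Illustration only: a real measure
    would be defined over a full programming language."""
    if given is not None and transcribe(given) == target:
        return len("transcribe(given)")     # constant, whatever the sequence length
    return len('"' + target + '"')          # otherwise, spell the sequence out

dna = "ATGGCATTACGT" * 50
rna = transcribe(dna)

# I_B(DNA -> RNA) = K_B(RNA) - K_B(RNA | DNA): large, and grows with length.
print(kb(rna) - kb(rna, given=dna))
# I_B(RNA -> DNA) = K_B(DNA) - K_B(DNA | RNA): zero, since no available
# operation goes from RNA back to DNA (no reverse transcriptase).
print(kb(dna) - kb(dna, given=rna))
```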

In the same vein, not all sequences can be produced by a given cell. In algorithmic information theory, a universal Turing machine can emulate any other Turing machine, which means that there is no sequence that a particular machine can produce that a universal machine cannot produce. By contrast, if the cell lacks a programming function, for instance, if it lacks a nucleic acid template or if some nucleotides do not belong to its alphabet, then some sequences may be impossible to produce. In this case, the amount of information needed to produce such a sequence is ill-defined, or indefinite. Even on an evolutionary time scale, the amount of information needed to acquire the programming function (if it is acquired) and produce the previously impossible sequence could be orders of magnitude greater than the length of the sequence.

Granted that some operations are impossible, how are we to describe the set of primitive programming functions that, by contrast, are possible?

As we have seen above, the complexity of an object depends on the language used to describe it. An example will flesh out this idea. Assume, say, that ‘Transcribe’ is given for free by the language and that the description of the function is short: say, just a few letters. Contrast this with a DNA sequence of several kilobases. This DNA sequence appears much more informational than the function ‘Transcribe’ (see n. 10). Now, imagine that ‘Transcribe’ is not given for free by the language, but that one has to write a program for this function, using other, more primitive, available functions. I exemplify such a program in algorithm 4 (fig. 1); it could be made much longer by describing explicitly the dynamics of chemical bonds in a binary manner (assuming for the sake of the argument that this would be feasible). Conversely, the description of a long DNA sequence can be very short. For instance, nominal genes are usually described not by their full sequence, but by a nickname like ‘p53’. This nickname is enough, on most occasions, for biologists to communicate about the processes at stake. A language can lack the function ‘Transcribe’ but have a built-in function ‘P53’ dedicated to returning the full sequence of the gene. In such a language, descriptions of transcription would be complex (informational) and those of DNA simple. Thus, one needs to be cautious about the language of description before assigning any particular object a privileged informational role, much in the same way that one needs to be cautious about specifying the probability distributions when using Shannon information theory.

I propose that the primitive functions should be those that enable us to understand the processes of interest. Assume, for instance, that our interest lies in understanding the flows of sequential information between biological polymers. Then assuming that ‘Transcribe’ and ‘Translate’ are given as primitive functions is fine: if they are errorless, they are not difference makers with regard to the final sequences of the products (an assumption I made in algorithm 3). Generally speaking, it makes sense to treat as primitives those operations that are not difference makers with regard to the outputs of interest, and as inputs those very difference makers: genericity goes with functions, specificity with inputs. Incidentally, it is good algorithmic practice to write functions for generic operations and to pass them specific variables as inputs. This is not unlike causal specificity: once the generic functional relationships in the causal model are set, information flows from the difference makers.

6. Payoff of the Approach

The algorithmic approach sketched above may promote research on biological systems, although not without significant challenges.

Even the best current, specifically designed compression algorithm may overestimate biological complexity, because the algorithm may not have compressed the object sufficiently. On the positive side, compression algorithms may actually tend to parallel the biological processes that have produced the sequences at stake. For instance, if DNA translocation is frequent, then an algorithm that pays due attention to translocation should be more likely to compress a DNA sequence. Conversely, considering that most strings are random in the algorithmic sense, it is highly unlikely that a series of refined algorithms will converge, if they converge at all, toward something other than the processes involved in producing the sequences. It is highly unlikely that the cell will, by chance, produce a string that is compressible by means other than some of its own means of production (or the corresponding models of these means). In other words, improving these algorithms may yield a better grasp of functions that are in fact available in the language of the cell.

Just as an algorithm may overestimate complexity, however, it can also underestimate biological complexity. Because cells are not universal Turing machines, a biological sequence may be more complex than its algorithmic counterpart. For instance, a cell may need a complex process to resist random perturbations when duplicating a sequence, whereas a universal Turing machine, being deterministic, would not. Similarly, a short sequence may require complex machinery or a complex evolutionary history to produce it. Just as biological complexity can fall short of algorithmic complexity (when a cell generates randomness), it can also exceed it.

7. Conclusion

This article aimed to give substance to the idea of biological information—an idea that has grounded significant aspects of informal biological thought for the past 50 years. Crick’s seminal use, in molecular biology, of the term ‘information’, meaning the precise determination of sequence, is grounded in causation, not meaning or representation. I inflected this idea in two ways, corresponding to two aspects of information theory: the precise determination of a single output from a set of alternatives (‘information as choice’) and the precise determination of the sequence of a single output (‘information as construction’). These two aspects can be traced back to Crick, whose idea of information as construction—to rephrase it in our terms—was an attempt to explain information as choice, in the sense of biological specificity (Crick 1958, 1970). This suggests that Griffiths and Stotz’s (2013) idea of distributed specificity is theoretically richer than initially envisioned.

Information as choice is captured by causal specificity, proposed elsewhere to be measured by the Shannon mutual information between values of a cause set by an intervention and observations of the effect. This measure can be applied to causal graphs, such as those representing gene regulatory or animal signaling networks, and has numerous potential applications in biology.

Information as construction is captured by the Kolmogorov complexity of a sequence and the algorithmic mutual information between two sequences. These measures capture the intuition that there is something in common between a program generating a sequence and the biological processes of transcription and translation. I insisted, however, that there is more to biology than discrete, deterministic computing: randomness plays a central role in biological functioning. A similar point could be made regarding the nondiscrete nature of biological phenomena. From the point of view of Kolmogorov complexity, randomness creates information. Such information is not necessarily functional, and distinguishing between information and function is a necessary step toward better understanding how information can lead to function.

I proposed that biological algorithmic complexity ought to be measured using a biologically relevant programming language—the language in which the cell performs its own operations. In such a language, some operations, such as reverse translation, will be impossible. This means that the biological complexity of a sequence can far exceed its own length, making it very different from nonbiological algorithmic complexity. In planned future work, I will take up the challenge of fleshing out the ‘language of the cell’ and articulating the choice and construction aspects of biological information.

Footnotes

Maël Montévil is warmly thanked for insightful discussions and for coining the expression ‘the language of the cell’. A manuscript in preparation (Pocheville and Montévil 2018) develops the present paper. Two anonymous reviewers and the editor made numerous very helpful suggestions, and the audience at the 2016 PSA meeting provided kind feedback. Paul Griffiths and Karola Stotz mentored the project, and Maureen O’Malley provided kind encouragement. This research was supported by the Judith and David Coffey Life Lab (headed by Jean Yang), Charles Perkins Centre. This publication was made possible through the support of two grants from the Templeton World Charity Foundation (TWCF0063 to Paul Griffiths, TWCF0242 to Arnaud Pocheville). The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Templeton World Charity Foundation.

1. This measure has been previously proposed in cognitive sciences (Tononi, Sporns, and Edelman 1999) and in computational sciences (Korb, Hope, and Nyberg 2009). Closely related measures have been proposed by Ay and Polani (2008) and Janzing et al. (2013). Transfer entropy is another ‘information as choice’ measure—although correlational in character, not causal (Lizier and Prokopenko 2010)—that has been applied to the study of the origins of life (Walker, Davies, and Ellis 2017).

2. As an anonymous reviewer noticed, the term $p(\hat{c} \mid \hat{b})$ implies that the intervention on C may depend on a previous intervention on B, which seems to contradict the very idea of an intervention. In chosen applications, however, one may decide that interventions are independent and that $p(\hat{c} \mid \hat{b}) = p(\hat{c})$. When the terms differ, the intervention on C can be thought of as a partial intervention, breaking all causal links pointing to C except some stemming from B.

3. The Shannon entropy of a source emitting sequences of length l asymptotically tends toward the expected Kolmogorov complexity (see the next section) of the sequences as l → ∞. Potential causal specificity and expected Kolmogorov complexity thus go hand in hand. I leave it to future work to make this connection more explicit (see Grünwald and Vitányi 2003, 518; Li and Vitányi 2008, 187; Balduzzi 2011).

4. A Turing machine consists of a finite program capable of manipulating a linear list of cells (each containing a symbol from a finite set of symbols), accessing one cell at a time. A universal Turing machine is one that can imitate any other Turing machine (Li and Vitányi 2008, 24).

5. On the algorithmic approach in biology, see, e.g., Yockey (2005, 170) and especially Chaitin (1979, 2012) and the discussion by Artmann (2008, 32–37). I lack space to review the independent convergences and divergences of the account proposed here.

6. Whether the compressibility of sequences is an inevitable aspect of their biological function is precisely a question I wish to address in the long term.

7. Where DNA* is the shortest program generating DNA.

8. Doing so properly would require developing an account of measurement in biology, like those developed for deterministic chaos and quantum indeterminacy. In deterministic chaos, any finite measurement of the initial condition leaves aside information (an infinite amount of it) that will manifest itself in the system after a certain time (Montévil 2018, secs. 2.1, 2.3). Quantum indeterminism represents another entry point to a physicalist view of the appearance of information (see Stamos 2001 and the responses).

9. I follow the notation used for one asymmetrical, causal version of Shannon mutual information (Ay and Polani 2008).

10. Many biologists and some philosophers routinely ascribe to DNA a privileged informational role. One way to reconstruct this idea is to consider that they implicitly assume such a language.

References

Alberts, Bruce, et al. 2013. Essential Cell Biology. 4th ed. New York: Garland Science.
Artmann, Stefan. 2008. “Biological Information.” In A Companion to the Philosophy of Biology, ed. Sahotra Sarkar and Anya Plutynski. New York: Wiley.
Ay, Nihat, and Daniel Polani. 2008. “Information Flows in Causal Networks.” Advances in Complex Systems 11 (1): 17–41.
Balduzzi, David. 2011. “Information, Learning and Falsification.” Preprint, arXiv:1110.3592.
Barbieri, Marcello. 2007. Introduction to Biosemiotics: The New Biological Synthesis. Dordrecht: Springer Science & Business Media.
Calcott, Brett, Arnaud Pocheville, and Paul E. Griffiths. 2018. “Signals That Make a Difference.” British Journal for the Philosophy of Science. doi:10.1093/bjps/axx022.
Chaitin, Gregory J. 1979. “Toward a Mathematical Definition of ‘Life.’” In The Maximum Entropy Formalism, ed. R. D. Levine and M. Tribus, 477–98. Cambridge, MA: MIT Press.
Chaitin, Gregory J. 2012. Proving Darwin: Making Biology Mathematical. New York: Vintage.
Chen, Xin, Sam Kwong, and Ming Li. 2000. “A Compression Algorithm for DNA Sequences and Its Applications in Genome Comparison.” In Proceedings of the Fourth Annual International Conference on Computational Molecular Biology. New York: ACM.
Chen, Xin, Ming Li, Bin Ma, and John Tromp. 2002. “DNACompress: Fast and Effective DNA Sequence Compression.” Bioinformatics 18 (12): 1696–98.
Cooper, Max D., and Matthew N. Alder. 2006. “The Evolution of Adaptive Immune Systems.” Cell 124:815–22.
Cover, Thomas M., and Joy A. Thomas. 2006. Elements of Information Theory. New York: Wiley.
Crick, Francis. 1958. On Protein Synthesis. Symposium of the Society for Experimental Biology XII. New York: Academic Press.
Crick, Francis. 1970. “Central Dogma of Molecular Biology.” Nature 227 (5258): 561–63.
Dretske, Fred I. 1981. Knowledge and the Flow of Information. Oxford: Blackwell.
Godfrey-Smith, Peter. 2000. “On the Theoretical Role of ‘Genetic Coding.’” Philosophy of Science 67:26–44.
Griffiths, Paul E., Arnaud Pocheville, Brett Calcott, Karola Stotz, Hyunju Kim, and Rob Knight. 2015. “Measuring Causal Specificity.” Philosophy of Science 82 (4): 529–55.
Griffiths, Paul E., and Karola Stotz. 2013. Genetics and Philosophy: An Introduction. New York: Cambridge University Press.
Grünwald, Peter D., and Paul M. B. Vitányi. 2003. “Kolmogorov Complexity and Information Theory: With an Interpretation in Terms of Questions and Answers.” Journal of Logic, Language and Information 12 (4): 497–529.
Heams, Thomas. 2014. “Randomness in Biology.” Mathematical Structures in Computer Science 24 (3).
Hiyama, Keiko, Eiso Hiyama, and Jerry W. Shay. 2009. “Telomeres and Telomerase in Humans.” In Telomeres and Telomerase in Cancer, ed. Keiko Hiyama. New York: Springer Science & Business Media.
Jablonka, Eva. 2002. “Information: Its Interpretation, Its Inheritance, and Its Sharing.” Philosophy of Science 69 (4): 578–605.
Jacob, François. 1970. La logique du vivant: Une histoire de l’hérédité. Paris: Gallimard.
Janzing, Dominik, David Balduzzi, Moritz Grosse-Wentrup, and Bernhard Schölkopf. 2013. “Quantifying Causal Influences.” Annals of Statistics 41 (5): 2324–58.
Kay, Lily E. 2000. Who Wrote the Book of Life? A History of the Genetic Code. Stanford, CA: Stanford University Press.
Korb, Kevin B., Lucas R. Hope, and Erik P. Nyberg. 2009. “Information-Theoretic Causal Power.” In Information Theory and Statistical Learning, ed. Frank Emmert-Streib and Matthias Dehmer, 231–65. Boston: Springer.
Kupiec, J. J. 1983. “A Probabilist Theory for Cell Differentiation, Embryonic Mortality and DNA C-Value Paradox.” Speculations in Science and Technology 6 (5): 471–78.
Li, Ming, Jonathan H. Badger, Xin Chen, Sam Kwong, Paul Kearney, and Haoyong Zhang. 2001. “An Information-Based Sequence Distance and Its Application to Whole Mitochondrial Genome Phylogeny.” Bioinformatics 17 (2): 149–54.
Li, Ming, and Paul Vitányi. 2008. An Introduction to Kolmogorov Complexity and Its Applications. 3rd ed. New York: Springer Science & Business Media.
Lizier, J. T., and M. Prokopenko. 2010. “Differentiating Information Transfer and Causal Effect.” European Physical Journal B 73 (4): 605–15.
Maynard Smith, John, and Eors Szathmary. 1995. The Major Transitions in Evolution. Oxford: Oxford University Press.
Maynard Smith, John, and Eors Szathmary. 2000. The Origins of Life: From the Birth of Life to the Origin of Language. Oxford: Oxford University Press.
Mayr, Ernst. 1961. “Cause and Effect in Biology.” Science 134:1501–6.
Montévil, Maël. 2018. “Possibility Spaces and the Notion of Novelty: From Music to Biology.” Synthese. doi:10.1007/s11229-017-1668-5.
Pearl, Judea. 2009. Causality: Models, Reasoning, and Inference. 2nd ed. New York: Cambridge University Press.
Pocheville, Arnaud. 2018a. “New Tools for Thinking about Causation.” Manuscript, University of Sydney.
Pocheville, Arnaud. 2018b. “Signals That Represent the World.” Manuscript, University of Sydney.
Pocheville, Arnaud, Paul E. Griffiths, and Karola Stotz. 2017. “Comparing Causes—an Information-Theoretic Approach to Specificity, Proportionality and Stability.” In Proceedings of the 15th Congress of Logic, Methodology and Philosophy of Science, ed. Hannes Leitgeb et al. London: College Publications.
Pocheville, Arnaud, and Maël Montévil. 2018. “Giving Substance to Biological Information.” Manuscript, University of Sydney.
Sarkar, Sahotra. 1996. “Biological Information: A Skeptical Look at Some Central Dogmas of Molecular Biology.” In The Philosophy and History of Molecular Biology: New Perspectives, ed. Sahotra Sarkar and R. S. Cohen, 187–232. Boston Studies in the Philosophy of Science, vol. 183. Dordrecht: Kluwer.
Shannon, Claude. 1948. “A Mathematical Theory of Communication.” Bell System Technical Journal 27:379–423.
Shea, Nicholas. 2007. “Representation in the Genome and in Other Inheritance Systems.” Biology and Philosophy 22 (3): 313–31.
Stamos, David N. 2001. “Quantum Indeterminism and Evolutionary Biology.” Philosophy of Science 68 (2): 164–84.
Šustar, Predrag. 2007. “Crick’s Notion of Genetic Information and the ‘Central Dogma’ of Molecular Biology.” British Journal for the Philosophy of Science 58 (1): 13–24.
Tononi, Giulio, Olaf Sporns, and Gerald M. Edelman. 1999. “Measures of Degeneracy and Redundancy in Biological Networks.” Proceedings of the National Academy of Sciences 96 (6): 3257–62.
Vinga, Susana. 2014. “Information Theory Applications for Biological Sequence Analysis.” Briefings in Bioinformatics 15 (3): 376–89.
Walker, Sara Imari, Paul C. W. Davies, and George F. R. Ellis. 2017. From Matter to Life: Information and Causality. Cambridge: Cambridge University Press.
Weber, Marcel. 2017. “Discussion Note: Which Kind of Causal Specificity Matters Biologically?” Philosophy of Science 84 (3): 574–85.
Williams, George C. 1992. Natural Selection: Domains, Levels, and Challenges. Oxford: Oxford University Press.
Woodward, James. 2010. “Causation in Biology: Stability, Specificity, and the Choice of Levels of Explanation.” Biology and Philosophy 25 (3): 287–318.
Yockey, Hubert P. 2005. Information Theory, Evolution, and the Origin of Life. Cambridge: Cambridge University Press.