Introduction
The heat death of the universe is a possible ultimate fate of the universe in which it has diminished to a state of no thermodynamic free energy and can therefore no longer sustain processes that increase entropy. This theory stems from the second law of thermodynamics, which states that entropy tends to increase in an isolated system. From this, the theory infers that if the universe lasts for a sufficient time, it will asymptotically approach a state in which all energy is evenly distributed. In other words, according to this theory, there is in nature a tendency towards the dissipation (energy loss) of mechanical energy (motion); hence, by extrapolation, there exists the view that, in time, the mechanical movement of the universe will run down as work is converted to heat, because of the second law of thermodynamics. In this work, we comment on the discovery of isolated dynamical systems (Zak 2016a) that can decrease entropy in violation of the second law of thermodynamics; the resemblance of these systems to living systems provides reasons to hypothesize that 'Life' can slow down the 'heat death' of the Universe, and that this can be associated with the Purpose of Life.
Self-controlled dynamics
The starting point of our approach is the Madelung equations (Madelung 1926), which represent a hydrodynamic version of the Schrödinger equation
(1)$$\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \frac{\rho}{m} \nabla S \right) = 0$$
(2)$$\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + F - \frac{\hbar^2}{2m}\, \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0$$
Here ρ and S are the components of the wave function ${\rm \psi} = \sqrt {\rm \rho} {\rm e}^{iS/\hbar} $ and $\hbar $ is the Planck constant divided by 2π. The last term in equation (2) is known as the quantum potential, and F represents a classical potential. From the viewpoint of Newtonian mechanics, equation (1) expresses continuity of the flow of probability density, and equation (2) is the Hamilton–Jacobi equation for the action S of the particle of mass m. Actually, the quantum potential in equation (2), as a feedback from equation (1) to equation (2), represents the difference between Newtonian and quantum mechanics, and therefore it is solely responsible for fundamental quantum properties.
The Madelung equations (1) and (2) can be converted to the Schrödinger equation using the ansatz
(3)$${\rm \psi} = \sqrt {\rm \rho}\, {\rm e}^{iS/\hbar}$$
where ρ and S are real functions.
Our approach is based upon a modification of the Madelung equation, and in particular, upon replacing the quantum potential with a different Liouville feedback, Fig. 1.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20171219070028-76825-mediumThumb-S147355041700009X_fig1g.jpg?pub-status=live)
Fig. 1. Classic physics, quantum physics and physics of life.
In Newtonian physics, the concept of probability ρ is introduced via the Liouville equation
(4)$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, \mathbf{F}) = 0$$
generated by the system of ordinary differential equations (ODEs)
(5)$$\dot{\mathbf{v}} = \mathbf{F}[\mathbf{v}(t), t]$$
where v is the velocity vector and F has the dimensions of a force per unit mass.
It describes the continuity of the probability-density flow originating from the error distribution
(6)$$\rho_0(\mathbf{V}) = \rho(\mathbf{V}, t = 0)$$
in the initial condition of ODE (5).
Let us generalize equation (2) to the following form
(7)$$\dot{\mathbf{v}} = \mathbf{F}[\rho(\mathbf{v}, t)]$$
where v is the velocity of a hypothetical particle.
This is a fundamental step in our approach: in Newtonian dynamics, the probability never explicitly enters the equation of motion. In addition to that, the Liouville equation generated by equation (7) is, in contrast to equation (4), nonlinear with respect to the probability density ρ
(8)$$\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\, \mathbf{F}[\rho] \right) = 0$$
and therefore the system (7), (8) departs from Newtonian dynamics. However, although it has the same topology as quantum mechanics (since the equation of motion is now coupled with the equation of continuity of the probability density), it does not belong to it either. Indeed, equation (7) is more general than the Hamilton–Jacobi equation (2): it is not necessarily conservative, and the feedback F is not necessarily the quantum or classical potential, although below we will impose a restriction upon it that links F to the concept of information. The relation of the system (7), (8) to Newtonian and quantum physics is illustrated in Fig. 1.
Remark. Here and below we make a distinction between the random variable v(t) and its values V in probability space.
Following (Zak 2016a), we consider the force F that plays the role of a feedback from the Liouville equation (8) to the equation of motion (7). Turning to the one-dimensional case, let us specify this feedback as
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn9.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn10.gif?pub-status=live)
Then equation (9) can be reduced to the following:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn11.gif?pub-status=live)
and the corresponding Liouville equation will turn into the nonlinear partial differential equation (PDE)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn12.gif?pub-status=live)
(see the remark above).
This equation is known as the KdV-Burgers (Korteweg-de Vries-Burgers) PDE. The mathematical theory behind the KdV equation is rich and interesting and, in the broad sense, it is a topic of active mathematical research. A homogeneous version of this equation that illustrates its distinguished properties is a nonlinear PDE of parabolic type. But a fundamental difference between the standard KdV-Burgers equation and equation (12) is that equation (12) dwells in the probability space and therefore must satisfy the normalization constraint
(13)$$\int_{-\infty}^{\infty} \rho\, dV = 1$$
However, as shown in (Zak 2016a), this constraint is satisfied: in physical space it expresses conservation of mass, and it can easily be scaled down to the constraint (13) in probability space. That allows one to apply all the known results directly to equation (12). It should be noticed, however, that the conservation invariants have a different physical meaning: they are not related to conservation of momentum and energy, but rather impose constraints upon the Shannon information.
In physical space, equation (12) has many applications, from shallow-water waves to shock waves and solitons. However, the application of solutions of the same equation in probability space is fundamentally different. The analysis of equations (11)–(13) performed in (Zak 2016a) revealed non-Newtonian properties of their solutions, such as randomness, entanglement and probability interference typical of quantum systems. But the most surprising property of these equations, one that may have fundamental philosophical implications, is the capability of their solutions to violate the second law of thermodynamics, and we will demonstrate it below. For that purpose, consider the simplest case of the system (11)–(13), assuming that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn14.gif?pub-status=live)
and find the change of entropy H
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn15.gif?pub-status=live)
At the same time, the original system (11), (12) is isolated: it has no external interactions. Indeed, the information force (9) is generated by the Liouville equation that, in turn, is generated by the equation of motion (11). In addition to that, the particle described by ODE (11) is in equilibrium
$\dot v = 0$
prior to activation of the feedback (9). Therefore, the solution of equations (11) and (12) can violate the second law of thermodynamics, which means that this class of dynamical systems does not belong to physics as we know it. This conclusion triggers the following question: are there any phenomena in Nature that can be linked to the dynamical systems (11), (12)? The answer will be discussed below.
Thus, despite the mathematical similarity between equation (12) and the KdV-Burgers equation, the physical interpretation of equation (12) is fundamentally different: it is a part of the dynamical system (11), (12) in which equation (12) plays the role of the Liouville equation generated by equation (11). As follows from equation (15), this system, being isolated and in equilibrium, has the capability to decrease entropy, i.e. to move from disorder to order without external resources. In addition to that, as shown in (Zak 2016a), the system displays a transition from a deterministic state to randomness.
This property represents a departure from classical and quantum physics and, as shown in (Zak 2012), provides a link to the behaviour of living systems. This suggests that this kind of dynamics requires an extension of modern physics to include the physics of life.
The process of violation of the second law of thermodynamics is illustrated in Fig. 2: the higher values of ρ propagate faster than the lower ones. As a result, the moving front becomes steeper and steeper, which leads to the formation of solitons ($c_3 > 0$) or shock waves ($c_3 = 0$) in probability space. This process is accompanied by a decrease of entropy.
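The entropy bookkeeping behind this statement can be illustrated numerically. The sketch below does not integrate equations (11)–(12) themselves; it only shows, for an assumed family of Gaussian densities of shrinking width, that the Shannon entropy $H = -\int \rho \ln \rho\, dV$ of a normalized density decreases as the density becomes more sharply peaked, which is what the steepening front in Fig. 2 does to ρ.

```python
import numpy as np

# Minimal illustration: the Shannon entropy H = -int rho ln(rho) dV of a
# normalized density drops as the density becomes more sharply peaked,
# as happens when a front steepens into a shock wave or a soliton.
# (Stand-alone sketch; it is not an integration of equations (11)-(12).)

V = np.linspace(-10.0, 10.0, 4001)
dV = V[1] - V[0]

def entropy(rho):
    """Differential (Shannon) entropy of a density sampled on the grid V."""
    rho = np.clip(rho, 1e-300, None)          # avoid log(0)
    return -np.sum(rho * np.log(rho)) * dV

def gaussian(width):
    rho = np.exp(-V**2 / (2.0 * width**2))
    return rho / (np.sum(rho) * dV)           # enforce the normalization (13)

for w in (3.0, 1.0, 0.3, 0.1):                # progressively sharper peaks
    print(f"width = {w:4.1f}   H = {entropy(gaussian(w)):+.3f}")
# The printed entropy decreases monotonically as the peak sharpens.
```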
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_fig2g.jpeg?pub-status=live)
Fig. 2. Formation of shock waves and solitons in probability space.
As shown in (Zak 2010), there is another mechanism of violation of the second law of thermodynamics in which, instead of the formation of shock waves and solitons in probability space, a negative diffusion takes place in the Liouville equation. That occurs if
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn16.gif?pub-status=live)
A physicist or a biologist may ask the following question: for how long could the entropy of the system described by equations (11)–(13) decrease? The answer is simple: it decreases forever if we remain in the mathematical world. Indeed, as shown in (Zak 2016a), the solution to equations (11)–(13) asymptotically approaches a soliton (in probability space) with an asymptotic decrease of entropy. However, the next question is much harder: what is the chance that this mathematical system could represent an idealized model of a living system?
In order to prove that self-controlled dynamical systems exist not only in the mathematical world but in the real world as well, we will turn to models of living systems. But prior to that, we notice that the Madelung equations do belong to the class of self-controlled ODEs while describing quantum mechanics. However, their solutions do not violate the second law of thermodynamics, which means that not every self-controlled ODE possesses such a capability.
It should be noticed that the work of the information force (9) could be attributed to the internal energy of specific processes in living systems, in the same way in which the work of the quantum potential is attributed to the energy of spin. In the case of living systems, the substructure that is similar to spin in quantum mechanics could be presented in the form of a mathematical model of mind, introduced in (Zak 2014).
Biological interpretation of self-controlled dynamics
The recent statement about the completeness of the physical picture of our Universe made in Geneva raised many questions, one of them being the ability to create Life and Intelligence out of physical matter without any additional entities. The main difference between living and non-living matter is in the direction of their evolution: it has recently been recognized that the evolution of living systems is progressive in the sense that it is directed towards the highest levels of complexity, if complexity is measured by an irreducible number of different parts that interact in a well-regulated fashion. Such a property is not consistent with the behaviour of isolated Newtonian systems, which cannot increase their complexity without external forces. This difference created the so-called Schrödinger paradox: in a world governed by the second law of thermodynamics, all isolated systems are expected to approach a state of maximum disorder; since life approaches and maintains a highly ordered state, one can argue that this violates the second law, implying a paradox (Schrödinger 1944). But living systems are not isolated, due to such processes as metabolism and reproduction: the increase of order inside an organism is compensated by the increase in disorder outside this organism, and that removes the paradox. Nevertheless, it is still tempting to find a mechanism that drives living systems from disorder to order. As shown above, moving from disorder to order is not a prerogative of open systems: an isolated system can do it without help from outside. However, such a system cannot belong to the world of modern physics: it belongs to the world of living matter, and that leads us to the concept of an intelligent particle, the first step towards a physics of living systems.

In order to introduce such a particle, we start with an idealized mathematical model of living systems by addressing only one aspect of Life: a biosignature, i.e. the mechanical invariants of Life, and in particular the geometry and kinematics of intelligent behaviour, disregarding other aspects of Life such as metabolism and reproduction. By narrowing the problem in this way, we are able to extend the mathematical formalism of physics' First Principles to include a description of intelligent behaviour. At the same time, by ignoring metabolism and reproduction, we can make the system isolated, and it will be a challenge to find an activity of living systems that could be modelled by isolated dynamical systems. In this paper we hypothesize that the sought activity could be associated with human intuition, recalling that intuition is defined as a kind of immediate knowledge or awareness not based upon some logical process, a form of insight that appropriately brings together relationships between the elements of a problem or situation.
The proposed model illuminates the 'border line' between living and non-living systems. The model introduces an L-particle (particle of Life) that, in addition to Newtonian properties, possesses the ability to process information. The probability density can be associated with the self-image of the L-particle as a member of the class to which this particle belongs, while its ability to convert the density into the information force can be associated with self-awareness (both these concepts are adopted from psychology). Continuing this line of associations, the equation of motion (see equation (11)) can be identified with a motor dynamics, while the evolution of the density (see equation (12)) can be identified with a mental dynamics. Actually, the mental dynamics plays the role of the Maxwell sorting demon: it rearranges the probability distribution by creating the information force and converting it into a force that is applied to the particle. One should notice that the mental dynamics describes the evolution of the whole class of state variables (differing from each other only by initial conditions), and that can be associated with the ability to generalize, which is a privilege of living systems. Continuing our biologically inspired interpretation, it should be recalled that the second law of thermodynamics states that the entropy of an isolated system cannot decrease. This law has a clear probabilistic interpretation: an increase of entropy corresponds to the passage of the system from less probable to more probable states, while the highest probability of the most disordered state (that is, the state with the highest entropy) follows from a simple combinatorial analysis. However, this statement is correct only if there is no Maxwell's sorting demon, i.e. nobody inside the system is rearranging the probability distributions. But this is precisely what the Liouville feedback is doing: it takes the probability density ρ from equation (12), creates functions of this density, converts them into a force and applies this force to the equation of motion (11). As demonstrated by equation (15), the evolution of the probability density may lead to an entropy decrease 'against the second law of thermodynamics'.
Obviously, the last statement should not be taken literally; indeed, the proposed model attempts to capture only those aspects of living systems that are associated with their behaviour, and in particular with their motor-mental dynamics, since other properties are beyond the dynamical formalism. Therefore, the physiological processes that are needed for metabolism are not included in the model. That is why this model is in a formal disagreement with the second law of thermodynamics while living systems are not. Indeed, in applying the second law of thermodynamics, we consider our system as an isolated one, while the underlying real system is open due to other activities of living systems that were not included in our model. Nevertheless, despite these limitations, the L-particle model attempts to capture the 'magic' of Life: the ability to create analogies of 'self-image' and 'self-awareness' and to move from disorder to order.
Remark. Maxwell's Demon is an imaginary creature that the physicist James Clerk Maxwell conceived in order to challenge the second law of thermodynamics. The demon attempts to extract more useful energy from the system than it originally contained; equivalently, it decreases the randomness of the system (by ordering the molecules according to a certain rule), which decreases the entropy. No such violation of the second law of thermodynamics has ever been found in physics.
From the psychological viewpoint, the proposed model can be interpreted as representing interactions of the L-particle, or living agent, with its self-image and the images of other agents via the mechanisms of self-awareness. In order to associate these basic concepts of psychology with our mathematical formalism, we have to recall that living systems can be studied in many different spaces, such as the physical (or geographical) space as well as abstract (or conceptual) spaces. The latter category includes, for instance, social class space, sociometric space, social distance space, semantic space, etc. Turning to our model, one can identify two spaces: the physical space v, t in which the agent state variables
$v_i = \dot x_i $
evolve (see equations (11)), and an abstract space in which the probability density of the agents' state variables evolves (see equation (12)). The connection between these spaces has already been described earlier: if equations (11) are run many times starting with the same initial conditions, one will arrive at an ensemble of different random solutions, while equation (12) will show the probability for each of these solutions to appear. Thus, equation (12) describes the general picture of the evolution of the communicating agents that does not depend upon particular initial conditions. Therefore, the solution of this equation can be interpreted as the evolution of the self- and non-self-images of the agents that jointly constitute the collective mind in the probability space, Fig. 3.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_fig3g.gif?pub-status=live)
Fig. 3. Collective mind.
Based upon that, one can propose the following interpretation of the model of communicating agents: considering the agents as L-particles, one can identify equations (11) as a model simulating their motor dynamics, i.e. actual motions in physical space, and equation (12) as the collective mind composed of the mental dynamics of the agents. Such an interpretation is evoked by the concept of reflection in psychology. Reflection is traditionally understood as the human ability to take the position of an observer in relation to one's own thoughts. In other words, reflection is self-awareness via the interaction with the image of the self. Hence, in terms of the phenomenological formalism proposed above, a non-living system may possess a self-image, but it is not equipped with self-awareness, and therefore this self-image is not in use. On the contrary, in living systems the self-awareness is represented by the information forces that send information from the self-image (12) to the motor dynamics (11). Due to this property, which is well pronounced in the proposed model, a living agent can run its mental dynamics ahead of real time (since the mental dynamics is fully deterministic and does not depend explicitly upon the motor dynamics) and thereby predict future expected values of its state variables; then, by interacting with the self-image via the information forces, it can change the expectations if they are not consistent with the objective. Such a self-controlled dynamics provides a major advantage for the corresponding living agents, and especially for biological species: due to the ability to predict the future, they are better equipped for dealing with uncertainties, and that improves their survivability. It should be emphasized that the proposed model, strictly speaking, does not discriminate between living systems of different kinds, in the sense that all of them are characterized by a self-awareness-based feedback from the mental (12) to the motor (11) dynamics. However, in primitive living systems (such as bacteria or viruses) the self-awareness is reduced to the simplest form, that is, the self/non-self discrimination; in other words, the difference between living systems is represented by the level of complexity of that feedback.
A broad range of other similarities between living systems and self-controlled dynamics was addressed in (Zak 2008, 2014, 2016b): optimization, abstraction, generalization, cooperation and competition, which are privileges of living systems, were reproduced by solutions of self-controlled dynamics.
Thus, the proposed model suggests a unified description of the progressive evolution of living systems. Based upon this model, one can formulate and implement the principle of maximum increase of complexity that governs the large-time-scale evolution of living systems.
However, despite such a remarkable resemblance between self-controlled systems and the properties of living systems, our goal is still not achieved: we have not yet found any process in the activity of living systems that violates the second law of thermodynamics. In order to achieve that goal, we will take a closer look at the processes associated with human intuition.
Remark. Complexity describes the behaviour of a system or model whose components interact in multiple ways and follow local rules, meaning there is no reasonable higher instruction to define the various possible interactions. It is measured by an irreducible number of different parts that interact in a well-regulated fashion.
Human intuition
Human intelligence, and in particular its most mysterious kind, intuition, has always been an enigma for physicists and an obstacle for artificial intelligence. It was well understood that human behaviour, and in particular the decision-making process, is governed by feedbacks from the external world, and this part of the problem was successfully simulated in the most sophisticated way by control systems. However, in addition to that, when the external world does not provide sufficient information, a human turns for 'advice' to his experience, and that is associated with intuition. In other words, intuition is a phenomenon of the mind that describes the ability to acquire knowledge without inference or the use of reason.
In more recent psychology, intuition can encompass the ability to know valid solutions to problems and to make decisions. For example, the recognition-primed decision (RPD) model explains how people can make relatively fast decisions without having to compare options. It was found that under time pressure, high stakes and changing parameters, experts use their base of experience to identify similar situations and intuitively choose feasible solutions. Thus, the RPD model is a blend of intuition and analysis. The intuition is the pattern-matching process that quickly suggests feasible courses of action. The analysis is the mental simulation, a conscious and deliberate review of the courses of action. However, a more detailed analysis of the psychological and philosophical aspects of intuition is beyond the scope of this paper, and we concentrate upon the mathematical modelling of intuition.
In this section, intuition-based intelligence is implemented by a feedback from the self-image (a concept adapted from psychology) and we will illustrate its physical model in connection with the decision-making process.
A decision-making process can be modelled by the time evolution of a vector π whose components $\pi_i$ (i = 1, 2, …, N) represent a probability distribution over N different choices. The evolution of this vector can be written in the form of a Markov chain:
(17)$$\pi_j(t+1) = \sum_{i=1}^{N} \pi_i(t)\, p_{ij}, \quad j = 1, 2, \ldots, N$$
where $p_{ij}$ is the transition matrix representing a decision-making policy. If $p_{ij} = {\rm const}$, the process (17) approaches some final distribution $\pi^{\infty}$ regardless of the initial state $\pi^{0}$. In particular, in the case of a doubly stochastic transition matrix, i.e. when
(18)$$\sum_{i=1}^{N} p_{ij} = 1, \quad \sum_{j=1}^{N} p_{ij} = 1$$
all the final choices become equally probable
(19)$$\pi_i^{\infty} = \frac{1}{N}, \quad i = 1, 2, \ldots, N$$
i.e. the system approaches its thermodynamic limit, which is characterized by the maximum entropy. When the external world is changing, such rigid behaviour is unsatisfactory, and the matrix $p_{ij}$ has to be changed accordingly, i.e. $p_{ij} = p_{ij}(t)$. Obviously, this change can be implemented only if external information is available and there are certain sets of rules for correct responses. However, in real-world situations, the number of rules grows exponentially with the dimensionality of the external factors, and therefore any man-made device fails to implement such rules in full.
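The baseline regime just described can be simulated in a few lines. The sketch below assumes the row-vector update $\boldsymbol{\pi}(t+1) = \boldsymbol{\pi}(t)P$ reconstructed in (17) and uses an arbitrarily chosen doubly stochastic matrix (not taken from the paper): a strongly ordered initial distribution relaxes to the uniform, maximum-entropy limit (19).

```python
import numpy as np

# With a constant doubly stochastic transition matrix, pi(t+1) = pi(t) @ P
# drives any initial distribution towards the uniform (maximum-entropy)
# limit (19).  The matrix below is an arbitrary illustrative choice.

P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])          # rows and columns both sum to 1

def entropy(pi):
    p = pi[pi > 0]
    return -np.sum(p * np.log(p))

pi = np.array([0.9, 0.05, 0.05])         # a strongly ordered initial state
for t in range(30):
    pi = pi @ P

print("final distribution:", np.round(pi, 4))          # approx. [1/3, 1/3, 1/3]
print("final entropy:", entropy(pi), " maximum:", np.log(3))
```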
The main departure from this strategy can be observed in the human approach to the decision-making process. Indeed, faced with uncertainty, a human uses an intuition-based approach, relying upon his previous experience and knowledge in the form of certain invariants or patterns of behaviour that are suitable for the whole class of similar situations. Such an ability follows from the fact that a human possesses a self-image and interacts with it. This concept, which is widely exploited in psychology, has been known as far back as the ancient philosophers, but so far its mathematical formalization has never been linked to the decision-making model (17).
First, we start with an abstract mathematical question: can the system (17) change its evolution, and consequently its limit distribution, without any external 'forces'? The formal answer is definitely positive. Indeed, if the transition matrix depends upon the current probability distribution
(20)$$p_{ij} = p_{ij}(\pi_1, \pi_2, \ldots, \pi_N)$$
then the evolution (17) becomes nonlinear, and it may have many different scenarios depending upon the initial state $\pi^{0}$. In the particular case (20), it could 'overcome' the second law of thermodynamics, decreasing its final entropy by using only 'internal' resources. Indeed, let us assume that the objective of the system is to approach the deterministic state
(21)$$\pi_1 = 1, \quad \pi_i = 0 \;\; (i \ne 1)$$
Then, as shown in (Zak 2016a), if the feedback is chosen as
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqnU1.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn22.gif?pub-status=live)
the evolution of the probability $\pi_1$ can be presented as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn23.gif?pub-status=live)
in which $p_{11}$ and $p_{22}$ are substituted from equations (22).
It is easily verifiable that
(24)$$\pi_1 \to 1 \quad {\rm at} \quad t \to \infty$$
i.e. the objective is achieved due to the ‘internal’ feedback (22).
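Since equations (22) and (23) are not reproduced here, the following sketch uses a hypothetical probability-dependent transition matrix in the same spirit (it is not the feedback (22) of the paper): the chain is driven towards the deterministic state (21), and its entropy decreases using only internal information.

```python
import numpy as np

# Illustration of a probability-dependent ("self-controlled") Markov chain.
# The feedback below is a hypothetical choice, not equation (22): once in
# state 1 the system stays there, and the chance of jumping from state 2 to
# state 1 grows with the current value of pi_1.  The chain is driven towards
# the deterministic state (pi_1, pi_2) = (1, 0), so its entropy decreases.

def P_of(pi):
    p11, p21 = 1.0, pi[0]                # hypothetical feedback P = P(pi)
    return np.array([[p11, 1.0 - p11],
                     [p21, 1.0 - p21]])

def entropy(pi):
    p = pi[pi > 0]
    return -np.sum(p * np.log(p))

pi = np.array([0.5, 0.5])                # maximum-entropy initial state
for t in range(25):
    pi = pi @ P_of(pi)
    if t % 5 == 0:
        print(f"t = {t:2d}   pi_1 = {pi[0]:.4f}   H = {entropy(pi):.4f}")
# pi_1 -> 1 and H -> 0: the limit distribution changes without external forces.
```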
The implementation of the stochastic process whose probabilities are described by the Markov chain (17) with the feedback (22) has been described in (Zak 2016a). This stochastic process can be simulated by quantum recurrent nets (QRN) (see Fig. 4).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_fig4g.gif?pub-status=live)
Fig. 4. A one-dimensional quantum recurrent network.
This QRN is described by the following set of difference equations with constant time delay
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn25.gif?pub-status=live)
The curly brackets are intended to emphasize that σ1 is to be taken as a measurement operation with an effect similar to that of a sigmoid function in classical neural networks.
An initial state, |ψ(0)⟩, is fed into the network, transformed under the action of a unitary operator U and subjected to a measurement indicated by the measurement operator M{ }, and the result of the measurement is used to control the new state fed back into the network at the next iteration. One is free to record, duplicate or even monitor the sequence of measurement outcomes, as they are all merely bits and hence constitute classical information. Moreover, one is free to choose (for computational purposes) the function used during the reset phase, including the possibility of adding no offset state whatsoever. Such flexibility makes the QRN architecture remarkably versatile. To simulate a Markov process, it is sufficient to return just the last output state to the next input at each iteration (Zak 2011).
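The measurement-feedback loop just described can be mimicked classically: acting with a unitary U on a basis state and measuring in the computational basis yields the next basis state with probabilities $|U_{jk}|^2$, so the record of outcomes forms a Markov chain. The sketch below uses an arbitrarily chosen 2 × 2 rotation; it is an illustration of the loop, not an implementation of equation (25).

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical simulation of the measurement-feedback loop: feed a basis state
# |k> into a unitary U, measure in the computational basis, and return the
# measured basis state as the next input.  The record of outcomes is then a
# Markov chain with transition probabilities |U_jk|^2.  The 2x2 rotation
# below is an arbitrary illustrative choice (not equation (25)).

theta = 0.3 * np.pi
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def step(k):
    """One iteration: act with U on basis state |k>, then measure."""
    amplitudes = U[:, k]                       # column k of U is U|k>
    probs = np.abs(amplitudes) ** 2            # Born rule
    return rng.choice(len(probs), p=probs)     # measurement outcome

k, counts = 0, np.zeros(2)
for _ in range(20000):
    k = step(k)
    counts[k] += 1

print("empirical occupation:", counts / counts.sum())
print("transition matrix |U_jk|^2:\n", np.abs(U) ** 2)
```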
From a physical viewpoint, the example described above can be associated with a particle that escapes from Brownian motion using its own 'internal effort', in violation of the second law of thermodynamics, while the entropy decreases from a very large value to zero (Zak 2010). In other words, as a result of the interaction with his own image and without any 'external' enforcement, the decision maker can depart from the thermodynamic limit (19) of his performance 'against the second law'. Obviously, the enforcement in the form of the feedback (22) is an internal one, since the image (17) is the uniquely defined product of the dynamical evolution (25), i.e. such a 'free will' effort is in disagreement with the second law of thermodynamics. The philosophical consequences of this result have been discussed in (Zak 2016a). It is easy to conclude that the system of equations (25) and (17) represents a finite-difference approximation of the dynamical system of equations (11) and (12), respectively, where equation (25) describes the motor dynamics and equation (17) the mental dynamics. Obviously, equations (22) correspond to the feedback (9), and equation (18) to the normalization constraint (13).
Models and reality
Does Nature make use of models 'offered' by mathematics? The history of mathematics demonstrates that it does. Actually, our problem is to find a match between a model of self-controlled dynamics 'granted' by mathematics and some natural phenomena. A part of the problem has already been solved: we found a match between self-controlled systems that do not violate the second law of thermodynamics and living systems. However, the most difficult part is still open: what natural phenomena could match the special type of self-controlled systems that cannot be described by conventional physics, since they violate the second law of thermodynamics? In order to find the answer, let us turn to the processes in the human brain. The information provided by neurophysicists is the following: of all the objects in the universe, the human brain is perhaps the most complex, so it is no surprise that, despite the glow from recent advances in the science of the brain and mind, we still find ourselves squinting in the dark somewhat. Most neuroscientists think that the brain is not computable and no engineering can reproduce it, and that human consciousness cannot be replicated in silicon because most of its important features are the result of unpredictable, nonlinear interactions among billions of cells. Intuition (as used in this paper) is among a dozen brain processes that can be reduced neither to Newtonian nor to quantum physics, and that may increase the chances that enlarging contemporary physics with the capability to violate the second law of thermodynamics could help, under the assumption that contemporary physics is capable of representing brain processes. The information provided by neuroscience makes a connection between intuition and implicit or unconscious recognition memory, which arises from information that was not attended to but which is processed and can subsequently be retrieved without ever entering into conscious awareness. The study also provides evidence that the retrieval of explicit and implicit memories involves distinct neural substrates and mechanisms. The distinction between explicit and implicit memory has been recognized for centuries. It was known that implicit memories could influence behaviour, because a human can learn to perform new motor skills despite having severe deficits in other forms of memory. Thus, the term implicit memory refers to the phenomenon whereby previous experience, of which one is not consciously aware, can aid performance on specific tasks. And that is all that can be provided by neuroscience. In view of that, we will try to complement the concept of intuition by mathematical considerations.

The idea is illustrated below. We start with the problem of finding the global maximum of a surface (see Fig. 5). A rational agent (a robot) will perform the following sequence of steps: (1) find the points with zero first derivatives by solving a system of algebraic equations; (2) compute all the components of the curvature tensor at each of these points; (3) select only those points at which all the curvature components are negative; (4) compute the values of the function at the selected points and find the global maximum.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20171219070028-93508-mediumThumb-S147355041700009X_fig5g.jpg?pub-status=live)
Fig. 5. Finding the global maximum.
This is the simplest algorithm (without 'human tricks') that can be executed by a robot. However, its cost grows exponentially with the dimensionality of the surface, as it does for any global optimization problem. That is why this class of algorithms has become a major obstacle to progress in artificial intelligence.
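A reduced form of the 'rational robot' strategy, exhaustive search over a grid, makes this exponential cost explicit. The test objective and the grid in the sketch below are arbitrary illustrative choices.

```python
import itertools
import numpy as np

# The 'rational robot' strategy reduced to its simplest form: exhaustive
# search over a grid.  The number of function evaluations grows as m**n
# with the dimension n, which is the exponential cost discussed in the
# text.  The objective below is an arbitrary multimodal test function.

def objective(v):
    v = np.asarray(v)
    return float(np.exp(-np.sum((v - 0.5) ** 2))
                 + 0.5 * np.exp(-np.sum((v + 1.0) ** 2)))

m = 11                                    # grid points per dimension
axis = np.linspace(-2.0, 2.0, m)
for n in (1, 2, 3, 4, 5):
    best = max(itertools.product(axis, repeat=n), key=objective)
    print(f"n = {n}: {m**n:>7d} evaluations, maximum near {np.round(best, 2)}")
# 11, 121, 1331, 14641, 161051 evaluations: the cost m**n is exponential in n.
```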
The alternative we propose exploits a special type of self-controlled dynamics that solves the global maximum problem while bypassing all exponentially complex operations. The idea of this algorithm is the following: introduce the positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability of appearing. Following (Zak 2008), we briefly describe the algorithm. For that purpose, let us replace the feedback (9) by the following
(26)$$f = \frac{\xi}{\rho(v,t)} \int_{-\infty}^{v} [\rho(\zeta, t) - \rho^{\ast}(\zeta)]\, d\zeta$$
Here ρ*(v) is a preset probability density satisfying the constraint (13), and ξ is a positive constant with dimensionality [1/s]. As follows from equation (26), f has the dimensionality of a force per unit mass that depends upon the probability density ρ, and therefore it can be associated with the concept of information, so we will call it the information force. In this context, the coefficient ξ can be associated with the Planck constant that relates Newtonian and information forces. But since we are planning to deal with living systems that belong to the macro-world, ξ must be of the order of a viscous friction coefficient.
With the feedback (26), equations (7) and (8) take the form, respectively
(27)$$\dot v = \frac{\xi}{\rho(v,t)} \int_{-\infty}^{v} [\rho(\zeta, t) - \rho^{\ast}(\zeta)]\, d\zeta$$
(28)$$\frac{\partial \rho}{\partial t} + \xi\, [\rho(V,t) - \rho^{\ast}(V)] = 0$$
The last equation has the analytical solution
(29)$$\rho(V,t) = \rho^{\ast}(V) + [\rho_0(V) - \rho^{\ast}(V)]\, {\rm e}^{-\xi t}$$
subject to the initial condition
(30)$$\rho(V, 0) = \rho_0(V)$$
that satisfies the constraint (13).
This solution converges to the preset stationary distribution ρ*(v). Obviously, the normalization condition for ρ is satisfied if it is satisfied for ρ0 and ρ*. Indeed,
(31)$$\int_{-\infty}^{\infty} \rho\, dV = {\rm e}^{-\xi t} \left( \int_{-\infty}^{\infty} \rho_0\, dV - \int_{-\infty}^{\infty} \rho^{\ast}\, dV \right) + \int_{-\infty}^{\infty} \rho^{\ast}\, dV = 1$$
Rewriting equation (29) in the form
(32)$$\rho(V,t) = \rho_0(V)\, {\rm e}^{-\xi t} + \rho^{\ast}(V)\, (1 - {\rm e}^{-\xi t})$$
one observes that ρ ≥ 0 for all t ≥ 0 and −∞ < V < ∞.
As follows from equation (29), the solution of equation (28) has an attractor that is represented by the preset probability density ρ*(v). Substituting the solution (29) into equation (27), one arrives at the ODE that simulates the stochastic process with the probability distribution (29)
(33)$$\dot v = \frac{\xi\, {\rm e}^{-\xi t}}{\rho^{\ast}(v) + [\rho_0(v) - \rho^{\ast}(v)]\, {\rm e}^{-\xi t}} \int_{-\infty}^{v} [\rho_0(\zeta) - \rho^{\ast}(\zeta)]\, d\zeta$$
It is reasonable to assume that the solution (29) starts with maximum entropy when
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn34.gif?pub-status=live)
As a result of that assumption, all the randomness is supposed to be preset in the form of a Brownian motion characterized by large entropy. At this point, we deviate from the case considered in (Zak 2008) in order to demonstrate the violation of the second law of thermodynamics. Indeed, as follows from equation (29), the probability density ρ0(v) is attracted to the preset distribution ρ*(v), while the entropy changes from the large entropy of the Brownian motion to the entropy H* of the preset distribution, which could be much smaller than the initial one. At the same time, the original system (27), (28) is isolated: it has no external interactions. Indeed, the information force (26) is generated by the Liouville equation that, in turn, is generated by the equation of motion (27). In addition to that, the particle described by ODE (27) is in equilibrium
$\dot v = 0$
prior to activation of the feedback (26). Therefore, the solution of equations (27) and (28) could violate the second law of thermodynamics, which means that this class of dynamical systems does not belong to physics as we know it.
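A numerical check of this behaviour, written under the assumption of the exponential relaxation reconstructed in equation (29) and with arbitrarily chosen Gaussian densities, shows the normalization (31) being preserved while the Shannon entropy of ρ(V, t) falls from the large initial value of a broad ρ0 towards the much smaller entropy H* of a narrow preset ρ*.

```python
import numpy as np

# Numerical check, assuming the exponential relaxation
# rho(V, t) = rho* + (rho0 - rho*) exp(-xi t) reconstructed in equation (29).
# A broad (high-entropy) initial density rho0 is attracted to a narrow preset
# density rho*, so the Shannon entropy of rho(V, t) ends up much lower than
# the initial one, while the normalization (31) is preserved.
# All parameter values below are illustrative choices.

V = np.linspace(-20.0, 20.0, 8001)
dV = V[1] - V[0]

def gaussian(mu, sigma):
    rho = np.exp(-(V - mu) ** 2 / (2.0 * sigma ** 2))
    return rho / (np.sum(rho) * dV)               # normalization (13)

def entropy(rho):
    rho = np.clip(rho, 1e-300, None)
    return -np.sum(rho * np.log(rho)) * dV

rho0 = gaussian(0.0, 5.0)                         # broad 'Brownian' start
rho_star = gaussian(2.0, 0.2)                     # narrow preset attractor
xi = 1.0

for t in (0.0, 1.0, 2.0, 4.0, 8.0):
    rho_t = rho_star + (rho0 - rho_star) * np.exp(-xi * t)
    print(f"t = {t:4.1f}   norm = {np.sum(rho_t) * dV:.4f}   H = {entropy(rho_t):+.3f}")
# The norm stays equal to 1 while H drops from about +3.0 towards H* of about -0.2.
```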
The approach is generalized to the n-dimensional case simply by replacing v with a vector $v = (v_1, v_2, \ldots, v_n)$, since equation (28) does not include space derivatives
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn35.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20171219065929922-0826:S147355041700009X:S147355041700009X_eqn36.gif?pub-status=live)
The idea of the proposed algorithm, in more detail, is the following: introduce the positive function $\psi(v_1, v_2, \ldots, v_n)$, $|v_i| < \infty$, to be maximized as the probability density $\rho^{\ast}(v_1, v_2, \ldots, v_n)$ to which the solution of equation (32) is attracted. Then the larger values of this function will have the higher probability of appearing. The following steps are needed to implement this algorithm (a one-dimensional numerical sketch of the whole procedure is given below):
1. Build and implement the n-dimensional version of the model equations (31) and (32) as an analog device
(37)$$\eqalign{&\dot v_i = \displaystyle{{{\rm e}^{ - t}} \over {n\{ [{\rm \rho} _0 (v) - {\rm \rho} {^\ast} (v)]{\rm e}^{ - t} + {\rm \rho} {^\ast} (v)\}}} \int_{ - \infty} ^{v_i} {[{\rm \rho} _0 ({\rm \zeta} ) - {\rm \rho} {^\ast} (} {\rm \zeta} )]d{\rm \zeta},\cr & \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad i = 1,2, \ldots n.}$$
2. Normalize the function to be maximized
(38)$$\bar {\rm \psi} (\{ v\} ) = \displaystyle{{{\rm \psi} (\{ v\} )} \over {\int_{ - \infty} ^\infty {{\rm \psi} (\{ v\} )d\{ v\}}}} $$
3. Using equation (32), evaluate the time τ needed to approach the stationary process to accuracy ε
(39)$${\rm \tau} \approx \ln \displaystyle{{1 - \bar {\rm \psi}} \over {{{\rm \varepsilon} \bar{\rm \psi}}}} $$
4. Substitute $\bar {\rm \psi} $ instead of ρ* into equation (37) and run the system during the time interval τ.
5. The solution will 'collapse' into one of the possible solutions with the probability $\bar {\rm \psi} $. Observing (measuring) the corresponding values of {v*}, find the first approximation to the optimal solution.
6. Switch the device back to the initial state and start again to arrive at the next approximation.
7. The sequence of approximations represents Bernoulli trials that exponentially improve the chances of the optimal solution becoming the winner. Indeed, the probabilities of success $\rho_s$ and failure $\rho_f$ after the first trial are, respectively
(40)$${\rm \rho} _s = \bar {\rm \psi} _1, \quad {\rm \rho} _f = 1 - \bar {\rm \psi} _1 $$
Then the probability of success after M trials is
(41)$${\rm \rho} _s^{(M)} = 1 - (1 - \bar {\rm \psi} _1 )^M $$
Therefore, after a polynomial number of trials, one arrives at the solution to the problem. Despite several computational advantages of this algorithm over existing algorithms, the basic problem in question is the implementability of analog simulations using Newtonian/quantum resources. Indeed, the model described by equations (31) and (32) does not belong to physical space as we know it: it belongs to the expanded quantum space (see Fig. 1). This means that, in principle, a purely analogue simulation of this algorithm is impossible unless some digital device is included (see (Zak 2016a)). However, our goal here is not an AI implementation, but rather a justification of the hypothesis that the best 'implementation' is human brain processes, and in particular human intuition. Actually, the example described above demonstrates that the self-controlled system bypasses the exponentially complex operations and finds a shortcut leading to a fast solution of the problem, due to the capability to violate the second law of thermodynamics. This explains why the chess world champion Kasparov beat the much more powerful supercomputer Deep Blue in 1996: the computer, as a rational robot, computed and compared, prior to each move, all possible continuations N moves ahead, while the number of such continuations grows exponentially with N; on the contrary, Kasparov computed and compared only reasonable moves, and the selection of such moves was provided by the 'Maxwell demon' that violated the second law of thermodynamics.
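For concreteness, the sketch below implements steps 1–7 in one dimension, using the n = 1 form of equation (37): ρ0 is a uniform (maximum-entropy) density, ρ* is the normalized objective (38), each trial integrates (37) from a random initial point drawn from ρ0, and the repeated trials play the role of the Bernoulli trials of step 7. The objective, the grid, the Euler step and the fixed integration time (used in place of the estimate (39)) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-dimensional sketch of steps 1-7 above, using the n = 1 form of
# equation (37).  The objective psi, the grid, the uniform initial density
# rho0, the Euler step and the fixed integration time (used instead of the
# estimate (39)) are all illustrative assumptions.

L = 4.0                                       # search interval [-L, L]
V = np.linspace(-L, L, 2001)
dV = V[1] - V[0]

psi = np.exp(-8.0 * (V - 1.5) ** 2) + 0.6 * np.exp(-8.0 * (V + 1.0) ** 2)
rho_star = psi / (np.sum(psi) * dV)           # step 2: normalization (38)
rho0 = np.full_like(V, 1.0 / (2.0 * L))       # uniform (maximum-entropy) start
cum = np.cumsum(rho0 - rho_star) * dV         # integral of [rho0 - rho*] up to v

def run_trial(t_final=8.0, dt=0.01):
    """Step 4: integrate equation (37) with n = 1 from v(0) drawn from rho0."""
    v, t = rng.uniform(-L, L), 0.0
    while t < t_final:
        rho_t = (np.interp(v, V, rho0 - rho_star) * np.exp(-t)
                 + np.interp(v, V, rho_star))
        v += dt * np.exp(-t) * np.interp(v, V, cum) / max(rho_t, 1e-12)
        v = min(max(v, -L), L)
        t += dt
    return v

# Steps 5-7: repeated trials ('collapses') play the role of Bernoulli trials;
# keep the candidate with the largest value of the objective.
samples = np.array([run_trial() for _ in range(200)])
best = samples[np.argmax(np.interp(samples, V, psi))]
print("best candidate for the global maximum:", round(best, 3))   # near v = 1.5
```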
A more fundamental mathematical approach, briefly described in the section 'Self-controlled dynamics', was developed by Zak (2014).
Philosophical implications
The discovery of isolated dynamical systems that could violate the second law of thermodynamics, together with the phenomenological resemblance of their behaviour to human brain processes, calls for a revision of the concept of heat death.
As stated in the Introduction, the heat death of the universe is a possible ultimate fate of the universe in which it has diminished to a state of no thermodynamic free energy and can therefore no longer sustain processes that increase entropy (including computation and life); if the universe lasts for a sufficient time, it will asymptotically approach a state where all energy is evenly distributed and the mechanical movement of the universe runs down as work is converted to heat. But could heat death be prevented? Based upon the above discovery, we hypothesize that, in principle, it is possible. However, this conclusion is not the one that represents our goal: nobody is seriously worried about heat death, since the time scale of human life is negligible on the cosmological scale. What is more important is the role of life in physics, the reason for its promotion by Nature, as well as its purpose: to affect cosmological processes, including the possible prevention of the heat death.
So far we have presented a speculative and analogical justification not for all living systems, and not even for all humans, but only for 'outstanding' humans who make great discoveries and thereby decrease the world's entropy (over some finite time interval) by making the world more ordered. For instance, Newton compressed all the mechanical information about the macro-world into three parameters: mass, acceleration and force. Schrödinger expanded these results to the micro-world, etc. Using the terminology of this work, such discoveries contributed more order to the 'mental dynamics'. However (continuing with this paper's speculative line of argument), later on they materialized into 'motor dynamics', i.e. into high technology, and became more visible and more readily computable in terms of entropy.