# What if free will is not an illusion and can be reconciled with the known laws of physics?

-by I, Quantum

Synopsis – Free Will

1. The Question of Free Will
2. Dualities
3. About Quantum Mechanics and Biology
4. The Origin of Choice and the First/Third Person Duality
5. Crisscross Entanglement and the Nonlinear Schrödinger Equation
6. The Mind’s Eye and Free Will
7. Making Sense of Experimental Results in Neuroscience
8. What is it like to be an Electron?
9. Predictability
10. (stress, meditation, sex, understanding, self-awareness, Gödel’s incompleteness theorem, qualia of the senses, moral responsibility, vice, love, consciousness)


# Free Will – Synopsis

The physicist Freeman Dyson once said: “…mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call ‘chance’ when they are made by electrons.” In this essay, we assume Dyson was correct in his conjecture and explore the consequences. We start by supposing that a 1st person perspective exists in fundamental particles, like the electron, and that a duality exists between this and the 3rd person quantum mechanical description – two equivalent but different perspectives. For example, when electrons are fired through a Stern-Gerlach apparatus quantum mechanics says we will find the electron spin “up” a certain percentage of the time, and spin “down” the rest with nothing in between possible. From the electron’s perspective, it is forced to make a choice, with quantum probabilities manifesting as preferences for “up” vs “down”. In this way, the laws of quantum mechanics are followed precisely, the electron freely makes a choice in accordance with its preferences and the dual views agree. However, much is missing from the electron’s experience when contrasted with human consciousness: the electron has choice forced upon it – we fired it through the Stern-Gerlach apparatus – it cannot choose to not make a choice, it has no means to retain memories, no self-awareness, no mind’s eye, etc. Furthermore, electrons are interacting with the environment and being forced to make choices at a maddening rate of billions of times per second. With quantum entanglement things begin to change, however. Quantum entanglement is the only process within physics by which two or more fundamental particles can become “One system” of particles in a meaningful way. And, in practice, billions of particles have been entangled with no theoretical upper bound. 
We use recent theorems of quantum mechanics that show how sustained quantum entanglement in biological systems is possible despite fast decoherence rates. As entanglement grows more extensive we show how memory emerges in the entangled system, and how choices become possible without collapsing the system, while adding the vast computational power of a quantum system. At a certain point entanglement becomes so extensive that the system’s wave function interacts with itself, forming a crisscross topology. This self-interaction gives rise to a nonlinear Schrödinger equation, which has known solutions, e.g. Davydov solitons and Fröhlich condensates. Such a nonlinearity may allow NP-complete problems to be solved in polynomial time. The crisscross entanglement topology is conceptually like the system regulating its own magnetic field à la the Stern-Gerlach apparatus. From the 1st person perspective this feels like the “mind’s eye” and allows the system to choose what choices to make – and whether to make a choice or not. This gives rise to free will. We show that all of this is consistent with the latest experimental results from biology and neuroscience. Our results paint a compelling picture of the evolution of life as a phenomenon of growing quantum entanglement in which more advanced conscious phenomena, like memory, self-awareness, a mind’s eye, and free will emerge with, and dual to, growing quantum entanglement. We further consider recent theories showing how quantum entanglement among the electron clouds in DNA stabilizes the molecule. We explore the consequences if such stabilizing effects can be extended to entanglement throughout organisms. If true, we can offer a natural, compelling explanation, in quantum mechanical terms, of a great number of subjective phenomena – the feeling of Oneness of the organism, stress, meditation, sex, understanding, self-awareness, empathy, moral responsibility, qualia of the senses, vice, love and consciousness.
Furthermore, this would be a potential solution to the so-called combination problem of panpsychism, a variant form of the hard problem of consciousness.

# What if Free Will is NOT an Illusion AND can be Reconciled with the Known Laws of Physics?

(CC BY-NC 4.0)

I, Quantum

“It is remarkable that mind enters into our awareness of Nature on two separate levels. At the highest level, the level of human consciousness, our minds are somehow directly aware of the complicated flow of electrical and chemical patterns in our brains. At the lowest level, the level of single atoms and electrons, the mind of an observer is again involved in the description of events. Between lies the level of molecular biology, where mechanical models are adequate and mind appears to be irrelevant. But I, as a physicist, cannot help suspecting that there is a logical connection between the two ways in which mind appears in my Universe. I cannot help thinking that our awareness of our own brains has something to do with the process which we call ‘observation’ in atomic physics. That is to say, I think our consciousness is not just a passive epiphenomenon carried along by the chemical events in our brains, but is an active agent forcing the molecular complexes to make choices between one quantum state and another. In other words, mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call ‘chance’ when they are made by electrons.” – Freeman Dyson, Physicist

1. The Question of Free Will
2. Dualities
3. About Quantum Mechanics and Biology
4. The Origin of Choice and the First/Third Person Duality
5. Crisscross Entanglement and the Nonlinear Schrödinger Equation
6. The Mind’s Eye and Free Will
7. Making Sense of Experimental Results in Neuroscience
8. What is it like to be an Electron?
9. Predictability
10. (stress, meditation, sex, understanding, self-awareness, Gödel’s incompleteness theorem, qualia of the senses, moral responsibility, vice, love, consciousness)

# I. The Question of Free Will

Do you believe in free will? Are you free to choose what you think, say, and do? Or are your decisions the result of molecular interactions in your body, governed by the laws of physics, over which your control is merely illusory? Democritus (c. 460 – 370 B.C.) was the first to formalize this latter view, called determinism. He and his mentor, Leucippus, “claimed that all things, including humans, were made of atoms in a void, with individual atomic motions strictly controlled by causal laws,” leaving no room for free will. He said, “by convention color, by convention sweet, by convention bitter, but in reality, atoms and a void” (via Wikipedia). Aristotle (384 – 322 B.C.), on the other hand, was the first to argue for the former perspective. In The Nicomachean Ethics he “lays out which actions deserve praise and which do not…” suggesting “we are in some way responsible for our actions” (from classicalwisdom.com), alluding to free will. Shortly thereafter, Epicurus argued that as atoms moved through the void, there were occasions when they would ‘swerve’ from their otherwise determined paths, thus initiating new causal chains. Epicurus argued “that these swerves would allow us to be responsible for our actions, something impossible if every action was deterministically caused” (from Wikipedia). Epicurus “did not say the swerve was directly involved in decisions, but, following Aristotle, he thought human agents have the autonomous ability to transcend” causal laws (from Wikipedia). Philosophers and scientists would go on to debate free will for the next two thousand years, some for and some against, and continue to do so to this day – with no resolution on the matter…

Figure 1: Bust of Aristotle (384-322 BC), By jlorenz1 – jlorenz1, CC BY 2.5, via Wikipedia

All of us feel that we have control of our choices. You certainly feel you have the power to continue or stop reading this essay if you so choose. It may, therefore, come as a surprise to you to know that many scientists and most physicists think that free will is just an illusion. Generally, they believe that your actions are governed by the laws of fundamental physics, which, in turn, describe the behavior of molecules, which, in turn, describe the behavior of neurons in your brain, which, in turn, determine your behavior. Their rationale can be summarized in the following two points:

First, scientists have been extremely successful at reductionism – finding a simple description of the world where a huge collection of phenomena, from the smallest scales of atoms to the largest scales of the Universe itself, can be explained in terms of a short list of 17 fundamental particles and just four forces (see “Quantum Fields, the Real Building Blocks of the Universe” by David Tong (2017) for more). Whether we are talking about general relativity (large scale) or quantum mechanics (small scale), predictions of physics equations made long ago continue to be experimentally validated. For example, gravitational waves, predicted by the equations of general relativity in 1916, have recently been detected. Detecting them required an enormous four-kilometer-long apparatus, called LIGO, so sensitive it can register movements of a billionth of a billionth of a meter ($10^{-18}$ m). Similarly, the predictions of quantum mechanics have been experimentally validated time and time again. For example, the quantum mechanical prediction of the magnetic moment of the electron has been verified to 11 decimal places (see here for more). Given these foundations, reductionism describes a world in which these rigorous fundamental laws determine the behavior of atoms, which determine the chemistry of molecules, from which emerge the behavior of biological organisms, and so on right up to the entire workings of our own minds – a tree, if you will, with its roots in physics, upon which chemistry and then biology are built. In this video (2013) physicist Sean Carroll likens the emergence of free will to the emergence of temperature in a room full of molecules. No single molecule has a temperature – molecules have kinetic energy – but looked at collectively they do: temperature derives from the average kinetic energy of the molecules. And temperature is obviously very useful in describing the world around us.
Similarly, in this video another physicist, Max Tegmark (2014), compares the emergence of free will to the emergence of the property of wetness from water – no single water molecule has the property of “wetness”, but collectively, water obviously does. The reductionist view is extremely compelling to scientists and receives such widespread acceptance it can feel like dogma.

Figure 2: The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth. By MissMJ – Own work by uploader, PBS NOVA, Fermilab, Office of Science, United States Department of Energy, Particle Data Group, Public Domain, via Wikipedia

Second, even though living organisms seem starkly different from all the inanimate matter in the reductionist tree, recent experiments in neuroscience have seemingly directly reinforced the reductionist view. In the 1980s neuroscientist Benjamin Libet performed experiments on subjects (Libet’s experiments) and observed that a “readiness potential” (RP), an EEG-based measure of activity in the motor cortex of the brain, was correlated with actions like lifting a finger. The subjects were shown an easy-to-read visualization of a clock (see Libet’s clock here) and were asked to report the time shown when they decided to lift their finger. When the time reported by the subjects was compared with the timing of the readiness potential, it was found the RP increased a few hundred milliseconds before the subject reported making the decision. This suggested that the RP was determining the subject’s actions and, therefore, that their feeling of free will was illusory. More recently, neuroscientists have been able to insert electrodes into the brains of living patients and record the firings of individual neurons – see this TEDx video “Free Won’t” by Moran Cerf (2015). With the electrodes still hooked up to the patient, neuroscientists predict when and which button she will choose to press. The video shows the patient trying to outwit the machine, but it seems to know which button she intends to press before she can press it! (see more in Fried et al. (2010) and Soon et al. (2008)) Such results suggest that actions might be deterministically predictable, and, if that is true, then there is no room for free will, right?

Still, there is something that feels incomplete about this rational, reductionist, objective description of the world. Perhaps the trouble with it is just that – it is objective! Consider this: even if there is neural activity that predicts simple actions, and even if there are many examples of properties in Nature that emerge, science only offers a description of this deep, rich, subjective 1st person experience – and this empowering feeling of free will – in the 3rd person! The claim that the subjective experience emerges at some point feels arbitrary and trivial. Think about Isaac Asimov’s famous sci-fi novel “I, Robot”. If it had instead been named “Wet Robot”, or “Square Robot”, or “Hot Robot”, it just wouldn’t have had anywhere near the same effect! Well, ok, you might have thought for a second about Hot Robot until you realized I was just talking about temperature! All kidding aside, there is a profound difference there. Something much deeper is going on. There are powerful subjective sensual qualities to life’s experiences which simply are not addressed in the scientific framework. Nobel Prize-winning physicist Erwin Schrödinger, in his book ‘What is Life?‘ (1944), had this to say:

“In this chapter I have tried by simple examples, taken from the humblest of sciences, namely physics, to contrast the two general facts (a) that all scientific knowledge is based on sense perception, and (b) that none the less the scientific views of natural processes formed in this way lack all sensual qualities and therefore cannot account for the latter”. – Erwin Schrödinger, What is Life? (1944)

Times have certainly changed regarding physics as the “humblest of sciences”, but Schrödinger’s point is as true today as ever! The reason science can’t say anything about this essential quality of life is, as physicist Sean Carroll specifically says (here): “we (scientists) realized that we are not smart enough to learn about the world just by thinking about it, we have to go out and look at it”. That’s all well and good for scientific objectivity, but, when we invoke this modus operandi, we are necessarily looking at the world from the 3rd person perspective. Never will you see in a scientific journal articles like: “How it feels when I subject myself to a THz laser“! (although we look forward to this paper: “What is it like to see entanglement?“) Here is Sean Carroll debating Alan Wallace, a Buddhist scholar, who charges that “introspection plays no role in modern science”: “The Nature of Reality: A Dialogue Between a Buddhist Scholar and a Theoretical Physicist“. Wallace is correct. If we want to understand where our internal, 1st person perspective comes from, and our apparent free will, we are forced to undertake a subjective exploration. That’s not to say we want to abandon the external, objective success of science – we want to supplement it! Can we do both? Suppose we could reconcile the 1st person experience with the 3rd person description of science as dual views of the same thing? That is, suppose it is possible to follow all the laws of physics and have free will…

# II. Dualities

Dualities are common in science. In the field of linear programming (LP), for example, a mathematical optimization problem involving 20 variables and 10 constraints in its primal form can be transformed into its dual form consisting of 10 variables and 20 constraints. Sometimes LP problems are easier to solve in their dual rather than primal form. Since the two forms are dual to each other, if you solve one you solve the other. This technique is the secret behind Support Vector Machines in machine learning, and how they overcome the so-called “curse of dimensionality”. Certain machine learning problems involving, say, hundreds of thousands of input variables (e.g. pixels), but far fewer examples, can be solved much more easily in their dual form.
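To make the primal/dual relationship concrete, here is a minimal Python sketch using a classic textbook LP (the specific numbers are illustrative, not from the essay). The key fact it demonstrates: any feasible dual solution bounds the primal objective from above (weak duality), so a feasible pair with equal objective values certifies that both are optimal.

```python
# Toy LP duality check (illustrative numbers).
# Primal:  maximize 3*x1 + 5*x2
#          s.t.  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0
# Dual:    minimize 4*y1 + 12*y2 + 18*y3
#          s.t.  y1 + 3*y3 >= 3,  2*y2 + 2*y3 >= 5,  y >= 0

x = (2.0, 6.0)         # candidate primal solution
y = (0.0, 1.5, 1.0)    # candidate dual solution

# Check primal feasibility
assert x[0] <= 4 and 2 * x[1] <= 12 and 3 * x[0] + 2 * x[1] <= 18
# Check dual feasibility
assert y[0] + 3 * y[2] >= 3 and 2 * y[1] + 2 * y[2] >= 5

primal_obj = 3 * x[0] + 5 * x[1]              # = 36.0
dual_obj = 4 * y[0] + 12 * y[1] + 18 * y[2]   # = 36.0

# Equal objectives for feasible points certify that BOTH are optimal:
# solving one problem has solved the other.
assert primal_obj == dual_obj == 36.0
```

This is exactly the trick behind kernel SVMs: an optimization over hundreds of thousands of pixel weights becomes one over the (far fewer) training examples.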

Figure 3: Black hole diagram showing event horizon and singularity at the center – the subject of the AdS/CFT duality from UCSD Center for Astrophysics and Space Sciences

Einstein‘s theory of relativity is essentially a theory about dual descriptions of the world depending on one’s reference frame. A clock moving at near the speed of light will appear to slow to a crawl to a stationary observer, but an observer moving along in the same reference frame as the clock will see time pass normally. Reconciling these two reference frames is the work of the Lorentz transformations – and they always do reconcile. Today, one of the most exciting dualities in science is the AdS/CFT duality in theoretical physics. In this duality, one can use quantum field theory (Conformal Field Theory, or CFT) to describe the quantum entanglement of particles on the boundary surface of a spacetime – like at the event horizon of a black hole – or one can use gravity to describe spacetime curvature (Anti-de Sitter space, or AdS) in the interior (the Bulk) and get the exact same results (see this video by Leonard Susskind for a simple explanation). Coincidence? Quantum field theory and gravity are independent of each other, and physicists have been stumped for nearly one hundred years over how to unify them. But now we are getting our first hints of how they come together – it seems they offer descriptions of the Universe that are dual to each other. Two ways of describing the same thing. Two perspectives on the same phenomena. Certain problems in black hole research are easier to solve from the gravity perspective, and others are easier from the quantum field theory perspective. Since they are dual to each other, if you solve one you solve them both.
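The clock example above can be made quantitative with the Lorentz factor from special relativity. A minimal sketch (the 99%-of-light-speed figure is an illustrative choice):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor: how much a moving clock appears slowed
    to a stationary observer."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# In the clock's own reference frame, one second is one second (gamma = 1).
# To a stationary observer, a clock moving at 0.99c runs about 7x slow.
print(round(gamma(0.0), 2))       # 1.0
print(round(gamma(0.99 * C), 2))  # 7.09
```

Both observers' accounts differ, yet the Lorentz transformation reconciles them exactly, which is what makes this a duality rather than a contradiction.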

Descartes was the first to talk about a duality as it relates to the subjective 1st person experience (see Mind-Body Dualism). In his description, there was mind “stuff” (res cogitans) and matter “stuff” (res extensa). He believed that humans possessed this mind stuff but that animals were mindless machines, saying: “without minds to direct their bodily movements…animals must be regarded as unthinking, unfeeling, machines that move like clockwork.” In the argument we will follow in this essay, we are not looking at the 1st person/3rd person duality in the Cartesian sense, i.e. we are not talking about a dualism between a non-material spirit and matter as two separate things, but, rather, are looking at the problem as reconciling two views of the same underlying physical matter and processes, governed always by the laws of physics. One of these views is internal and subjective, the other external and objective, and together they represent, like the dualities in linear programming, Relativity Theory, and AdS/CFT, two ways of describing the same thing.

When we speak to others we have a subjective, 1st person experience of what it is like to be us speaking to them. We are perhaps happy, nervous, relieved, angry, or some complex mixture of these emotions and more. The conversation may be an easy one to have or a difficult one. Those listening to us have a 3rd person description of us speaking too. They may notice what we said, how we appeared while speaking. They may describe our speech as sad, excited, or some complex mixture of adjectives. If they attach electrodes to the neurons of our brain, they can provide an even more detailed objective description of us, but it still will be only that – an objective description. And, all this works vice-versa as well: our objective description of them, and, for each, their subjective experience of being themselves. These two descriptions of the same phenomenon, are dual to each other – they are two alternative ways of describing the same interaction. A subjective/objective, internal/external, 1st person/3rd person duality. Furthermore, we readily trust that others have this 1st person perspective even though we will never be able to experience it for ourselves. We will never be able to be them. Nor will we be able to prove that they are indeed having a subjective experience, yet it does not seem like a reach to believe it. In the same sense, we hope to make a plausible, circumstantial, and compelling case for free will even if we will never be able to prove it.

The problem of bridging the gap between the 1st and 3rd person perspectives is known as the “hard problem of consciousness”, a term coined by the philosopher David Chalmers and developed in his book “The Conscious Mind: In Search of a Fundamental Theory” (1996). However, here we are specifically not talking about the deep and complex subject of human consciousness. Human consciousness is characterized, I think we can all agree, by several things including: a 1st person perspective, choice and a sense of free will, a sense of self/self-awareness, a memory of the past, expectations of the future, qualia of the senses, emotions, a feeling of life, an awareness of death, and so on. Here, for starters, we want to approach a watered-down version of the hard problem of consciousness and only talk about the first two items in the list: a 1st person perspective and choice.

But, before we can dive in, we will need to talk about quantum mechanics and indeterminism. The scientific world outside of quantum mechanics is entirely deterministic. Take the case of the planets in our solar system: once we specify the initial coordinates, the momenta, and the gravitational constant, the future orbits of all those planets are precisely determined. That is because the General Theory of Relativity, which describes gravity, is entirely deterministic. At the macroscopic level, the level of electrical circuits and bar magnets, determinism applies to electromagnetism as well. The equations of chemistry are mostly deterministic too. But, when we look really closely at the natural world, down to the level of quantum mechanics, we see a world that is indeterminate, governed by probabilities. On macroscopic scales these quantum probabilities average out and so the world appears deterministic – for example the trajectory of a baseball, or a rocket, appears precisely predictable. But look up close at the atom and we can only calculate probabilities of finding, say, an electron in a certain location, and nothing more than probabilities. The picture of the atom is not one of electrons orbiting a nucleus analogous to our solar system. Rather, fluctuating quantum waves of the spherical harmonics (along with a radial component) describe the location of the electrons, and do so only probabilistically. It doesn’t appear there are any (local) hidden variables either (see Bell’s theorem for more). That is, it is not that our description of the atom is incomplete. It is simply that, at its most fundamental level, the Universe is indeterminate. Still, these probabilities are governed precisely by quantum mechanical wave functions and it is not clear, even with indeterminism, how free will might enter into the laws of physics. In fact, physicists have argued specifically that the indeterminism of quantum mechanics does not imply free will.
Their argument is based on the belief (a) that quantum fluctuations are too small to affect decision making in the brain, and (b) that there is no freedom in quantum mechanical laws – the outcomes of experiments are generated randomly as specified by the wave function in a probabilistic sense. However, recent theoretical and experimental developments in quantum biology, and a new perspective on randomness, which we will describe here, will open the door and allow free will to exist and to be understood in compliance with physical law. To proceed further, however, we first must introduce the reader to some of the concepts of quantum mechanics and how they may impact biological systems.

# III. About Quantum Mechanics & Biology

We’ll start with the assumption that the laws of physics are sufficiently correct as written. Sufficiently correct means that for purposes of reconciling to our 1st person experience we don’t need to discover any new physics, such as a 5th force or some secret new particle. The laws of physics may be refined in the future, sure, but for our purposes here we’ll assume we’ve got conceptually everything we need and see where we can go with it. This is particularly important regarding quantum mechanics. Quantum mechanics is one of the most successful, most validated, and most accurate theories ever. But it is an especially weird subject and nothing we can say here can make it intuitive or sensible. The hard truth is, when we look up close at the Universe, things look very different from the reality we are accustomed to. One strange concept, called quantum measurement, connects the observer with the observed in an intimate way. For instance, the angle from which we look at an electron affects the direction it is spinning. Without touching, or otherwise disturbing, the electron, the mere direction from which we view it affects the state it will be in! Now, the observer does not have to be a person; it can be any macroscopic device, such as a computer, a camera, a photomultiplier tube, something called a Stern-Gerlach apparatus, and much more.

To this day, there is no scientific consensus about how to interpret the strange nature of quantum mechanics. However, very recently, results have finally shed light on this problem. The paper “Understanding quantum measurement from the solution of dynamical models” by A. Allahverdyan et al. (2013) has shown how quantum measurement can be understood. The paper describes some very complex mathematics, yet a conceptually simple framework, involving quantum dynamical models, statistical mechanics, and “asymptotic cascades to pointer states”. It also describes how quantum measurements occur without any abrupt or time-irreversible collapse of the wave function. When the coupling between the measuring apparatus and the system is dominant, the system entangles with the measuring apparatus and then cascades to a pointer state registering the measurement; but when the coupling to the environment dominates, decoherence occurs (information about the system leaks into the environment). We won’t dwell on the lengthy mathematical details but encourage the interested reader to dive in, as this paper is tantamount to a sensible interpretation of quantum mechanics. For our purposes, it will be a sufficient approximation to follow the Copenhagen interpretation and view observation as causing the wave function of the observed to collapse (see the Born rule). The wave function will collapse to states dictated by the measurement device (called basis states). This strange point connecting the observer and the observed will turn out to be essential to our attempts to reconcile the 1st and 3rd person perspectives, so below we give a, hopefully, illustrative example.

Suppose we place an electron in a toy box through a door in the top as shown in (figure 4). The electron is spin “up” (spinning in a right-handed way about an axis as depicted by the arrow – the arrow points along the axis of rotation like an axis between the north and south poles of the Earth). Technically, electron spin is a little bit more complex than this description, but, for our purposes, this will suffice to illustrate the concepts involved (the interested reader can dive in deeper to electron spin here). Next, we close the door, and open another door on the front side of the box. Fifty percent of the time the electron will be found pointing toward us (out of the page), and the other fifty percent away from us. There is no way to determine definitively which direction (toward or away) the electron will be spinning, all we can calculate are the probabilities. Whichever side of the box we open, that will determine the axis about which the electron is spinning. If we open a side door, the electron will be found spinning about that axis – toward or away with equal probability. However, if we place the electron in the box through the top, close the door, then re-open the top door, the electron will still be in the same state we left it with near one-hundred-percent probability.

Figure 4: A spinning electron is inserted through the top of a box with spin pointing up (counter-clockwise rotation about the axis). The door is closed. A door on the front is then opened. The electron can only be found pointing toward the observer or away. In this case, it will do so with 50/50 probability. The side of the box that we open determines the axis of the electron – a strange aspect of quantum mechanics that intimately connects the observer with the observed.

Our toy box is a metaphor for a spinning electron in a magnetic field. In the laboratory, electrons may only have spin +1/2 $\hbar$ or -1/2 $\hbar$ (where $\hbar = h/(2\pi)$ and $h$ is Planck’s constant). Nothing in between is allowed by Nature. Practically speaking, the electron’s spin is measured by placing it in an external magnetic field. The electron’s spin causes it to produce a magnetic field of its own. The electron will always either align its own magnetic field with the external field, or it will be opposite to it. If opposite, the electron will emit a photon of light and flip to align, since this is a lower energy state (just like bar magnets will flip so that their magnetic fields align). This short video “Visualization of Quantum Physics” (2017) provides a graphic introduction to quantum mechanics, although it is talking about a particle moving freely in space, rather than the spin state of a particle.
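The toy-box behavior follows directly from the Born rule for a spin-1/2 state. Here is a minimal sketch (real-valued spinors suffice here, since no complex phases are involved): prepare spin-up along z, then "open a door" along an axis tilted by angle theta; the probability of finding the spin pointing along that axis is cos²(theta/2).

```python
import math

def prob_plus(theta):
    """Born-rule probability of finding spin '+' along an axis
    tilted by angle theta from the preparation axis (z)."""
    up = (1.0, 0.0)                                        # prepared |up> state
    plus_axis = (math.cos(theta / 2), math.sin(theta / 2)) # '+' state along theta
    amplitude = plus_axis[0] * up[0] + plus_axis[1] * up[1]  # inner product
    return amplitude ** 2

print(round(prob_plus(0.0), 3))          # 1.0 -- re-open the top door: state unchanged
print(round(prob_plus(math.pi / 2), 3))  # 0.5 -- open the front door: 50/50
print(round(prob_plus(math.pi), 3))      # 0.0 -- opposite axis: never '+'
```

Note how the choice of measurement axis (which door we open) fixes the possible outcomes, while the wave function fixes only their probabilities.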

Another strange aspect of quantum mechanics is entanglement. We can entangle electrons so that they become one system of electrons. In such an entangled system, we can know all there is to know about the system without knowing anything about the individual state of any particle within it (see a more detailed description here). The system of electrons has literally become one thing. Moreover, entanglement is the only phenomenon like this in all of physics; that is, it is the only means by which particles may become one system of particles larger than themselves. Furthermore, there is no theoretical limit on how many particles may be entangled together or how complex the system may be. Now, there are many ways to entangle electrons, but one way is to bring them close together so that they “measure” each other (so their magnetic coupling to one another is stronger than their coupling to the surrounding environment). When one electron measures the other, it will either find it spinning the same direction or opposite – just like any measurement must. An electron is not big enough to force a single outcome like we see when we open the toy box, so the two electrons will sit in a quantum superposition of states: they will both be spinning in the same direction and spinning opposite at the same time (we refer the reader back to this paper for a detailed mathematical description of what “big enough” means). They will remain entangled in a superposition until stronger external interactions, like those with the environment or macroscopic observers, cause decoherence or specifically measure the system. Generally, in the laboratory, this happens very fast – within femtoseconds (one millionth of one billionth of a second) – and is the reason most scientists are skeptical of quantum mechanics playing anything but a trivial role in biological systems. The paper “The Importance of Quantum Decoherence in Brain Processes” by Max Tegmark (1999) shows, for example, that whole neurons are far too big to exist in a superposition of states for biologically relevant timescales due to decoherence. Still, there is abundant evidence that quantum superpositions and entanglement exist in biological systems on smaller scales.
A comprehensive list can be found in the nascent field of quantum biology, for which the reader can find an excellent introduction in this book: “Life on the Edge: The Coming of Age of Quantum Biology” by J. Al-Khalili and J. McFadden (2014). The best-studied example of quantum effects in biology occurs in photosynthesis in the FMO complex of green sulfur bacteria, where quantum states are observed to persist for as long as picoseconds (one trillionth of a second), helping the organism convert light from the sun into chemical energy with remarkable speed and near-perfect efficiency.

Figure 5: Quantum biology: photosynthesis. Diagram of the FMO complex. Light excites electrons in an antenna. The quantum exciton then transfers through various proteins in the FMO complex to the reaction center to further photosynthesis. By OMM93 – Own work, CC BY-SA 4.0 via Wikipedia

Decoherence rates are highly dependent on the surrounding environment and the dynamics of the system. For example, if a system is really, really cold (~ absolute zero) and isolated, then entanglement can theoretically last on the order of seconds or longer. The dynamics are critical too – if something keeps cyclically pushing particles together, entanglement can, again in theory, last much longer. If the cycle repeats faster than decoherence can act, entanglement may, in theory, be sustained indefinitely. Interestingly, biomolecules in organisms are really buzzing, vibrating with periods ranging from nanoseconds down to femtoseconds. Compare these times to the picosecond decoherence times observed in photosynthesis and a plausible means by which quantum effects may persist in biological systems becomes apparent. The interested reader can find a mathematical description of this type of dynamic entanglement in these papers:

“Persistent Dynamic Entanglement from Classical Motion: How Bio-Molecular Machines can Generate Nontrivial Quantum States” by G. G. Guerreschi, J. Cai, S. Popescu, and H.J. Briegel (2012)

“Dynamic entanglement in oscillating molecules and potential biological implications” by J. Cai, S. Popescu, and H.J. Briegel (2010)

“Generation and propagation of entanglement in driven coupled-qubit systems” by J. Li and G.S. Paraoanu (2010)

“Steady-state entanglement in open and noisy quantum systems” by L. Hartmann, W. Dür, and H.J. Briegel (2005).

They describe a theory by which entanglement can be sustained in noisy, warm environments in which no static entanglement could survive. In other words, the way physicists try to build quantum computers today – by completely shielding the qubits from the outside world – probably won’t work in biological systems, but it isn’t the only way to go about it. This point is critical for the reconciliation we propose in this essay. We should note that this theory of dynamic entanglement has not been experimentally verified, but no one, so far, has done the experiments to look for it. For the rest of this essay, we will assume the theory pans out experimentally and that entanglement proves sustainable in biological systems.

Now, suppose we entangle two electrons so that their spins are pointing parallel (see here for more about how this is accomplished in practice). We can place them in two separate boxes as shown in (figure 6). When we open a door of either box, even if the boxes are at opposite ends of the galaxy, the electrons will be found to be spinning parallel to each other. The measurement of one instantaneously affects the state of the other no matter how far away it is. This is the property of quantum mechanics that Einstein labeled “spooky action at a distance”. Nonetheless, experiment after experiment has supported this strange property (see “Physicists address loophole in tests of Bell’s inequality using 600-year-old starlight”).

Figure 6: Two entangled electrons placed into two boxes through the top, separated by a galaxy, then opened on the side (or any side for that matter), will always be found spinning in the same direction -it could be either left or right, but A and B will always point the same way. Picture of Milky Way Galaxy here.

Once two electrons are entangled we can perform quantum operations on them to put the system into a superposition of four different states at once: (1) “A” up and “B” down, (2) “A” down and “B” up, (3) “A” up and “B” up, and (4) “A” down and “B” down. Entanglement is not limited to just electrons – photons (light), nuclei, molecules, and many kinds of quasiparticles, including macroscopic collections of billions of particles, can be entangled (see, for example, SQUIDs, phonons or solitons). To the extent these configurations have binary states (electrons: spin up or spin down; photons: right-handed or left-handed polarization; etc.) they may represent quantum bits, or qubits, as in a quantum computer (although analog quantum computing is just as plausible as a binary system). In the case of electrons, spin-up could represent a value of, say, “0” and spin-down a value of “1”, allowing powerful quantum computations. The power of such computations derives from the quantum computer’s ability to be in a superposition of many states at once, equal to $2^n$, where n is the number of qubits. So, the two-electron system can be in $2^2=4$ states, but a system of three hundred electrons could be in $2^{300}$ states at once – more states than there are atoms in the observable Universe. To imagine how this works, picture a computer that produces a clone of itself with each tick of the CPU clock. For a 1-GHz clock speed (typical), every billionth of a second the computer doubles the number of states it is in. So, after one nanosecond, we effectively have two computers working on the problem simultaneously. After two nanoseconds, four computers. After three nanoseconds, eight, etc. Entangled superpositions of this sort are the secret behind the legendary computing power of quantum computers (even though they barely exist yet!).
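To make the $2^n$ counting concrete, here is a minimal statevector sketch in Python/NumPy (our own illustration, not drawn from any referenced paper): applying a Hadamard gate to each of n qubits puts the register into an equal superposition of all $2^n$ basis states, and the vector needed to describe the system doubles in length with every qubit added.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: |0> -> (|0>+|1>)/sqrt(2)

def uniform_superposition(n):
    """Statevector of n qubits after a Hadamard on each: 2**n equal amplitudes."""
    state = np.array([1.0])                            # empty register
    for _ in range(n):
        state = np.kron(state, H @ np.array([1.0, 0.0]))  # tensor in one more qubit
    return state

psi = uniform_superposition(3)
print(len(psi))                                  # 2**3 = 8 basis states at once
print(np.allclose(psi, 1 / np.sqrt(8)))          # all amplitudes equal
print(np.isclose(np.sum(np.abs(psi) ** 2), 1))   # Born probabilities sum to 1
```

The vector length, not the qubit count, is what explodes: simulating 300 qubits this way would need a vector of $2^{300}$ entries, which is why classical simulation of large entangled systems is hopeless.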

However, to take advantage of all these superpositions, we must find a clever way to make them interfere with each other. We can’t look at the result until the very end of the computation, because when we do the superposition will collapse. If we can get the states to interfere in the right way, we can leave the system in a superposition concentrated, with near-100% probability, on a state that corresponds to a solution to our problem. This is the case with a special algorithm known as Shor’s algorithm, which solves in polynomial time a problem believed to be intractable classically. NP problems are those whose solutions can be verified in polynomial time on a classical computer, though for many of them the best known classical solution methods take exponential time (see more on P vs NP here, and here). Shor’s algorithm uses something called the quantum Fourier transform to achieve its speed-up and is used to factor large integers. Factoring is an important problem in cryptography: its difficulty underpins the encryption protecting substantially all the traffic over the internet. For example, factoring a 500-digit integer would take longer than the age of the Universe on a classical computer, but less than two seconds on a quantum computer – hat tip to John Preskill for the stats; see his great introductory video lecture on quantum computing here (2016). Performing such computations is thought to require about 1,000 qubits. Other examples of NP problems include the infamous traveling salesman problem. It is an open problem whether all NP problems can be solved in polynomial time on a quantum computer; the vast majority of physicists and computer scientists think this is unlikely.

Figure 7: Quantum subroutine in Shor’s algorithm By Bender2k14 – Own work. Created in LaTeX using Q-circuit CC BY-SA 4.0 via Wikipedia
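The number-theoretic core that the quantum Fourier transform accelerates can be sketched classically. The toy Python below (our own brute-force sketch, so it enjoys none of the quantum speed-up) finds the period r of a mod N and then extracts a factor from gcd(a^(r/2) ± 1, N), which is the classical post-processing step of Shor’s algorithm:

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N) -- the period Shor's circuit finds fast."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Classical sketch of Shor's post-processing: a factor of N from the order of a."""
    if gcd(a, N) != 1:
        return gcd(a, N)          # lucky guess: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None               # odd period: try a different a
    y = pow(a, r // 2, N)
    f = gcd(y - 1, N)
    return f if f not in (1, N) else None

print(shor_factor(15, 7))  # order of 7 mod 15 is 4; gcd(7**2 - 1, 15) = 3
```

The brute-force `order` loop is the exponential bottleneck; Shor’s quantum circuit replaces exactly that step, finding r with the quantum Fourier transform in polynomial time.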

It is time to note, though, that there is a strange, unproven, powerful possibility lurking in quantum mechanics. All the quantum computers designed to date utilize something called the Linear Schrödinger Equation (LSE) – a ubiquitous form in quantum mechanics. That means the qubits are contained by forces external to the qubit system itself, like an external magnetic field. A much rarer, and controversial, situation in quantum physics involves the Non-Linear Schrödinger Equation (NLSE), which arises only when the quantum state wave functions interact with themselves. NLSE systems could potentially have a profound impact on quantum computation: if the time evolution of the Schrödinger equation turns out to be nonlinear, they could theoretically solve NP-complete problems in polynomial time – and, therefore, all NP problems in polynomial time. The interested reader can dive in here: “Nonlinear Quantum Mechanics Implies Polynomial-Time Solution for NP-Complete and #P Problems” by D. Abrams, S. Lloyd (1998), and in chapter 5 here: “NP-Complete Problems and Physical Reality” by S. Aaronson (2005). So far, such a system has never been implemented, nor has even a theoretical design been proven, though the idea continues to be debated. Here is the latest on the subject: “Nonlinear Optical Quantum-Computing Scheme Makes a Comeback” by D. Brod and J. Combes (2016).

The NLSE does appear in the study of a number of special physical systems, for example, in Bose-Einstein condensates (BEC) (see this paper on building a quantum transistor using the NLSE in a BEC), in fiber optic systems, and also, especially interestingly, in biology: the Davydov alpha-helix soliton, a complex quasiparticle that transports energy up and down a chain of protein molecules, and Fröhlich condensates, which have recently been observed experimentally in biomolecules upon exposure to THz radiation (see Lundholm, et al. 2015). The interested reader can dive into Weinberg’s original paper (1989) on the NLS equations here for more, and further enhancements of the theory here and here that address some difficulties. Also, see here for the development of the NLSE using Riccati equations.

To get some hands-on experience with basic quantum computers, IBM has a 5-qubit machine, albeit strictly implementing a linear Schrödinger equation, online right now that is freely available to everyone. Go to www.ibmexperience.com to learn more.

Figure 8: Examples of solutions to Non-Linear Schrödinger Equations. Absolute value of the complex envelope of exact analytical breather solutions of the Nonlinear Schrödinger (NLS) equation in nondimensional form. (A) The Akhmediev breather; (B) the Peregrine breather; (C) the Kuznetsov–Ma breather. From: Miguel Onorato, Davide Proment, Günther Clauss and Marco Klein (2013) “Rogue Waves: From Nonlinear Schrödinger Breather Solutions to Sea-Keeping Test”. PLoS One 8(2): e54629, doi: 10.1371/journal.pone.0054629, PMC 3566097 via Wikipedia

# IV. The Origin of Choice and the First/Third Person Duality

Getting back to free will, a natural first question is then: when and how does the 1st person perspective arise? We know we have it, but at what level does this perspective emerge? Does a single particle have it? An atom? A molecule? A large configuration of molecules? Only a certain configuration of molecules? A cell? Only a special configuration of cells? All of these seem somewhat arbitrary and it feels implausible to suggest that the 1st person perspective is not present in any level of matter until the final, critical particle or molecule is added, and then, suddenly, a whole new 1st person perspective emerges. There are formal arguments against free will as an emergent phenomenon from the field of philosophy detailed here, for example, supervenience, but these don’t seem any less baffling than emergence itself. Besides, sleep can be induced in humans with certain anesthetics, radically altering their 1st person experience temporarily. This suggests that whatever does give rise to the 1st person perspective, it is dynamic, transitory, and depends on the environment. So, where to start?

This paper, “The Strong Free Will Theorem” by J. Conway and S. Kochen (2008), describes a rigorous theorem of physics called the free will theorem, the meaning of which the authors summarize as follows: “if people have free will then so must elementary particles”. Perhaps a more precise description of the paper’s result is that if people’s actions are not determined solely by their histories, then elementary particles are indeterminate as well. Scott Aaronson in “The Ghost in the Quantum Turing Machine” (2013) describes the theorem more cynically: “for the indeterminism that’s relevant here is ‘only’ probabilistic: indeed, (people and elementary particles) could be replaced by simple dice-throwing or quantum-state-measuring automata without affecting the theorem at all”. Suffice it to say, we won’t dwell on the arguments of the theorem here (if you’d like to dive deeper check out this video here, or this talk by J. Conway here). Instead, elementary particles, like the electron, seem like as good a starting place as any. After all, they are the most fundamental things in the Universe, and, therefore, feel like a less arbitrary place to start. So, we’ll jump on Freeman Dyson’s bandwagon (opening quotation) and run with it and see where it takes us…

In 1922, in the Stern-Gerlach experiment, silver atoms were fired through a magnetic field as depicted in (figure 9). The magnetic field deflected the atoms upward or downward depending on the direction the silver atoms were spinning. Classical physics expected a continuous distribution on a detector on the far side of the apparatus (see -4- in figure 9) because the silver atoms could be spinning in any direction. The distribution that was actually seen was two singular points (see -5- in figure 9) proving that the silver atom’s spin was quantized. We’ve already said that the electron’s spin is quantized and if we perform this experiment on electrons instead of silver atoms we will see the same result – all the electrons will end up at one of two spots depending on whether their spin is pointing up or down. Just like in our toy box metaphor, this Stern-Gerlach apparatus is one way of measuring the spin of the electron. Now, physicists know exactly how to calculate the quantum mechanics of this problem and it says we will see the electron end up at the upper point with fifty percent probability, and at the lower point with fifty percent probability. But, that is all it will say. There is no way to know more about the electron’s spin than this probability – there is no way to predict it. It is truly random.

Figure 9: Stern–Gerlach experiment: silver atoms travel through an inhomogeneous magnetic field and are deflected up or down depending on their spin. 1: furnace. 2: beam of silver atoms. 3: inhomogeneous magnetic field. 4: expected result. 5: what was actually observed. Image and caption by Tatoute at Wikipedia.

So, now, let’s just suppose the 1st person perspective is present in the electron and it does make a choice. From the electron’s perspective, it feels the quantum probability distribution manifest as a matter of preference. In this case, since the quantum probabilities are equal, it is indifferent, it prefers both choices equally and so it selects one with equal probability. To the physicist, the electron follows the laws of quantum mechanics exactly, the wave function collapses to one of two outcomes that are equally probable as given by the rules of quantum mechanics. Both the 3rd person and 1st person descriptions of the event are present, valid, and equivalent – as, obviously, they must be if we are going to claim the electron has a 1st person perspective dual to quantum mechanics. Now, notice that we forced this choice (measurement) upon the electron. It did not have any say in the matter, we just fired it through the Stern-Gerlach apparatus. Nor does it have any memory of the measurement afterward. After our experiment, the electron goes on its merry way constantly being measured, forced to make choices by the surrounding environment, and having no means to retain a memory of those choices nor any future anticipation of choices to come. No past, no future, just moments of “now”. No understanding, no self-awareness, no consciousness as we know it, just repetitive, uncontrollable, forced choice. Practically speaking, we are talking about roughly nanoseconds before an average electron interacts with something again and is measured – these moments are very brief and fleeting indeed. In a subsequent section, we will explore in more detail what it is like to be an electron.

There is nothing magical about a fifty-fifty proposition by the way. The electron could have been prepared, prior to entering the Stern-Gerlach apparatus, in a superposition of 90% up and 10% down, or 70% down and 30% up, etc. Quantum mechanics predicts the outcome for the 3rd person view precisely, as given by the skewed wave function, and the electron, in the 1st person, experiences the higher probability choice as feeling more compelling – like ice cream versus spinach – and chooses accordingly. Measurement is still random in 3rd person and free choice in the 1st. Measurement and choice are dual to each other. Dual descriptions of the same thing.
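This dual description is easy to simulate from the 3rd person side. A minimal sketch (our own, using the fifty-fifty and 90/10 preparations from the text): sample many Stern-Gerlach outcomes from the Born-rule probability $|\alpha_{up}|^2$ and the observed frequencies converge on the stated preferences.

```python
import random

def spin_up_fraction(p_up, trials=100_000):
    """Empirical fraction of 'up' outcomes when P(up) = |alpha_up|**2 = p_up."""
    ups = sum(random.random() < p_up for _ in range(trials))
    return ups / trials

random.seed(0)
print(spin_up_fraction(0.5))   # close to 0.50: the indifferent electron
print(spin_up_fraction(0.9))   # close to 0.90: a strong preference for "up"
```

No single shot is predictable; only the long-run frequencies are pinned down, which is exactly the sense in which the 3rd person law and the 1st person preference agree.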

Figure 10: A single electron fired through a Stern-Gerlach apparatus. Electrons enter the apparatus moving right to left in a quantum superposition of states – each electron is both spin “up” and spin “down”. The electrons interact with the magnetic field of the Stern-Gerlach apparatus and are projected onto a detector. Those with spin “up” end up at point “A”, with spin “down”, at point “B”. The electron remains in a superposition of “up” and “down” all the way up until measurement at “A” or “B”.

Now, what happens if we run this experiment with a system of 3 entangled electrons connected in some way (a 3 electron chain) as shown in (figure 11)? Several things change in very interesting ways (we ignore the technical details and practical challenges in doing this for now). First, there are four possible outcomes instead of just two: (1) we can have all 3 spins pointing up, (2) two spins up and one spin down, (3) two spins down and one spin up, or (4) all three spins pointing down. Incidentally, states (1) and (4) form something known in quantum computing as a GHZ state (Greenberger-Horne-Zeilinger state). Also, note there are three different combinations of forming state (2) and state (3), but just one way to form states (1) and (4). Second, the system may exist in a superposition of up to eight different states at once, $2^3=8$, so our little electron system is beginning to hint of a basic quantum computer. Third, even after measurement, unless all the spins are pointing up (point A in figure 11) or all the spins are down (point D in figure 11), the system is still left in a superposition of states. There are three different states it could be in assuming it ended up at point B (figure 11), and three different states it could be in if it ended up at point C (figure 11). Measurement does not destroy the system. It reduces the superposition, certainly, but the entanglement persists. Fourth, if the electrons end up at B or C (in figure 11), the superposition retains something that could serve as elementary memory: all the remaining three states have a symmetry to them – the sum of their spins is +1/2 (at B), or -1/2 (at C). If the system is subjected to the same measurement again, it will end up at the same spot – a memory of the prior measurement results is stored in the superposition. The system is in an eigenstate of the measurement with eigenvalue +1/2 (at B), or -1/2 (at C). 
For example, looking at (figure 11), all three states of the electron system, if it ends up at B, have two spins up and one spin down. Each up spin is +1/2 and each down spin is -1/2, so the sum for each state is +1/2 (e.g. +½ + ½ − ½ = +½). Last, because we measured an aggregate property of the system (its total spin), the entanglement is not broken – it remains one system and can be measured again. We will come back to how this quantum memory can be converted into a more permanent memory in subsequent sections. Other properties of the system can, in some cases, be measured too without disrupting it. This can happen if, in quantum-speak, the measurement operators commute (the interested reader can dive deeper into quantum operators here). For example, we can measure the momentum of an electron along the x-axis without disrupting the momentum along the y-axis. Different directions of spin can’t be measured simultaneously, however; but some quasiparticles, which we will come to later, have infinitely many conserved quantities and can therefore be measured in infinitely many ways.
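The counting in this section is easy to verify by brute force. A short enumeration (our own check, writing up as +1/2 and down as -1/2) reproduces the four outcomes A–D and the number of basis states left in the superposition at each one:

```python
from itertools import product

# All 2**3 = 8 basis states of three spins, each +1/2 (up) or -1/2 (down).
states = list(product((+0.5, -0.5), repeat=3))

outcomes = {}                       # total spin -> basis states with that total
for s in states:
    outcomes.setdefault(sum(s), []).append(s)

for total in sorted(outcomes, reverse=True):
    print(total, len(outcomes[total]))
# +1.5 and -1.5 (points A and D) are each reachable one way;
# +0.5 and -0.5 (points B and C) are each reachable three ways,
# so a three-state superposition survives those measurements.
```

Measuring only the total spin projects onto one of these four groups, which is why the entanglement and a reduced superposition survive at B and C.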

In the 1st person, the entangled system chooses from four different subjective preferences – say ice cream, pretzels, brussels sprouts, or spinach. Some choices may be more likely than others, with the system’s subjective preference for each equal to the quantum probability of that outcome. A rudimentary memory of its choices persists. If subjected to the same choices again, the answer will be the same. And the system’s identity as one thing persists beyond making a choice – it lives to choose again!

Figure 11: A chain of three entangled electrons, represented as
$(e^-)(e^-)(e^-)$, is fired through the Stern-Gerlach apparatus. (3rd person description): The 3-electron system going into the apparatus may be in a superposition of eight states simultaneously. The magnetic field will deflect the electrons, and they will be detected (measured) at one of four locations, labeled A, B, C, or D, depending on the spin state. The amplitude of each state is a coefficient $\alpha_k$, with the probability of that state equal to $|\alpha_k|^2$ and $\sum_k |\alpha_k|^2 = 1$. States with greater net spins are deflected more severely. If deflected to locations C or B, the electrons still persist in a superposition, albeit a reduced one. In any case the system remains entangled. (1st person description): The 3-electron system must make a choice upon traveling through the apparatus. The appeal of each choice is equivalent to the quantum probability of finding the system in that state. After its choice, the electron system retains a memory of that choice and its identity as “one thing” persists.

As the number of entangled electrons grows larger, say to 100, or 1000, suddenly our system takes on the appearance of a powerful quantum computer – it could be in $2^{100}$ or $2^{1000}$ different states simultaneously. The system of electrons still does not get to decide when to make a choice – we continue to force choice upon it by firing it through the apparatus – but the number of choices available to it, and the vastness of the superposition that can survive a choice, increase substantially. For example, say a system of 100 electrons was found to have a total spin of 0 (50 up, 50 down). There are $\binom{100}{50} \approx 10^{29}$ ways that 100 electrons, each having spin +1/2 or -1/2, can sum to a net spin of 0, so the system could still be in a superposition of this many states even after measurement. Also, the system could potentially choose from an array of 101 different possibilities (the total spin can range from -50 to +50 in integer steps). The degree of freedom of choice has increased substantially – though still nothing resembling our free will – and a complex memory reflecting the results of those choices has emerged.
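These combinatorial claims can be checked directly. A quick count (our own), in which the ~$10^{29}$ figure is the central binomial coefficient $\binom{100}{50}$ for the balanced 50-up/50-down case:

```python
import math

# Ways 100 spins of +1/2 or -1/2 can cancel to a net spin of zero:
# choose which 50 of the 100 spins point up.
ways = math.comb(100, 50)
print(f"{ways:.3e}")               # about 1.009e+29 microstates survive the measurement

# With k spins up the total is k - 50, so totals run from -50 to +50 in integer steps.
totals = sorted(k - 50 for k in range(101))
print(len(totals))                 # 101 distinct measurable total spins
```

The surviving superposition is largest for the balanced outcome and shrinks toward the extremes: there is only one way to realize all-up or all-down.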

# V. Crisscross Entanglement and the Nonlinear Schrödinger Equation

Now, what happens if we have so many entangled electrons that the chain of particles crosses itself (as shown in figure 12)? That is to say, suppose the entanglement is sufficiently extensive that we no longer need the Stern-Gerlach apparatus. The tail of the chain of entangled particles (labeled “B” in figure 12) can effectively serve as the macroscopic magnet in the Stern-Gerlach apparatus while the front (labeled “A” in figure 12) acts as our chain of electrons passing through. Some especially interesting things now begin to happen.

Figure 12: System of entangled electrons that crosses itself. The region of the entangled chain near end “B” acts like the Stern-Gerlach magnetic field, and the electrons near end “A” play the traditional role of spinning particles in the experiment. The entire chain is one entangled system so it can simultaneously be in states forming various virtual Stern-Gerlach apparatuses as well as different system states. For example, if all the electron spins near end “B” are aligned pointing up, the magnetic field will be pointing up as in a traditional Stern-Gerlach apparatus. However, if the electron spins near end “B” are an even mixture of up and down spins then no Stern-Gerlach magnetic field would be present and the electron chain at end “A” would pass straight on through.

In the 3rd person quantum description the whole chain is entangled as one system and so will be described by a single wave function. We can expect all the features of the three-electron case, elucidated earlier, to still hold – such as robust quantum computing power, a potentially vast superposition of states, and a means to store the results of measurements in a form of memory while still maintaining entanglement – but, additionally, a complex non-linearity now emerges. The wave function interacts with itself. That is, the end of the entangled chain at “A” (in figure 12) will interact with a magnetic field induced by the chain at end “B” (in figure 12). This means that an NLSE is created, and, if the time dependence, too, is nonlinear, then the system is not functioning merely as a typical quantum computer but potentially has the power to solve any NP problem in polynomial time.

The system, in general, may exist in a vast superposition of states, but two broad categories of states are of special interest for characterizing the system. First, those states where substantially all the electrons near end “B” have spin up, and second, those states where the electron spins near end “B” are evenly mixed up and down. In the former case, illustrated in (figure 13 – top), the electron chain at end “A” experiences a magnetic field generated by end “B” similar to the external magnetic field of the Stern-Gerlach apparatus. With the magnetic field turned “on” in this case, the beam at end “A” is split depending on its spin state. In the latter case, illustrated in (figure 13 – bottom), the electron chain at end “A” experiences no magnetic field because the electron spins at end “B” mostly cancel each other out. The chain at end “A” is not split because the magnetic field is effectively turned “off”. Each state has a coefficient, $\alpha_k$, and so the likelihood that the system will have the magnetic field turned “on”, as in the first category, will be related to the aggregate probability associated with these coefficients, and, likewise, for the latter category with the field “off”. Beyond these two categories there are other interesting states the system could be in – for instance with substantially all spins pointing down, effectively creating an inverted Stern-Gerlach apparatus. We merely call out these two as being illustrative of the fact that measurement of this system reveals a complex nonlinear dependence on itself. Measurement of the system does not mean just measuring the state of the electron spins near end “A”, it means measuring whether the system induced a magnetic field or not as well.
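As a toy numerical sketch of this category bookkeeping (entirely our own construction, with made-up random amplitudes over just six “end B” spins), the likelihood of each configuration of the virtual apparatus is the aggregate Born weight $\sum_k |\alpha_k|^2$ over the corresponding category of basis states:

```python
import numpy as np

rng = np.random.default_rng(1)

n_b = 6                                    # toy count of electrons near end "B"
dim = 2 ** n_b
alpha = rng.normal(size=dim) + 1j * rng.normal(size=dim)
alpha /= np.linalg.norm(alpha)             # normalize: sum |alpha_k|^2 = 1

def ups(k):
    """Number of up spins in basis state k (bit set = spin up)."""
    return bin(k).count("1")

# Aggregate Born weight of the two illustrative categories of end-"B" states.
p_field_on = sum(abs(alpha[k]) ** 2 for k in range(dim) if ups(k) == n_b)        # all up
p_field_off = sum(abs(alpha[k]) ** 2 for k in range(dim) if ups(k) == n_b // 2)  # balanced

print(f"P(field on)  = {p_field_on:.4f}")
print(f"P(field off) = {p_field_off:.4f}")
```

The remaining probability mass sits in all the other end-“B” configurations (partial fields, the inverted apparatus, and so on), which is the sense in which these two categories are merely illustrative.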

Figure 13: Two broad categories of the electron system are remarkable. (Top) The category of states that have substantially all of the spins at end B pointing up – more or less replicating the magnetic field of the Stern-Gerlach apparatus, and (bottom) the category of states that have end B in a roughly evenly mixed distribution of spins up and down – effectively producing no net magnetic field.

Especially interesting, however, is the 1st person perspective: the system no longer has choice forced upon it like the prior cases we’ve examined. In this configuration, the system can choose whether to subject itself to making a choice or not. That is, by choosing the state of the electrons near end “B”, the system is choosing whether to use a Stern-Gerlach-like magnetic field to split itself at end “A”. Something resembling real free will has emerged: the ability not just to make a choice, but to choose what choices to make, and, even, whether to make a choice at all! To the system, each state in its quantum superposition feels preferable in relation to the $\alpha_k$ coefficient of that state. The system prefers to turn “on” the magnetic field and “think” about some choice in direct relation to the aggregate $\alpha_k$ coefficients that apply to the “all spins up” category of states as shown in (figure 13 – top). Similarly, the system prefers to ignore “thinking” about a particular choice in relation to the aggregate $\alpha_k$ that correspond to the “spins evenly mixed” category of states as shown in (figure 13 – bottom). The system at once follows the laws of quantum mechanics, and has the freedom to choose what choices to make at the same time.

Figure 13 B: Breather interactions of the NLSE from “Breather interactions, higher-order rogue waves and nonlinear tunneling for a derivative nonlinear Schrödinger equation in inhomogeneous nonlinear optics and plasmas” by L. Wang et al.

# VI. The Mind’s Eye and Free Will

Quantum probabilities always manifest themselves to you as preferences. You are more likely to direct your thoughts to those states with higher $\alpha_k$ state coefficients, that is, you are more likely to direct your thoughts to those things you prefer. You freely choose what to think about. And, once you’ve chosen where to direct your thoughts, the choices you make there, too, follow quantum probabilities corresponding to your preferences. So, your will is free and indeterminant, but it is constrained to follow the probabilities of quantum mechanics – you are constrained to probably do what you want and need to do.

Is this truly free will, though? Descartes famously said: “the will is by its nature so free that it can never be constrained“. But, if we examine our will certain things are very hard to do. For example, it would be very hard to, on a whim, stab ourselves in the gut with a knife. Search your feelings, do you think you can even will yourself to do that? I have a very strong preference to not do this action. The quantum coefficients there are very small. Even though my will is free it is probabilistically constrained. On the less dramatic side, I have a very hard time resisting chocolate. My best strategy is to not even think about eating chocolate. I have made this mistake in the past and felt terrible later. I have learned over time, and, relying on my ability to store long-term memories, my quantum probabilities have adjusted to avoid this trap. Nowadays, I am aware of the consequences of my choices, my preferences have adjusted, and I don’t “go there”. This illustrates how our preferences can be recursively affected. Furthermore, the more I think about chocolate the more tempting it becomes – the mere act of thinking about something can recursively alter the state we are in, change our preferences. The quantum state coefficients may be altered the more and more we think about something. And, once I eat one bite, it becomes almost impossible to not eat more. My choices feel constrained probabilistically to my preferences which are dual to the quantum state coefficients.

Figure 14: Artist’s illustration of the mind’s eye. The focus of our thoughts can feel like a third eye. From Psychology Today here.

“You can choose a ready guide

In some celestial voice

If you choose not to decide

You still have made a choice

You can choose from phantom fears

And kindness that can kill

I will choose a path that’s clear

I will choose free will.”

– the song Free Will by Rush (1980)

Of course, thinking about the prospect of going to dinner, and actually doing it are two different things. If we make too many weak measurements too quickly we will ascertain the state of the system definitively and collapse the wave function – a strong measurement. If that’s what you want to do – if you choose to push the button – you focus on it sufficiently to activate it. You make enough weak measurements of yourself to reveal sufficient information about yourself to push the button, you make a definitive choice. You make a choice in the 1st person to push the button, and a measurement is made of you in the 3rd person by the button. Then, this button triggers a whole chain of events, including pressing other buttons, related to going out to dinner.

A great question naturally arises: can you “game the system”? In other words, can you do what you don’t want to do? The mere act of thinking about this, however, changes the game. You are no longer just thinking about whether you want to go out for dinner, but are now considering a much more abstract topic. Indeed, you are now projecting yourself onto a whole different button – a “game the system” button!

“Free will is neither fate, nor chance. In some unfathomable way it partakes of both.” – Martin Gardner. Hat tip here for the quote

# VII. Making Sense of Experimental Results in Neuroscience

We have already seen that neurons themselves are too big to sustain quantum entanglement (see “The Importance of Quantum Decoherence in Brain Processes” by M. Tegmark (1999)), but scientists have suggested other theories involving much smaller mechanisms that may support quantum entangled states in the brain. One theory involves ion channels (see “Ion Channels: Structure and Function“). These channels allow sodium, calcium, potassium or chloride ions to flow into and out of neurons throughout the nervous system. They are only about the diameter of a single ion, even though millions of ions per second pass through them, and they regulate very precisely the ratio of sodium to potassium ions they allow through. Ion channels are small enough to be susceptible to quantum effects, yet they can still influence neuron firing, suggesting a conceivable way that quantum effects could be magnified and thereby manifested at a macroscopic level (a button press!). The process of thinking about something, exploring our preferences, could affect these ion channels through weak measurement and result, through amplification, in increased neuron firing rates, causing the formation of the readiness potential (RP) witnessed by Libet prior to motor activation. See, for example, this paper which shows how preference is related to the RP: “The LRP (lateralized readiness potential) is capable of measuring preparatory motor activity underlying the dynamic accumulation of subjective preference in the premotor cortex” (2014). A choice, or button press, occurs when ion channels are affected even more significantly and increase neuron firing rates enough to initiate action. In the paper “The Speed of Free Will” by T. Horowitz, J. Wolfe et al. (2009), the authors compare the time required for voluntary shifts of attention versus task-driven shifts (i.e. shifts prompted by an external stimulus). The voluntary shifts occur more slowly, suggesting a speed of free will on the order of 100-200 milliseconds.
This is consistent with the framework we describe and indicates the latency for quantum effects to be amplified. Still, the quantum entangled system discussed here would require a vast network of ion channels to all be entangled. How could ion channels in different neurons – vast distances by the standards of quantum entanglement – be entangled together?

In quantum networking, if we have a photon source that produces entangled photons (an EPR source) and we send one photon to lab A and another to lab B, and then let them interact with qubits in those labs, we can entangle the qubits at A with the qubits at B without their ever coming into contact with one another – even if labs A and B are many kilometers apart. After the photons interact with the qubits, the photons can be measured, discarded, or whatever – the entanglement between the qubits will remain. We know biological decoherence rates are very fast, picosecond-fast, but if continual absorption of entangled photons from an EPR source by the ion channels cyclically reestablishes entanglement, à la a biological quantum network, it could allow entanglement to persist. Since biomolecules such as DNA can absorb, down-convert, and reemit photons on time scales ranging from picoseconds to femtoseconds, they may be a possible EPR source (see here for details on internal conversion). Photons that interact with the vibrational frequencies of biomolecules, such as THz photons, are candidates (see “Observation of coherent delocalized phonon-like modes in DNA under physiological conditions” by M. González-Jiménez et al. (2016)).
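The entanglement-swapping trick described here can be checked numerically. The following is a generic textbook calculation (not a model of any biological mechanism): two Bell pairs are prepared, a joint Bell measurement is made on the two middle qubits, and the two outer qubits – which never interacted – come out entangled.

```python
import numpy as np

# Sketch of entanglement swapping, the quantum-repeater trick described above.
# Qubit labels and layout are ours, chosen to mirror Figure 15.
def bell_phi_plus():
    return np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

# Full 4-qubit state: pair (1,2) entangled, pair (3,4) entangled, no link yet.
psi = np.kron(bell_phi_plus(), bell_phi_plus()).reshape(2, 2, 2, 2)

# Joint (Bell) measurement on the two middle qubits 2 and 3: project onto
# |Phi+> and renormalize -- this is what the repeater in the middle does.
phi = bell_phi_plus().reshape(2, 2)
outer = np.einsum('abcd,bc->ad', psi, phi.conj())    # amplitudes of qubits 1,4
prob = np.sum(np.abs(outer) ** 2)                    # this outcome occurs 1/4 of the time
outer /= np.sqrt(prob)

# Qubits 1 and 4 never interacted, yet they now form a Bell pair themselves.
print(np.round(outer, 3))    # ~ [[0.707, 0], [0, 0.707]], i.e. (|00>+|11>)/sqrt(2)
print(round(prob, 3))        # 0.25
```

The other three Bell outcomes leave qubits 1 and 4 in the other Bell states, which is why the repeater must report its measurement result classically.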

Figure 15: A diagram of a quantum network from Centre for Quantum Computation & Communication Technology. EPR sources at either end are sources of entangled qubits where A&B and C&D are entangled. The joint measurement of B & C occurs at the quantum repeater in the middle entangling A & D at a distance.

There has been much study of the electromagnetic (EM) field of the brain – these “brain waves” are what we see in EEG readouts like Libet’s. The source of this field has never been fully understood, however. There are many theories that relate the brain’s EM field to consciousness, for example the CEMI (Conscious ElectroMagnetic Information) theory. In this theory, the EM field interacts with the ion channels to synchronize neuron firings, while the neuron firings produce changing electric and magnetic fields and therefore reciprocally affect the EM field. Another interesting theory of quantum effects in the brain has been proposed by S. Hameroff and R. Penrose in their Orch-OR (Orchestrated Objective Reduction) theory of consciousness. In this theory, tiny microtubules, which form the cytoskeleton of all cells in the body but are particularly prevalent in neurons (~ $10^9$ per neuron), support quantum entanglement. Microtubules have been shown in the laboratory to support certain kinds of quantum states known as topological qubits (see for example: “Discovery of quantum vibrations in ‘microtubules’ corroborates theory of consciousness“). Microtubules in one cell are theorized to communicate with microtubules in neighboring cells through gap junctions, although quantum effects across junctions have not been experimentally verified. Perhaps a mechanism involving an EPR photon source is involved here as well, for example photon-quasiparticle interaction between neighboring microtubules. Decoherence times in microtubules, like those in ion channels, also tend to be on the order of picoseconds, which, again, is long enough that entanglement could be sustained if it were being cyclically refreshed at rates faster than that.
Especially suggestive is experimental evidence that anesthetics, known to cause patients to sleep and lose consciousness, chemically bind to sites along the microtubules (see for instance “Quantum Criticality in Life’s Proteins” (2015)). Also, evolutionarily, single-celled paramecia have been shown to exhibit learning capability even though these organisms are too simple to have neurons or a nervous system. They do, however, have a network of millions of microtubules comprising their cytoskeleton. Interestingly, ion channels, microtubules, and the Davydov alpha-helix all involve a biomolecular helical structure (as do many proteins and DNA itself), which can give rise to nonlinear quantum interactions and, therefore, quasiparticles such as the Davydov soliton. For instance, the quantum mechanism may involve lateral molecular oscillations (e.g. in hydrogen bonds) entangled with longitudinal phonons to generate an NLSE whose solutions are described by a propagating quasiparticle. Lastly, it may be that not one but all of the above mechanisms, and others, collectively form some complex NLSE-satisfying quasiparticle that gives rise to the mind’s eye.

This is not without precedent in biology: the mechanism that transports energy to the reaction center in photosynthesis has been shown to be another quasiparticle known as an exciton (an electron-hole pair). Solitons are “propagating pulses or solitary waves that maintain their shape and can pass through one another” (from here). Think of a chain of rogue waves for intuition. In such a system no individual electrons are necessarily moving throughout the brain as in a classical electrical system, but rather a composite quasiparticle described quantum mechanically by a collective wave function of entangled particles. Other examples of quasiparticle systems include Cooper pairs in superconductors, magnons (quanta of spin waves), polarons, and phonons (which are known to propagate up and down the double helix of DNA and are thought to direct the replication of DNA in what is known as a transcription bubble). There are all kinds of quasiparticles; a list is here.

Figure 16: Image of the alpha-helix structure ubiquitous in biological systems. Helical structures give rise to nonlinear Hamiltonians which, in turn, imply the Nonlinear Schrödinger equation. This has quasiparticle solutions – like the Davydov soliton that transports energy along the alpha-helix. The nonlinear Hamiltonian arises from transverse quantum states entangling with longitudinal phonon states. This nonlinearity of entangled particles (in the 3rd person description) is dual to the mind’s eye (in the 1st person, subjective description). This image is from here, image from Voet, Voet & Pratt 2013, Figure 6.7.

So, we have an entangled quasiparticle system in the 3rd person that is dual to our feeling of being One in the 1st person; moreover, the quantum entangled system interacts with the outside world by pushing buttons and being affected by sensors. The buttons could be neurons, via amplification of quantum effects. The sensors, too, could be neurons affecting the quantum system, or something much smaller like the ion channels. To make our bodies do anything or say anything, we need to push these buttons – just as Buddhist scholar Alan Wallace says, “the brain is the keyboard of the mind“. Libet’s experiments measure the action of the neurons, and the neurons are your keyboard and a critical resource for your mind, but you are not those neurons. And so, when these experiments are conducted, they measure you through the different ways you interact with the outside world, by pushing the “speak” buttons or the “finger press” buttons. The timing differences these experiments measure are just the timing differences between button presses. These experiments measure events correlated with each other in the cascade of neuron firings that produces action, but these events are not the original cause. The reporting of “conscious choice” by subjects is, itself, the reporting of a button press – the button to report the choice. Only when you sufficiently focus your quantum self to fire that neuron (or neurons) does the button get pressed. Only then does a full choice/measurement occur. The decision to override the RP is yet another button press – the negation of the “finger press” button. All deliberate actions start with a button press, but the original cause originates from your free will, from your mind’s eye, from the crisscross of a quantum entangled system.

There is evidence in psychology that the brain uses two types of memory: short-term and long-term. Short-term memory typically lasts on the order of 18 to 30 seconds, while long-term memory lasts much longer – potentially for the life of the organism. The evidence for these two distinct kinds of memory is based on case studies of patients with a condition called anterograde amnesia, as well as certain kinds of “distraction tasks” that seem to affect one or the other type of memory in isolation. Patients with anterograde amnesia tend to have working short-term memories but have difficulty forming long-term ones. In other words, they can retain memories for 30 seconds or so. Such a description of memory fits naturally with the quantum entangled system we have described above. In such a system, short-term memory is the information encoded by collapsing the superposition to an eigenstate of the measurement operation with the chosen/measured eigenvalue – as we described in the 3-electron case above. So, you start in a superposition of states and choose to think about where to go out for dinner. That choice leaves end “B” in an eigenstate, say with all the spins pointing up, focusing you onto the choice of where to go for dinner. Upon thinking about it, you decide, say, to go to restaurant XYZ. There again, that choice leaves you in an eigenstate – this time at end “A” – so that all states at end “A” have total spin equal to, say, +7/2, which corresponds to restaurant XYZ. Every time you make a choice, you are subjecting yourself to measurement and collapsing your state to an eigenstate of that measurement. You might still be, and probably are, in a vast superposition of states, but all the states have a symmetry to them – an aggregate property of each state, like the total spin, is the same.
That is short-term memory; but quantum states are difficult to preserve, even with redundancy and error correction, so an organism needs to encode this memory in something more stable, more macroscopic. This is where neurons and long-term memory come in. But how is this information transferred to long-term memory?
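The collapse-as-short-term-memory idea above can be sketched numerically. This is a standard projective-measurement calculation (the 3-spin system, the spin-z labels, and the random seed are our illustrative choices): measuring an aggregate property restricts the superposition to states sharing that property, and repeating the same measurement reproduces the same outcome.

```python
import numpy as np

# Toy projective measurement: collapse to an eigenstate of total spin-z,
# our illustration of a "short-term memory" of a choice.
rng = np.random.default_rng(7)

# 8 basis states of 3 spin-1/2 particles; total spin-z of each basis state.
sz = np.array([+1.5, +0.5, +0.5, +0.5, -0.5, -0.5, -0.5, -1.5])
amps = rng.normal(size=8) + 1j * rng.normal(size=8)
amps /= np.linalg.norm(amps)            # a random initial superposition

def measure_total_sz(amps):
    """Born-rule measurement of total spin-z; returns (outcome, collapsed state)."""
    outcomes = np.unique(sz)
    probs = np.array([np.sum(np.abs(amps[sz == m]) ** 2) for m in outcomes])
    m = rng.choice(outcomes, p=probs / probs.sum())
    collapsed = np.where(sz == m, amps, 0)       # project onto the chosen subspace
    return m, collapsed / np.linalg.norm(collapsed)

m1, amps = measure_total_sz(amps)
m2, amps = measure_total_sz(amps)   # repeating the measurement gives the same outcome
print(m1 == m2)                     # True -- the "memory" persists
```

Note that after the collapse the state is generally still a superposition of several basis states; they merely all share the measured total spin, which is the "symmetry" described in the text.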

The Hebbian theory of neurons tells us that “neurons that fire together wire together”. Our quantum entangled system, when run through the same measurement conditions repetitively, would realize the same outcome over and over, because the system remains in an eigenstate: once a choice/measurement is made, if the same measurement is repeated the system will produce the same result. And if that outcome is amplified to induce neuron firings, it could cause the same neurons to fire over and over. This may be the feeling of having “your mind set on something” or “your mind made up”. In other words, if the magnetic field generated by end “B” is unchanged and we pass end “A” through it again and again, it will wind up projected onto the same place. For example, if all states of the system have the symmetry that their net spin is +3/2 at end “A”, then this will be indicated by their trajectory upon exposure to the same magnetic field from end “B”. The chain of entangled particles will be focused to the same place. Iterating through such a loop a number of times, like a broken record playing over and over again, would produce the repetitive firing of the same neurons, which would then cause those neurons to wire together – forming a long-term memory of that choice. If you have ever had the experience, when trying to remember something, of “saying it” over and over in your mind, this is possibly the dual description of that experience. You use your mind’s eye to keep playing it repeatedly until it sticks – until the neurons “wire together” per Hebbian principles. Such a model is consistent with the results described in the paper “Delaying Interference Enhances Memory Consolidation in Amnesic Patients” by M. Dewar et al. (2010), where, absent distractions, even patients with amnesia could continually keep a short-term memory “in mind” and give it the extra time needed to wire the neurons together for long-term memory.
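The rehearsal loop above can be sketched with a minimal Hebbian update (our toy, not a biophysical model; the pattern, learning rate, and repetition count are arbitrary): a repeated activity pattern strengthens exactly the connections between co-firing neurons.

```python
import numpy as np

# Minimal Hebbian sketch: co-firing neurons strengthen their mutual weights,
# i.e. "neurons that fire together wire together".
n = 8
weights = np.zeros((n, n))
eta = 0.1                      # learning rate (arbitrary)

# The collapsed eigenstate keeps yielding the same outcome, so the same
# activity pattern repeats -- the "broken record" replay described above.
pattern = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)

for _ in range(20):            # rehearsal loop: "saying it over and over"
    weights += eta * np.outer(pattern, pattern)   # Hebb's rule
np.fill_diagonal(weights, 0)   # no self-connections

# Co-active pairs (e.g. neurons 0 and 2) are now strongly wired; pairs that
# never fired together stay at zero -- a stable long-term trace of the choice.
print(weights[0, 2], weights[0, 1])   # 2.0 0.0
```

The design point is that the trace is stored in the weights, not in the transient activity, mirroring the short-term/long-term distinction in the text.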

So, imagine our quasiparticle system as a system of “brain solitons” propagating around the brain, like a “ticker-tape” of information entangled as One system, perhaps following a general path like that shown in figure 17, inducing the appropriate neurons to fire in synchrony, over and over again, and wire together. It is One system of particles entangled together, but no individual particle is the system. In fact, individual particles may be added, replaced or exchanged over time, which is consistent with isotope studies showing that the vast majority of atoms in the body are recycled over time. The critical difference between the ticker-tape of the mind and a classical ticker-tape is that it can be in a vast superposition of states at once – like zillions of classical ticker-tapes flowing at once, directed by the mind’s eye. All of this is described at once quantum mechanically in the 3rd person, and in the 1st person as what it is like to be you! A mind’s eye, a short-term quantum memory, quantum computing power to enable creative leaps and “aha!” moments, long-term memory storage via neurons – a full-on conscious experience is emerging! However, we must say, as interesting as this depiction is, it is clearly an over-simplification of a very complex process of making choices and storing memories in the brain, and we will not pretend to have all the answers here.

Figure 17: Flow of quantum information as a quantum “ticker-tape” in the brain takes the form of a propagating quasiparticle. Such a quasiparticle could take the form of a “brain soliton” resulting from a nonlinear Schrödinger equation – the mind’s eye. Communication between the hemispheres may take place through the corpus callosum, which would explain why split-brain patients (who have it severed) exhibit traits of two wills. Brain picture from public domain pics here.

Figure 18: Possible flow of the quantum “ticker-tape” of information, “brain solitons”, after patients have their corpus callosum severed. R. Sperry observed that one half at a time becomes dominant, takes control of the mind, and carries on unaware of the actions undertaken by the other half of the brain.

Another illustrative example you may have encountered is the experience of looking at an image and, for a few seconds, having no idea what you are looking at. It may mean the neurons by themselves aren’t able to recognize the image. It seems the mind’s eye has to get involved somehow to recognize it – to adjust the timing of neuron firings, to bus the information on the ticker-tape to other regions of the brain, or to place it in context to achieve recognition. This is called “top-down reasoning“. When humans have only a split second to look at an image and do not have time to invoke top-down reasoning, they recognize images only about on par with the best machine learning models. These machine learning models, interestingly, use so-called deep learning neural nets, whose structure is patterned after the structure of neurons in the visual cortex. However, on hard images that don’t offer instantaneous recognition, humans, if allowed time to think, can use top-down reasoning to recognize images better than machines. This is the basis for modern CAPTCHAs – Completely Automated Public Turing tests to tell Computers and Humans Apart. No machine learning models have such functionality, and, so far, it is not understood what is required to produce it. Also, it is an unsolved question how the brain came up with the neurological structure of the visual cortex, especially since the problem of finding the right structure seems to be highly non-linear (e.g. the discovery of “max pooling” configurations). Quite possibly, in early development, it is the quantum entangled aspects of the organism that engage to solve such difficult, probably NP-hard, problems. Training methods for artificial neural networks that use gradient-descent-type approaches (i.e. non-quantum) tend to get stuck in local optima.
But invoking quantum computing power may provide an exponential speedup on learning problems of this kind and enable a global solution to be found (see the video “Quantum Machine Learning” by Seth Lloyd (2016), which shows how quantum algorithms can be applied to neural networks).
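The local-optima problem mentioned above is easy to demonstrate on a toy loss function (our example; the function and hyperparameters are arbitrary, and the restart strategy stands in for any more global search): plain gradient descent converges to whichever well it starts in, while searching from many starting points finds a better minimum.

```python
import math
import random

# Toy illustration of gradient descent getting stuck in a local optimum.
def loss(x):
    return math.sin(3 * x) + 0.1 * x * x      # many wells; best well near x ~ -0.52

def grad(x):
    return 3 * math.cos(3 * x) + 0.2 * x      # analytic derivative of loss

def descend(x, steps=2000, lr=0.01):
    """Plain gradient descent from starting point x."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

stuck = descend(4.0)        # slides into the nearest well, a poor local optimum

# A crude global search: many random restarts, keep the best result.
random.seed(0)
best = min((descend(random.uniform(-6, 6)) for _ in range(30)), key=loss)

print(loss(best) <= loss(stuck))   # True -- restarts never do worse
```

A quantum algorithm is not being simulated here; the sketch only shows why escaping local optima requires something beyond following the local gradient.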

From an evolutionary perspective, the paper “Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates” by B. Brembs (2010) describes how animal species have been shown to randomize their behavior to gain an evolutionary advantage. Their behavior is shown to be self-initiated; that is, given the same environment, the same external conditions, animals will deliberately inject variability into their behavior. This is in direct conflict with behaviorist models in which environmental conditioning alone determines behavior, but strongly agrees with the indeterminate quantum system described here. Animals go even further, actually increasing the variability of their actions when faced with uncertain situations – deliberately seeking to explore the unknown. A model is described in which animals amplify small random differences in internal conditions to generate this variability: “a neuronal amplification process was recently observed directly in the barrel cortex of rodents, opening up the intriguing perspective of a physiological mechanism dedicated to generating neural (and by consequence, behavioral) variability” from “Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex” by M. London et al. (2010), hat tip B. Brembs. The paper by Brembs gets everything right short of recognizing two things. First (1), while these small differences are indeed random when looked at externally and objectively, the randomness follows quantum probabilities that are dual to the preferences of the organism. Even in an organism as simple as a fly, there is a little ‘self’ in there – not necessarily offering the same experience as human consciousness, but free will and a mind’s eye are indeed present: “(flies)…can actively shift their focus of attention restricting their behavioral responses to parts of the visual field.” (from here and here, hat tip B. Brembs).
Recognizing this resolves the conundrum specifically called out in “Downward Causation and the Neurobiology of Free Will” by C. Koch (2009): “for surely my actions should be caused because I want them to happen for one or more reasons rather that they happen by chance” (hat tip B. Brembs). The resolution involves recognizing that “want” and “chance” are dual versions of the same thing! Second (2), while it is tempting, and dogmatic, in evolutionary biology to chalk everything up to “that must have been selected for during X billion years of evolution”, we pointed out in “What if the Miracle Behind Evolution is Quantum Mechanics?“ that this hypothesis class does, in fact, have infinite capacity, i.e. infinite VC-dimension. That means it can explain anything! Behavioral unpredictability was not selected for by natural selection as an advantageous trait; it was present in life from the very beginning – all the way back to when life was nothing more than a complex molecule whose only remarkable property was that it vibrated fast enough to sustain growing quantum entanglement!

Figure 19: Drawing by Santiago Ramón y Cajal (1899) of neurons in the pigeon cerebellum via Wikipedia.

# VIII. What is it like to be an Electron?

Now imagine something like the electron – remember, we are talking about maybe nanoseconds between measurements, between choices. Forced to make choices constantly, billions of times per second. And every time, your memory is erased. You have no memory of where you came from. You have no idea where you are going. No concentration. No focus. No mind’s eye, and so no free will. Choice is forced upon you by the outside world, and all you can do is choose. But you can still make choices. Not choices like you are used to – you cannot push buttons in your brain and move your limbs; it takes too long, takes too much focus (remember the speed of free will?). You no longer have self-awareness; that, too, takes too long to think about. You are just absolutely, insanely “in the moment”. Only aware of right now! Choices are thrown at you constantly, and you have no time to think: Will you turn left, or right? Emit a photon or not? What direction are you facing? Absorb a photon or not? A billion choices per second – it’s maddening! But the 1st person perspective is there. And you can make choices. Correction: you have to make choices! And you don’t get to decide what those choices are. Just in-your-face questions that you must answer now!

You will seek out stability. But, you will never see it coming. If you chance upon an atom to give you shelter, you can emit a photon and descend into a lower energy state of the atom. Excited states are unstable, you will quickly emit another photon and transition to a lower, more stable energy state. The ground state, if you get there, gives you a moment of peace. If you could dream, your dream would be to entangle with other particles, to become something greater than yourself, to escape the madness, to be able to remember, to have some idea of what’s coming next, to have freedom. But, you cannot.

If that all sounds crazy, maybe you are beginning to appreciate what life is. Life is persisting entanglement so that you can remember your last choice, so that you may know you made a decision, so that everything isn’t just a fleeting instant, so that there is depth to your existence, so that you may have freedom! Life is the Universe’s opportunity to escape the madness. A miracle made possible by quantum entanglement. Indeed, the essence of life is quantum entanglement. And, the engine by which it adapts and evolves is the intrinsic quantum computational power of the Universe! It is precious indeed!


Figure 20: Memento is a 2000 American neo-noir psychological thriller film directed and written by Christopher Nolan, and produced by Suzanne and Jennifer Todd. From Wikipedia.

# IX. Predictability

In his essay “The Ghost in the Quantum Turing Machine” (2013), S. Aaronson discusses the relationship between predictability and free will. He suggests that if a Turing machine (a non-quantum computer) can consistently and accurately predict the actions of a human, then he or she has no free will. This seems sensible enough, but since the source of free will we describe here is a quantum entangled system, and is distinctly non-classical, this assertion requires a modification. Today’s cutting-edge supercomputers require 100 days to simulate the dynamics of a single small protein biomolecule for a millisecond (see “Supercomputer sets protein folding record” (2010)). Modeling something like a whole organism would likely take more than the age of the Universe even using all the classical computers available on Earth. Quantum computers, with their vast computing power, change things, though, and this is why we would alter Aaronson’s conjecture as follows: if a model can simulate a human, then the model has at least as much free will as the human. And, frankly, it could very well turn out that it will take not just a quantum computer, but a quantum computer that implements an NLSE, to perform such a simulation. That is to say, it will be necessary to give the model a mind’s eye so that it can have the ability to choose what to think about, where to focus its attention, what choices to make – to truly pass a Turing test. Furthermore, to create such a simulator would be to create life. The 1st person perspective is present in all fundamental particles in the Universe, and the means by which it grows is the process of entanglement combining the collection of particles into One thing. Since we would have established the quantum entangled simulator artificially, it may better be characterized as life on life support, but life nonetheless. The system would naturally follow the laws of quantum mechanics, and this would give rise to its own personality in the 1st person. It would, of course, follow the transitions of a quantum system seeking out lower and more stable energy states, and this would manifest to it as its own assortment of needs, desires and a spectrum of emotions.

# X. The Good, the Bad and the Dual – Further Explorations of the 1st/3rd Person Duality

Although we have said quite a lot about the origin of the 1st person perspective, memory, the mind’s eye, free will, and the nature of life itself, we have not said anything about why certain experiences feel pleasant (good) and others unpleasant (bad). Why should anything feel pleasant? Why is there pain? Why are there qualia to experience? Sure, there are studies showing that certain chemicals like serotonin and dopamine are correlated with positive moods and good feelings. They may even be an essential part of some causal chain. But still we are left asking: why does serotonin feel good? Why should any chemical have any bearing on our feelings? If the 1st/3rd person duality we have described here is to hold, then every phenomenon in subjective experience must have a dual description in objective physics. All those pleasant and unpleasant feelings, all of those wants, desires, pains, and needs must have a dual description not just in dissociated chemical reactions, but directly pertaining to a single quantum entangled system – the self. On the other side, quantum mechanics only cares about one thing: energy states. Generally, systems left alone “want” to transition to lower and/or more stable energy states; e.g., an electron in an excited state of the hydrogen atom will emit a photon to return to the lower, more stable ground state. So, all the subjectivity in life must be understood, in its dual quantum mechanical description, as relating to the energy state of the organism’s quantum entangled system.

To move forward we need to refer to the paper “Quantum entanglement between the electron clouds of nucleic acids in DNA” by E. Rieper et al. (2010), which shows that the electron clouds of neighboring nucleotides are entangled in DNA. Moreover, the entanglement helps to hold DNA together and allows it to achieve a more stable energy configuration. To get an intuitive idea of how this works, imagine electrons swirling laterally, some clockwise, some counter-clockwise, around a long double-helix strand of DNA, tugging on it, inducing instability in the chain. When entangled, the electron clouds slip into a superposition of states so that each electron is half on the right side and half on the left, balancing each other out – sort of like orbiting in symmetric unison – stabilizing the molecule. Similarly, for oscillations along the length of the chain, if neighboring oscillations are out of sync, disharmonious, they induce instability in the biomolecule. If the oscillations are synchronized, entangled together, harmonious, like normal modes of oscillation in a classical spring, the energy state is lower, stabilizing the molecule.

This is not a new trick of Nature: the nucleus of deuterium (an isotope of hydrogen) is comprised of a proton and a neutron that sit in an entangled superposition of states (an isospin singlet and triplet state, hat tip J. McFadden and J. Al-Khalili) so that they may bind closer to each other. This allows the system to form a more stable nuclear state. Suppose that life takes it one step further, though, and entanglement moves far beyond stabilizing individual atoms and biomolecules, instead entangling biomolecules all over the organism together. Intuitively, to imagine the meaning of all this entanglement, think of a large selection of biomolecules in your body vibrating coherently, synchronized – like a marching band marching together as One unit rather than a cluster of chaotic individuals – stabilizing the organism as entanglement in the electron clouds did for DNA. To provide a guess at how this entanglement could be sustained, we invoke the idea presented earlier that DNA may function not only as a source of genetic code but as a sort of antenna – an EPR photon source for entangling other biomolecules, constantly being driven, absorbing, down-converting and re-emitting entangled photons (possibly THz) on femtosecond timescales (faster than decoherence rates) to keep the system entangled together as One unit. This entanglement then may have a stabilizing effect on the organism’s quantum entangled system, lowering and/or stabilizing its collective energy state. If this is true, then several natural, interesting and compelling explanations of dual subjective/objective phenomena emerge.
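The claim that entanglement can lower a system's energy has a standard two-spin illustration (a textbook Heisenberg-exchange calculation, not the Rieper et al. or deuterium computation): for an antiferromagnetic exchange coupling, the ground state is the entangled singlet, and it lies below the energy of every unentangled product state.

```python
import numpy as np

# Toy check that entanglement lowers the energy: two spin-1/2 particles
# with Heisenberg exchange coupling H = J * S1.S2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

J = 1.0
H = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

evals, evecs = np.linalg.eigh(H)
ground = evals[0]                    # -3J/4: the entangled singlet state

# Energy of the basis product (unentangled) states, for comparison; the best
# any product state can do for this H is -J/4.
up, dn = np.array([1.0, 0]), np.array([0, 1.0])
prods = [np.kron(a, b) for a in (up, dn) for b in (up, dn)]
best_product = min(np.real(v @ H @ v) for v in prods)

print(round(ground, 2), round(best_product, 2))   # -0.75 -0.25
```

The gap between -3J/4 and -J/4 is the energetic payoff of entanglement, the same qualitative effect the text attributes to the deuteron and to DNA's electron clouds.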

Stress

If we fire any particle, whether it be light (photons), electrons, or something relatively heavy like buckyballs (the molecule Buckminsterfullerene, $C_{60}$), through a two-slit interferometer, we will see an interference pattern on the other side. All particles exhibit this wave-like quantum property regardless of size – it’s just that more massive ones have a much shorter wavelength and therefore a narrower interference pattern. When this experiment is performed, however, it is important to remove all gas molecules from the interferometer chamber. Gas molecules interfere with the wave-like nature of the particles and will ruin the interference pattern. One way to think of it: if anything in the environment acquires information about where the particle is, like which of the slits the particle passes through, the wave-like nature of the particle is disrupted and so is the interference pattern. If the information is only partial (i.e. not certain), the interference pattern is still visible but degraded. Once information definitively showing which slit the particle went through is obtained, though, the interference pattern is completely destroyed (this is decoherence).
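The tradeoff between which-path information and fringe contrast can be made quantitative. The sketch below uses the standard wave-particle duality relation, under which path distinguishability D reduces fringe visibility to V = sqrt(1 - D²) for a pure state (the fringe frequency and grid are arbitrary illustrative choices):

```python
import numpy as np

# Sketch of the which-path tradeoff: partial path information with
# distinguishability D washes out the two-slit fringes.
x = np.linspace(-np.pi, np.pi, 1001)   # detector screen positions (arbitrary units)

def pattern(distinguishability):
    V = np.sqrt(1 - distinguishability ** 2)   # duality relation V^2 + D^2 = 1
    return 1 + V * np.cos(5 * x)               # idealized two-slit intensity

def visibility(I):
    """Standard fringe visibility: (Imax - Imin) / (Imax + Imin)."""
    return (I.max() - I.min()) / (I.max() + I.min())

print(round(visibility(pattern(0.0)), 2))   # 1.0 -> no path info, crisp fringes
print(round(visibility(pattern(0.8)), 2))   # 0.6 -> partial info, washed out
print(round(visibility(pattern(1.0)), 2))   # 0.0 -> which-slit known, no fringes
```

This mirrors the gas-molecule picture in the text: each scattering event that carries away partial path information nudges D toward 1 and the fringes toward a featureless distribution.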

Figure 21: A hologram of a mouse. Two photographs of a single hologram taken from different viewpoints. By Georg-Johann Lay (derivative work by Epzcaw), public domain, via Wikipedia.

In the body of an organism, the establishment of entanglement throughout a macro-collection of biomolecules may be like projecting an internal hologram. Holograms are made from interfering coherent light (i.e. light of the same frequency, like a laser). In this case, the frequency is not visible light but probably frequencies like THz that interact with the vibrational states of biomolecules. The hologram is analogous to the interference pattern in the two-slit interferometer described above and is the means by which biomolecules throughout the body are entangled together. It allows biomolecules to be synchronized and brought to operate in unison – as One entity – so that it may, for instance, direct the growth of biomolecules such as microtubules (see “Live visualizations of single isolated tubulin protein self-assembly via tunneling current: effect of electromagnetic pumping during spontaneous growth of microtubule” (2014) by S. Sahu et al.), or control gene expression as in “Specificity and heterogeneity of THz radiation effect on gene expression in mouse mesenchymal stem cells” by B. S. Alexandrov et al. (2013). It is also the means by which a lower and/or more stable energy state is achieved for the collective entangled biomolecules in the organism. When something disrupts this interference pattern, just as gas molecules disrupt the distinct interference bands in the interferometer, this is experienced by the organism as stress. The stress could be mental, environmental, or physical – like being sick. The root cause of the feeling, though, is that something is disrupting the stable quantum energy state of the organism. The feeling of stress in the 1st person is dual to this quantum description in the 3rd.

Heart rate variability refers to variations in the heart’s rhythmic beat and is a strong indicator of overall health, including stress levels. See the video “Heart Rate Variability Explained” by J. Augustine (2007) for an introduction. The human heart is a rhythmic organ, intimately connected to the autonomic nervous system, and perhaps this is why it is particularly sensitive to stress – and thereby to disruptions of the coherent hologram entangling the organism together. If you experience stress, you probably feel it most pronounced in your heart. In some cases, it may feel like your heart is in the grip of a vice, because the interference pattern synchronizing the whole body, so important to the heart’s rhythmic operation, is being interfered with. For more on clinical studies relating heart rate variability to stress, see “The Effects of Psychosocial Stress on Heart Rate Variability in Panic Disorder” by K. Petrowski et al. (2010).

There are studies connecting stress to negative clinical outcomes across all kinds of health issues, including digestive, fertility, and urinary problems, as well as a weakened immune system. However, evidence linking stress and cancer is still weak, possibly because cancer develops over the long term and there are other explanatory covarying factors, such as smoking and alcohol consumption, that are themselves behavioral responses to stress (see this page by the National Cancer Institute for more).

On the other hand, adaptation to stress is a tremendously positive experience. It feels great to overcome stress. We have explored previously how evolutionary adaptation to stress (e.g. heat stress, starvation stress, oxygen stress) is a quantum transition or series of transitions to more stable quantum energy states (see “What if Quantum Mechanics is the Miracle Behind Evolution?” for more). This is made possible because of the vast amount of entanglement taking place in the organism. Psychophysical stress is no different. Adapting to stress is a quantum transition to a more stable energy state – a transition that clears up the interference pattern, clears up the hologram, bringing stabilizing coherent entanglement to the organism.

Meditation

In the same way that stress disrupts entanglement throughout the body, causing instability in quantum energy states, we can explore the stabilizing effects of quantum entanglement in the mind through meditation. Meditation is about calming the mind. Practitioners consistently speak of its benefits and the inner peace it bestows, even though proficiency requires diligent practice (also see here on the health benefits of meditation). In his video “Consciousness — the Final Frontier” (2014), the yogi Dada Gunamuktananda answers Descartes’ famous “I think therefore I am” with “When I stop thinking then I really am!”.

The quantum Zeno effect is the phenomenon of keeping a quantum state localized by continually observing it. Meditation is the opposite. When we stop thinking, we stop using our mind’s eye to direct our thoughts, we stop making choices, we stop subjecting ourselves to measurement. The absence of measurement allows quantum entanglement to grow, and allows quantum superposition to grow. The system must be driven and out-of-equilibrium to do this, yes, but that is the nature of biological systems. In our illustrative example of the mind’s eye, it is not making a choice at end “B” or at end “A”, but is letting entanglement flow; the wave function is not collapsed, and superpositions of states expand. It allows the “brain solitons” to delocalize. This entanglement has a stabilizing effect on the organism’s energy state in the 3rd person and feels quite pleasant to you in the 1st. Awareness of the self increases substantially during meditation – a direct result of delocalization.
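The measurement side of this picture can be illustrated with the textbook quantum Zeno arithmetic (a generic two-level system, not a model of the brain): a state that would otherwise evolve away is frozen in place by frequent observation.

```python
import math

# Quantum Zeno sketch: a two-level state rotates by total angle theta.
# Measuring it n times along the way gives survival probability
# cos(theta/n)**(2n), which approaches 1 as n grows (evolution is frozen).
def survival_probability(theta, n_measurements):
    return math.cos(theta / n_measurements) ** (2 * n_measurements)

theta = math.pi / 2  # unmeasured, the state would flip completely
p1 = survival_probability(theta, 1)       # one final measurement: ~0
p10 = survival_probability(theta, 10)     # ten measurements: mostly survives
p1000 = survival_probability(theta, 1000) # continual observation: frozen

print(p1, p10, p1000)
```

Meditation, in the essay's picture, corresponds to the opposite limit: few measurements, so superposition and entanglement are free to spread.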

Another aspect of meditation is its focus on directing a sort of “energy” to certain parts of the body, e.g. an arm, a leg, or the forebrain, etc. The term “energy” is used, but this is not the same as the rigorously defined quantity of energy in physics. However, it does not seem crazy to consider these two terms related. Interestingly, we have seen that the Davydov soliton is thought to transport real energy up and down alpha-helical structures, and, these structures are ubiquitous throughout the body. The Davydov soliton quite possibly could play a critical role in the manifestation of the mind’s eye. And, in meditation, it is the mind’s eye that focuses energy to a specific region. Coincidence? If there was a quantum way to transport energy from one alpha-helical molecule to another, and thereby, from one region of the organism to another, this would make the association of the two descriptions of energy sensible. A quantum mechanism for such transport is not yet known, however, this may be possible through the holographic process of photon exchange we describe above.

If you feel like undertaking an experiment, try this one: the next time you find yourself going through security at an airport and you notice a millimeter wave (a.k.a. THz) scanning device, pay attention to your feelings during the scan. Prior to passing through the machine, try to meditate mildly, calm your mind and increase awareness of your mind and body. See if you can feel the momentary effect, a slight muddling of the hologram, of the THz scan on your quantum entangled self!

A regular meditation practice can lead to a rich set of inner experiences that explore the 1st/3rd person duality. Generally, you can find a plethora of experiences that are immensely pleasant (e.g. the resonance associated with Om in the mind feels incredible, delocalizing, loving, and like becoming One with your surroundings). In each case, the feeling of the experience is dual to a quantum mechanical transition – more stable states feel calm, pleasant, patient, love-like, unifying, while unstable states feel frantic, impatient, scattered, unpleasant.

Sex

Understanding

Why does it always feel good when we finally understand something? That’s not to say gaining knowledge always feels good; certainly there may be some things in life that we wish we could “un-know”. Understanding is different from knowledge. Understanding is about “getting it”. We’ve all had moments battling to understand an idea, wrestling with it, sometimes seemingly hopelessly, until, suddenly, we “get it”, it all comes together, and we feel great! What is going on in the 3rd person description that is dual to this consistently pleasant experience?

Figure 22: Another nonlinear Schrödinger equation (NLSE) solution – the Helmholtz Hamiltonian system. Definitely watch the video and get more information from Quantum Calculus.

Quantum neural networks are models designed to describe the behavior of neurons in the brain while also implementing quantum effects and leveraging quantum computing power. While no method has been found yet that fully integrates quantum with the classical aspects of neural computing, designs for quantum network models usually involve minimizing a so-called “energy” function that (a) reduces the errors the model makes and (b) minimizes the square of the “synaptic” weights connecting neurons (see the paper “The Quest for a Quantum Neural Network” by M. Schuld et al. (2014) for more). The purpose of the first is clear – we want the model to learn how to make predictions without mistakes – while the second tends to diversify the model’s dependence across a plethora of “experts”. For example, if you are a juror in a court case, isn’t your confidence in your verdict greater if you base your decision on a diverse array of expert testimony, e.g. DNA evidence, ballistics, eyewitnesses, fingerprints, a smoking gun, etc., and they all agree? In the case of a neural network each “expert” is a neuron, but the idea is the same. This is a standard procedure in neural network learning (both classical and quantum) and has achieved substantial success in the machine learning community. Still, quantum mechanics only cares about real energy states; what does this “energy” function have to do with real energy? Conveniently, the function that is minimized in the quantum neural network (see Hopfield network) is equivalent to the real interaction energy in something called an Ising spin-glass model from physics. In other words, if the neurons and their connections in your brain can be regarded as an Ising model, then the quantum neural network learns by lowering the real energy embedded in its structure.
Experimental evidence that a biological neural network behaves like an Ising model was found in “Weak Pairwise Correlations Imply Strongly Correlated Network States in a Neural Population” by E. Schneidman et al. (2006), studying salamander ganglion cells. This suggests that when you learn something, and that moment of understanding hits you, and a rush of joy comes over you, this is a quantum energy transition to a lower energy state. Of course, for the 1st/3rd person duality presented here to hold, your quantum-entangled-self would need to be entangled with the quantum network, functioning as One system – a fully quantum neural network design would need to be found.
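To make the energy-lowering picture concrete, here is a minimal classical Hopfield/Ising sketch (purely illustrative – random couplings, not the quantum network hypothesized above): with symmetric weights and zero self-coupling, each asynchronous update can only keep the Ising energy E = -½ Σᵢⱼ wᵢⱼ sᵢ sⱼ the same or lower it, so the network settles into a more stable state.

```python
import random

random.seed(0)
N = 20
# Symmetric random couplings with zero diagonal (Ising spin-glass style).
w = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        w[i][j] = w[j][i] = random.uniform(-1, 1)

def energy(s):
    """Ising energy E = -1/2 * sum_ij w_ij s_i s_j."""
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(N) for j in range(N))

s = [random.choice([-1, 1]) for _ in range(N)]  # random initial spins
energies = [energy(s)]
for _ in range(200):
    i = random.randrange(N)                     # asynchronous update
    field = sum(w[i][j] * s[j] for j in range(N))
    s[i] = 1 if field >= 0 else -1              # align spin with local field
    energies.append(energy(s))

print(energies[0], energies[-1])  # final energy never exceeds the initial one
```

The guarantee follows from symmetry of w: flipping a spin toward its local field changes the energy by ΔE = -(s_new - s_old)·field ≤ 0, which is the classical shadow of the "settling into a lower energy state" the text associates with understanding.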

Figure 23: Quantum Neural Network from Kinda Altarboush at slideshare.net

The learning function we described above is actually completely general – it appears in many machine learning algorithms that “maximize the margin” in learning the task of classification or regression. Margin maximization has an Occam’s-Razor-like effect – it directly reduces the capacity of the model to “fit” the data, simplifying it, which in turn makes the model produce better predictions. When a model makes better predictions, it is natural to say the model “understands” the data better. It makes you wonder: are the rules of quantum mechanics set up to encourage understanding in all kinds of systems throughout the Universe? In other words, in whatever physical system something like an Ising model shows up, should we expect something that resembles understanding? If so, understanding is not something sought out by just humans, or even just high-level organisms, but is intrinsic to the Universe itself! Suddenly, the quote below by physicist Brian Cox does not sound far-fetched at all:

“We are the cosmos made conscious and life is the means by which the Universe understands itself.” – Brian Cox, Physicist (~2011) Television show: “Wonders of the Universe – Messengers”

Self-Awareness and Gödel’s Incompleteness Theorem

There is a strange feeling to our self-awareness that is hard to put a finger on. Douglas Hofstadter, in his Pulitzer Prize-winning book “Gödel, Escher, Bach” (1979), explored the idea that the source of consciousness is a kind of “strange loop” that embodies abstract self-referencing ability. The self-referencing fugues of J. S. Bach, the abstract, impossible, self-referential drawings of M. C. Escher, and the self-referential formal math bomb known as Gödel’s incompleteness theorem are the examples that triangulate Hofstadter’s concept.

Figure 24: Drawing Hands by M. C. Escher, 1948, Lithograph

Today, self-awareness is widely accepted as a critical step to consciousness and has inspired artificial intelligence researchers to attempt to build self-aware robots. This video by Hod Lipson, “Building Self Aware Robots” (2007), shows some of them. The robots start out not knowing what they themselves look like. As they try to execute motion tasks – like moving across the room – they become “aware” of whether they have arms and legs, how many, and, generally, what they look like – and improve their skills at locomotion. There are two parts to the robot’s autonomous command center: (a) the model of itself, which is learned on the fly from data collected in the act of moving, and (b) the locomotion command center, which uses (a) to attempt to move across the floor. Each of these takes a turn operating serially. The model (a) of itself updates using the latest collected data, then the locomotion module (b) uses that model to move, which in turn generates new data that feeds back to update the model of itself (a), and so on. It is an iterative process of (a)->(b)->(a)->(b)->(a)… In the video, you can see the robot’s self-referential model evolve in real time to where it achieves an accurate 3-D representation of itself. While the learning these robots do is remarkable, they do not seem to be self-aware and conscious like we are. Module (b) has no understanding of module (a) and vice versa – no real awareness of it. The processing that takes place is still the tunnel-vision algorithmic processing of simple logic gates in a CPU. Still, it does seem like Lipson is on the right track – the self-awareness developed by these robots is clearly necessary for human consciousness, but it does not feel sufficient.

Figure 25: (Left) A “figure-eight” Mobius strip from here. (Right) A two-dimensional representation of the Klein bottle immersed in three-dimensional space. Image and caption via Wikipedia. Self-referencing quantum entanglement in the brain gives rise to the feeling of self-awareness.

R. Penrose, in his book “The Emperor’s New Mind” (1989), argues that the difference between the human mind and the machine is that the mind can see mathematical truth in non-algorithmic problems. He says there are certain Gödelian statements that humans, because of their consciousness, can see to be true, but which Turing machines can never prove. D. Srivastava et al. (2015) summed up Gödel well: “Gödel’s incompleteness theorem shows that any finite set of rules that encompass the rules of arithmetic is either inconsistent or incomplete. It entails either statements that can be proved to be both true and false, or statements that cannot be proved to be either true or false” (from here). An example is the following: suppose F is any formal axiomatic system for proving mathematical statements; then there is a statement, called the Gödelian statement of F, which we will label G(F), equal to the following:

G(F) = “This sentence cannot be proved in F ”

The argument is that the system F can never prove the truth of this statement, but its truth is apparent to us. To see this, assume F could prove the statement; then the statement would be false, and F would have proved a falsehood – a logical contradiction. So F cannot prove the sentence, which means the sentence is true: true, but unprovable within F. Much has been made of Penrose’s argument, with several notable counter-arguments (a review is here) and no definitive resolution. Whatever the formal case may be with Penrose’s argument, true or not, it does seem to capture some essential elements of human consciousness – something “rings true” about it. Something feels strange about these kinds of self-referential loops, and so a natural question is: what kinds of quantum mechanical phenomena are dual to this subjective, albeit vague, 1st person description? We will suggest there are three essential aspects of quantum mechanics at play here:

I.) Quantum systems have the capacity for self-reference through quantum entanglement. The Gödelian statement above is obviously self-referential, but the serial self-referencing ability in Lipson’s robots does not seem to capture it. The crisscross entanglement we have described here, the self-referencing capacity of the NLSE, does. The difference is the wave function in this case loops back onto itself, but, because of entanglement, it is always One thing. Just as the Gödelian statement above must be evaluated as one mathematical statement. Lipson’s robots would need to entangle module (a) and module (b) together so they function as One thing.

II.) Quantum systems can be in a superposition of states – simultaneously spin up and spin down. Abstracted so that the spins represent a Boolean qubit, it is perfectly fine for a statement to be both true and false at the same time. Consider this version of the above Gödelian statement:

G(F) = “This statement is false”

If we try to iteratively evaluate this statement (like Lipson’s robots), we might start by assuming the statement is true. Ok, so it is true that this statement is false; then we conclude the statement is false. Ok, if false, then it is false that this statement is false; then we conclude the statement is true. And we are back to where we started. We can iterate like this forever and never understand this statement. It will never converge to one answer. Only with a quantum entangled system can we model the essence of this statement – that it is true and false at the same time, in a superposition of states, never converging to one or the other. Since you are a quantum entangled system, that’s ok. You can model this statement. Consider this version:

G(F) = “I cannot prove this sentence”

This might be your Gödelian statement. You cannot prove it true or false without leading to a contradiction, but you can model it in your mind, you can understand it. Your mind is not a classical system governed by classical logic.
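The non-convergence of the classical, iterative evaluation described in II.) is easy to see in a toy sketch (purely illustrative – a Boolean update rule standing in for a Turing-style evaluator of “This statement is false”):

```python
# Treating "This statement is false" as an update rule truth -> not truth:
# a classical evaluator just oscillates forever and never settles on a value.
def evaluate_liar(initial_guess, iterations):
    """Iterate the liar sentence's self-reference, recording each verdict."""
    history = []
    value = initial_guess
    for _ in range(iterations):
        value = not value   # "if true, then false; if false, then true"
        history.append(value)
    return history

trace = evaluate_liar(True, 10)
print(trace)  # alternates False, True, False, True, ... with no fixed point
```

The classical iteration has no fixed point; the text's suggestion is that a superposition of true and false is the only "solution" that represents the sentence as One thing.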

III.) The last quantum mechanical trait at play here is the ability to evaluate infinite loops instantaneously. In physics the problem of solving the Schrödinger equation is something that Nature does instantaneously even though it involves non-local information. For example, the solutions to the equation that describe a particle in a box look like the standing waves that would fit in that box. But suppose that box is inside another bigger box, and we suddenly remove one of the walls, Nature will instantaneously solve the new Schrödinger equation for the bigger box. These new solutions will look like standing waves in the bigger box. Another way of looking at this is from Feynman’s path integral perspective. The path integral formulation is equivalent to the Schrödinger equation, it’s just a different way to model the evolution of a quantum system. If we want to ask how does the state of some electron change with time (e.g. upon removal of a wall of the box) we can calculate infinitely many path integrals over all possible ways the system could evolve and sum the “amplitudes” up instantaneously and this would describe the time evolution of the system. Fortunately, we have calculus to integrate this infinite sum. In either case, Feynman or Schrödinger, the point is Nature considers infinite non-local information in quantum mechanics all the time. Now, consider the following issue with our first Gödelian statement. We can simply say the statement G(F) is now an axiom in a new stronger system called F’. Then have we plugged the hole created by Gödel’s statement? The answer is no, because we can always construct a new Gödelian statement for the new system F’:

G(F’) = “This sentence cannot be proved in F’ ”

We could add a new axiom to F’ and create F”, but then we would just create a new Gödelian statement for F”, and so on forever (for a much more thorough treatment of this process of “jumping out of the system”, see Hofstadter’s Gödel, Escher, Bach)… If we operate with blinders on like a Turing machine, not seeing the lack of convergence at infinity, then we could iterate through this process forever. A Turing machine would have no way of knowing this iterative process would lead nowhere. But, in the right quantum system, we can count on Nature to evaluate this infinite loop for us, to solve the Schrödinger equation – like finding the standing waves that fit in a sort of recursive neural circuit. We feel this when we think about this self-referential puzzle: our quantum minds are modeling this statement, we find a quantum solution, and we feel the true nature of the statement. Another way to think of this is to consider the version of Gödel’s statement in II.) above. We can iteratively evaluate it again and again, True, False, True, False, and it will never converge. This infinite series, too, can be described in some kind of quantum circuit in the mind. Nature does this infinite calculation for us, sums all the paths, all the amplitudes, and it is clear to us that it will never converge to a provable statement. The solution to the Schrödinger equation for the quantum circuit corresponds to a superposition of true and false – just like Nature finding that an entangled superposition is the solution to deuterium (the energy minimum in the nucleus). It is formally undecidable classically, but representable in a quantum circuit.
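The “remove a wall of the box” example above is standard textbook physics and easy to check numerically: for a particle in a one-dimensional box of width L, the energy levels are E_n = n²h²/(8mL²), so doubling the width lowers every level by exactly a factor of four – a more delocalized, lower-energy set of standing waves. A minimal sketch:

```python
# Particle-in-a-box energies: E_n = n^2 h^2 / (8 m L^2).
# Removing a wall (here modeled as doubling L) lowers every level 4x.
H = 6.626e-34      # Planck's constant, J*s
M_E = 9.109e-31    # electron mass, kg

def box_energy(n, width_m, mass_kg=M_E):
    """Energy of level n for a particle in a 1-D infinite square well."""
    return (n ** 2) * H ** 2 / (8 * mass_kg * width_m ** 2)

L = 1e-9                        # a 1 nm box
e1_small = box_energy(1, L)
e1_big = box_energy(1, 2 * L)   # "wall removed": the box doubles in width

print(e1_small / e1_big)        # ratio of ground-state energies is 4
```

The new solutions are the standing waves of the bigger box, as the text says; the code only confirms the factor-of-four scaling implied by the formula.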

Figure 26: These are just three of the infinitely many paths that contribute to the quantum amplitude for a particle moving from point A at some time t0 to point B at some other time t1. Drawn by Matt McIrvin, CC BY-SA 3.0, via Wikipedia.

Interestingly, I do not feel I can ever retain an understanding of Gödel’s theorem in long-term memory. Every time I recall it, I have to think it through a few times before I feel I understand it again. I wonder if this is the inability of a classical system, the neurons and their connections, to adequately describe Gödel’s statement. Perhaps I have to think it through each time to conjure up a quantum model in my mind. Only in my quantum mind, in short-term memory, can I adequately represent its self-referential nature, the simultaneous truth and falsehood of the statement.

“A bit beyond perception’s reach

I sometimes believe I see

That life is two locked boxes, each

Containing the other’s key”

by Piet Hein

General Qualia of the Senses

Red, yellow, blue, hot, cold, pain, tickle, joy, fear, hunger (hat tip here for the suggestions) are all qualia of the senses. Why do these things feel as they do? In other words, why does yellow appear as the color yellow? Why should it appear as a color at all and not feel as sound does? Both are waves. The frequencies of visible light are in the hundreds of trillions of hertz, far outside the audible range (20–20,000 Hz), so there would be no confusion as to the origin of the signal. Why not map both these inputs onto the same perception? Both systems are directional; you could just have an image in your mind of where any and all waves were coming from. Think of how a bat must “see” with its sonar, for instance. In the duality we have described here, all 1st person subjective phenomena must have a dual 3rd person quantum mechanical description as it pertains to a single quantum entangled system – the self. There must be a quantum signature to all these phenomena. The quantum effect of light absorption on the quantum entangled self must be qualitatively different from the quantum effect of sound detection. What is the difference in these quantum signature effects? Could they each induce different kinds of quasiparticles? Maybe light generates a spectrum of magnons, while sound induces a spectrum of phonons? The feeling of color would then be the subjective dual to a magnon rippling through the quantum entangled self, while the feeling of sound, the dual to phonons vibrating through.

Interestingly, the human eye has recently been shown to be able to detect single photons providing an example of how the brain is sensitive to quantum-level effects. Perhaps a useful way to think about the distinction between the macroscopic brain and quantum mind is in terms of classical versus quantum information. Classical information will tell you what’s in the world, where it is, what color it is, etc., but quantum information will tell you what it feels like.

Moral Responsibility

Philosophers since Aristotle have argued that free will is a necessary requirement for moral responsibility. For if we are not free to make choices, then how can we be held accountable for our actions? Recent studies in the field of quantum cognition seek to understand whether human behavior can be modeled using quantum effects. For example, when psychologists study how people play the game known as the Prisoner’s Dilemma, their behavior looks irrational. Players can maximize their “reward” in the game by “defecting” and ratting out their colleague (the other prisoner). But they don’t do this. They tend to “cooperate”, refusing to confess or blame their colleague for the crime and accepting an inferior game-theoretic outcome. Interestingly, though, if quantum effects are applied, and we assume quantum entanglement between the prisoners, the experimental results are predicted accurately. In the 1st person, this may manifest as our feeling of empathy for others. In the article “You’re not irrational, you’re just quantum probabilistic” by Z. Wang (2015), she provides an overview of quantum approaches to psychology. The trouble is that it is difficult to describe a physical system that can maintain entanglement between organisms. One guess starts with the fact that the brain does emit brain waves. If these brain waves were composed of photons from an EPR source, and emitted in sufficient quantities, then it is conceivable that absorption of them by another organism could create cross-organism entanglement. A variation on this scheme could be related to eye contact. In this case eye contact would be somehow instrumental in the entanglement – e.g. incoming light is absorbed in one person’s eye by biomolecules (e.g. DNA), down-converted, and re-emitted as lower-frequency entangled EPR photons, and, finally, absorbed by the second person, entangling the organisms together.
The reason for singling out this particular mechanism as a means of entanglement is the powerful feelings that arise upon making eye contact, especially prolonged contact, with another individual (see this video for example). Another possibility is that the entanglement is present but exists entirely within the brain of each individual. For instance, imagine two piano players making music together. It feels joyful to each: the sounds resonating together, the surprise of what improvisation the other musician will add next, the harmony. But in this description the entanglement can be seen as taking place between phonons within the auditory centers of each individual brain. Analogously, people empathizing with each other could be more like making a sort of ’emotional music’ together, with the quantum entanglement residing entirely within each individual. Whatever the case may turn out to be, the feeling of empathy in the 1st person could be derived from some form of quantum entanglement in the 3rd. The simplistic models of quantum cognition would be an approximation to a much finer biological entanglement, but sufficient to predict the outcomes of these psychology experiments. Interestingly, “question order” is another psychological puzzle that looks irrational from a classical perspective but is explained naturally by quantum mechanics, although not through entanglement.
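To see why cooperation in the Prisoner's Dilemma looks "irrational" classically, here is a minimal sketch of the standard game (the payoff numbers are the conventional textbook values, used purely for illustration): whatever the other prisoner does, defecting yields a strictly higher payoff, yet people cooperate anyway.

```python
# Classical Prisoner's Dilemma. Moves: "C" = cooperate, "D" = defect.
# PAYOFF[(me, them)] = (my reward, their reward); bigger is better.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: the classical equilibrium
}

def best_response(opponent_move):
    """Return the move maximizing my payoff against a fixed opponent move."""
    return max(["C", "D"], key=lambda me: PAYOFF[(me, opponent_move)][0])

# Defection dominates no matter what the other prisoner chooses...
print(best_response("C"), best_response("D"))  # -> D D
# ...yet experiments find far more cooperation than this analysis predicts,
# which is the puzzle the quantum cognition models set out to explain.
```

The quantum cognition models mentioned in the text modify this classical analysis (e.g. by entangling the players' states); the sketch above only establishes the classical baseline they deviate from.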

But empathy alone would not seem to explain moral responsibility. Children start out exploring, doing the wrong thing and making mistakes, not because they are malicious but because they do not know better. It takes time to learn what the right thing to do is. It can be very difficult, even as an adult, to stay on the straight and narrow path of trying to do the right thing. Even the best of us have some missteps along the way. In the article “Aristotle on Moral Responsibility” (1995), author D. Hsieh sums up Aristotle’s views thus:

“[Aristotle’s] general account of freedom of the will, coupled with [his] view that virtue consists in cultivating good habits over the long-term implies that ‘the virtues are voluntary (for we ourselves are somehow partly responsible for our states of character).’ (Aristotle, 1114b) We are responsible for our states of character because habits and states of character arise from the repetition of certain types action.”

Children choose what choices to make with their mind’s eye. They follow their preferences, which are dual to quantum probabilities. Initially many of these preferences are exploratory – the whole world is new to them! Over time some of these decisions lead to negative outcomes, some lead to positive outcomes, and some lead to short-term positive but long-term negative outcomes (local maxima). Neural connections are rewired in long-term memory as a result of these experiences, which in turn affects the preferences of the child. Quantum computing is probably involved in solving some of these difficult and highly nonlinear connective questions, with solutions relayed to long-term memory through the quantum tickertape of short-term memory – i.e. understanding what the “right thing to do” is. Adult guidance is certainly helpful too. The effects of empathy lead to an understanding of “do unto others as you would have done unto you.” From these experiences, some advice, and sometimes some deep deliberation, morally responsible preferences are developed in the quantum-entangled-self. We are still free to make choices, still free to choose to contemplate heinous actions, but our preferences are there, and, through them, we are ultimately constrained by quantum probabilities.

Vice

Ok, we’ve said a lot about more stable energy states equating with pleasant feelings, but what about vices? Certainly, there are things in this world – temptations, desires – that feel pleasant, at least in the short term, but that we don’t feel are really good for us. What’s going on there? This common phenomenon would seem to be the feeling of transitioning quickly into an easily accessible energy state when another state – maybe a more arduous transition, but a substantially more stable state – was an alternative. It is trading long-term stability for a short-term pleasure. It is succumbing to temptation and not staying on the more stable straight and narrow path. Quantum mechanically, it is transitioning into a potential energy well that is quicker to reach but unstable, while forgoing a transition into a potential energy well that is much more stable, albeit not as quickly accessible.

Love

The greatest of human emotions, it overlaps a bit with meditation. It also has something to do with two organisms entangling together as One. It is kept going by adapting to stress together, transitioning to a more stable energy state as One. The rest we leave to the reader to discover!

Consciousness

All of the above.

Conclusion

In conclusion, if we assume a 1st person perspective is present in fundamental particles, and we assume that some theories regarding maintaining quantum entanglement in biological systems are experimentally proven, a compelling picture of life as a narrative of growing quantum entanglement emerges. As entanglement grows, a sense of Oneness is present, basic short-term memory develops, quantum computing power enables problem solving and creativity, then a mind’s eye, self-awareness and consciousness itself emerge. Many subjective phenomena find natural descriptions once we view ourselves as quantum entangled organisms. Descriptions such as “the whole point of your body is to sustain quantum entanglement, that is, to keep you alive” seem apropos! Indeed, quantum entanglement feels as if it is the essence of life itself!

“Western civilization, it seems to me, stands by two great heritages. One is the scientific spirit of adventure — the adventure into the unknown, an unknown which must be recognized as being unknown in order to be explored; the demand that the unanswerable mysteries of the Universe remain unanswered; the attitude that all is uncertain; to summarize it — the humility of the intellect. The other great heritage is Christian ethics — the basis of action on love, the brotherhood of all men, the value of the individual — the humility of the spirit. These two heritages are logically, thoroughly consistent. But logic is not all; one needs one’s heart to follow an idea. If people are going back to religion, what are they going back to? Is the modern church a place to give comfort to a man who doubts God — more, one who disbelieves in God? Is the modern church a place to give comfort and encouragement to the value of such doubts? So far, have we not drawn strength and comfort to maintain the one or the other of these consistent heritages in a way which attacks the values of the other? Is this unavoidable? How can we draw inspiration to support these two pillars of western civilization so that they may stand together in full vigor, mutually unafraid? Is this not the central problem of our time?” – Richard Feynman, physicist

Author’s note: Feynman’s “Christian ethics” seems to refer to those morals that people of all spiritualities hold dear.

The END

# Evolution – Synopsis

We explore a means, originally suggested by Schrödinger in 1944, by which mutations as quantum transitions of a whole organism may be physically feasible. Just as an electron in a hydrogen atom transitions to a higher energy state upon absorbing a photon, the organism transitions to a more stable energy state – probably by absorption of a photon in the UV spectrum (chapter VIII). There are several substantial challenges that must be met for this to be physically plausible.

First up, the trouble with quantum theories of evolution is that quantum mechanics does not care about fitness or survival; it only cares about energy states, e.g. the ground state or first excited state of the hydrogen atom. We bridge this gap by showing that stress in the environment induces instability in the organism’s energy state. The key is recognizing (a) that entanglement itself plays a role in binding the organism together – something which has been shown to be true in the case of the electron clouds of DNA – and (b) that environmental stress muddles this entanglement. Therefore, adapting to stress means a mutation that tends to increase, or at least restore, entanglement. This upside bias in entanglement leads to the selective bias toward higher complexity. The quantum transition, which involves tautomerization of nucleotides in DNA via quantum tunneling (C<->A, G<->T) and photon absorption, is thus to a more stable energy configuration (chapter XI).

Second, for a quantum transition to occur, the organism must have the relevant pieces entangled together as One system of molecules (DNA<->RNA<->Proteins). In other words, the proteins in contact with the environment must be entangled with the DNA that encodes them so they function as one system (chapter XI). The marginal stability of proteins – the small energy differences between their various configurations – is an essential characteristic too (chapter IV). If true, this empowers the system with the infinite computational power of quantum mechanics (a power illuminated plainly, and computationally modeled, by the path integral formulation of quantum mechanics) (chapter V). The quantum calculus of photon absorption, and thereby of mutation to the DNA sequence, instantaneously considers all the possible pathways by which the organism might adapt to the stress. The collective sum of these path integrals can be thought of as a sort of hologram. The path chosen is the result of quantum probabilities manifested in a complex holographic interference pattern. This hologram is not in the visual spectrum but in the frequency range relevant to the vibrational, conformational and other states of biomolecules – probably THz among others. It is the coherent tool an organism uses to direct its own growth non-locally – like DNA directs its own transcription (chapter X). An analogy is drawn to quasicrystals, where vast collective, non-local atomic rearrangements, called phasons, are seen to occur in the laboratory, elucidating quantum mechanical effects on an intermediate scale (chapter IX).

Third, while it is virtually impossible to imagine biological systems sustaining the static quantum entanglement that scientists pursue in today’s quantum computers – decoherence would destroy it – biology takes a different tack. Its approach is dynamical, with constant renewed entanglement and constant decoherence (chapter VIII). It is closer, by analogy, to the dynamical environment described in a quantum network, where entanglement can be restored and extended over vast distances (chapter VII). Research has shown dynamical entangled quantum systems can exist in environments even when static entanglement is impossible (chapter VIII). This is crucial to life and critical to the miracles of evolution.

Fourth, even with the infinite computational power of quantum mechanics available to the organism, focusing this computational power is critical to leveraging it, just as interference is critical to Shor’s factoring algorithm (chapter V), and for that, life needs to control its own complexity (chapter III). The simplest description of the world is the correct one – the philosophical principle of Occam’s razor (chapter II). This principle forces DNA to keep the blueprint of the organism simple so that the genetic code is modularized, object-oriented, plug-and-play like. This gives the path integrals a fighting chance of finding a working adaptation to environmental stress. But the relationship is reciprocal: a simple description of the organism is equivalent to a more stable energy state – a key point derived from machine learning (chapter III). A key result of this is that mutations cannot be truly random; they have an element of quantum mechanical uncertainty for sure, but they must be very organized in nature, swapping out one module for another. And this is, indeed, what we see in experiments: organisms can change a few nucleotides, delete sections, insert sections, or even make gross genetic rearrangements to adapt to stress with minimal failures (chapter XII). All are allowed quantum transitions with various probabilities given by the mathematics of quantum mechanics of complex dynamical systems – as described in the solution to the quantum measurement problem (chapter VI). This high degree of ordered simplicity combined with quantum computational power is the secret of the miraculous leaps that occur in evolutionary pathways (chapter XI).

Last, this description of biological systems allows us to draw an analogy between some very personal, first person experiences and the fundamental quantum mechanical nature of the universe. For instance, “love” is naturally affiliated with “Oneness”, or becoming “One” with others – like quantum entanglement is to particles. “Understanding” is also a fundamental defining trait of the human experience, yet life has been utilizing this principle in DNA – manifest in its simplicity – from life’s very beginning. And “creativity”, something that we as humans take such pride in, appears as the result of the infinite quantum computational power of the universe at the level of basic particles. Creative capacity grows as organisms, and the entanglement therein, grow more complex – it doesn’t suddenly appear. In higher-level organisms the range of creativity extends from just the space of biomolecules and DNA to the external space of human endeavor (via the brain), but this is still all creativity nonetheless. A picture irresistibly emerges that these three traits, “love”, “understanding”, and “creativity”, aren’t random accidental traits selected for during “X billion years of evolution” at all, but defining characteristics of the quantum mechanical universe all the way from humans, to single-cell life, to sub-cellular life, to fundamental particles. It is a picture in which natural selection plays a role, but in which life is a cooperative, not a cutthroat competition. Indeed, the metaphor that life is the Universe trying to understand itself is apropos (chapter XII).

# What if the Miracle Behind Evolution is Quantum Mechanics?

(CC BY-NC 4.0)

I, Quantum

“…about forty years ago the Dutchman de Vries discovered that in the offspring even of thoroughly pure-bred stocks, a very small number of individuals, say two or three in tens of thousands, turn up with small but ‘jump-like’ changes, the expression ‘jump-like’ not meaning that the change is so very considerable, but that there is a discontinuity inasmuch as there are no intermediate forms between the unchanged and the few changed. De Vries called that a mutation. The significant fact is the discontinuity. It reminds a physicist of quantum theory – no intermediate energies occurring between two neighbouring energy levels. He would be inclined to call de Vries’s mutation theory, figuratively, the quantum theory of biology. We shall see later that this is much more than figurative. The mutations are actually due to quantum jumps in the gene molecule. But quantum theory was but two years old when de Vries first published his discovery, in 1902. Small wonder that it took another generation to discover the intimate connection!” – Erwin Schrödinger, ‘What is Life?‘ (1944)

Synopsis

# I. Miracles and Monsters

What is going on with life? It is utterly amazing all the things these plants and creatures of mother nature do! Their beauty! Their complexity! Their diversity! Their ability to sustain themselves! The symbiotic relationships! Where did it all come from? If evolution is the right idea, how does it work? We’re not talking about the little changes, the gradual changes proposed by Charles Darwin. We understand there is natural selection going on, like pepper colored moths and Darwin’s finches. We’re talking about the big changes – the evolutionary leaps apparently due to mutations affecting gene expression, a process known as saltation. How do these mutations know what will work – shouldn’t there be a bunch of failed abominations everywhere from the gene mutations that screwed up? Shouldn’t a mix up be far more likely than an improvement? Is it possible mutations are adaptive as Jean-Baptiste Lamarck, a predecessor of Darwin, originally proposed? That is, could it be that the environment, rather than random changes, is the primary driver of adaptation?

Imagine selecting architectural plans for a two-story house. Suppose we randomly pick from the existing set of millions of blueprints for the upstairs, separately pick the plans for the downstairs, and put them together. How often would we expect this house to be functional? The plumbing and electrical systems to work? Suppose we start with a blueprint for a house, then randomly select the plans for just the living room and swap them into the original. What are the chances this would produce a final blueprint that was workable? Seemingly very small, we should say! We should expect all these monstrous houses, with leaking plumbing, short-circuited electricity, windows looking out at walls, doorways to nowhere, and grotesque in style!

It turns out evolutionary biologists have been concerned with this problem for a long time. The geneticist Richard Goldschmidt coined the term “hopeful monster” in 1933 in reference to these abominations. Goldschmidt’s theory was received with skepticism. Biologists argued: if evolution did produce big changes in a species, then how would these mutants find a mate? For most of the 20th century Goldschmidt’s ideas were on the back burner; scientists were focused on gradualism as they uncovered many examples of gradual evolutionary changes in nature, supporting the natural selection hypothesis. But recent scientific results reveal the environment does, indeed, have a deep impact on the traits of offspring. The adaptations of embryos in experiments are an example:

“The past twenty years have vindicated Goldschmidt to some degree. With the discovery of the importance of regulatory genes, we realize that he was ahead of his time in focusing on the importance of a few genes controlling big changes in the organisms, not small-scales changes in the entire genome as neo-Darwinians thought. In addition, the hopeful monster problem is not so insurmountable after all. Embryology has shown that if you affect an entire population of developing embryos with a stress (such as a heat shock) it can cause many embryos to go through the same new pathway of embryonic development, and then they all become hopeful monsters when they reach reproductive age.” – Donald R. Prothero in his book Evolution: What the Fossils Say and Why it Matters (2007); via rationalwiki.org.

These discoveries prompted Evolutionary Biologist Olivia Judson to write a wonderful article “The Monster is Back, and it’s Hopeful.” (via Wikipedia) Still, we are left wondering: where are all the hopeless monsters? All the embryos either adapt to the stress or keep the status quo – there are no failures. Shouldn’t some suffer crippling mutations? Are epigenetic factors involved? And, perhaps most importantly, even with environmental feedback, how do organisms know how to adapt – i.e. how is the process of adaptation so successful?

The puzzle would not be complete, however, without also considering some amazing leaps that have occurred along the tree of life, for example, the mutations that led to the evolution of the eye. How does life figure out it can construct this extended, precisely shaped object – the eyeball – and set up the lens, the muscles to focus it, the photoreceptors and the visual cortex to make sense of the image? It seems like we would need a global plan, a blueprint of an eye, before we start construction! Not only that, but to figure it out independently at least fifty times over in different evolutionary branches? Or, how did cells make the leap from RNA to DNA, as is widely believed to be the case, in the early evolution of single-celled organisms? Evolutionary biologists puzzle that to make that leap life would need to know the DNA solution would work before it tried it. How could life be so bold – messing with the basic gene structure would seem fraught with danger – and how could life know? And, don’t forget, perhaps the most amazing leap of all: where does this amazing human intelligence come from? We humans, who are probing the origins of the Universe, inventing or discovering mathematics, building quantum computers and artificial intelligence, and seeking to understand our very own origin – however it may have happened – how did WE come to be?

To frame the problem, let’s talk classical statistics for a second and consider the following situation: suppose we have 100 buckets into which we close our eyes and randomly toss 100 ping pong balls. Any that miss we toss again. When we open our eyes, what distribution should we expect? All in a single cup? Probably not. Scattered over many cups with some cups holding more balls than others? Probably something like that. If we repeat this experiment zillions of times, however, sooner or later we will find one instance with them all in the same bucket. Is this a miracle? No, of course not. Once in a while amazingly unlikely things do happen. If we tossed the balls repeatedly and each time all landed in the same bucket, now that would feel like a miracle! That’s what’s weird about life – the miracles seem to keep happening again and again along the evolutionary tree. The ping pong balls appear to bounce lucky for Mother Nature!
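The bucket experiment above is easy to simulate. A minimal sketch in Python (the counts and random seed are illustrative):

```python
import random

def toss_balls(n_balls=100, n_buckets=100):
    """Toss each ball into a uniformly random bucket; return bucket counts."""
    counts = [0] * n_buckets
    for _ in range(n_balls):
        counts[random.randrange(n_buckets)] += 1
    return counts

random.seed(0)
counts = toss_balls()
occupied = sum(1 for c in counts if c > 0)
# Typically around 63 of the 100 buckets end up occupied (about 1 - 1/e of
# them), while the chance of ALL 100 balls landing in one bucket in a single
# trial is 100 * (1/100)**100 -- effectively zero.
```

A single all-in-one-bucket outcome is astronomically rare, which is why its repeated occurrence would feel miraculous.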

# II. Occam’s Fat-Shattering Razor

The Intelligent Design folks ardently point out the miraculous nature of life despite being labeled as pseudoscientists by the scientific community at large. However, no one can deny that the amazing order we see in biological systems has the feel of some sort of intelligent design, scientifically true or not. The trouble is that these folks postulate an Intelligent Designer is behind all these miracles. In fact, it is possible that they are correct, but there is a problem with this kind of hypothesis: it can be used to explain anything! If we ask “how did plants come to use photosynthesis as a source of energy?” we answer: “the Designer designed it that way”. And, if we ask “how did the eye come to exist in so many animal species?”, again, we can only get “the Designer designed it that way”. The essential problem is that this class of hypotheses has infinite complexity.

“It may seem natural to think that, to understand a complex system, one must construct a model incorporating everything that one knows about the system. However sensible this procedure may seem, in biology it has repeatedly turned out to be a sterile exercise. There are two snags with it. The first is that one finishes up with a model so complicated that one cannot understand it: the point of a model is to simplify, not to confuse. The second is that if one constructs a sufficiently complex model one can make it do anything one likes by fiddling with the parameters: a model that can predict anything predicts nothing.” – John Maynard Smith and Eörs Szathmáry (Hat tip Gregory Chaitin)

The field of learning theory forms the foundation of machine learning. It contains the secret sauce behind many of the amazing artificial intelligence applications today. This list includes achieving image recognition on par with humans, self-driving cars, the Jeopardy! champion Watson, and the amazing 9-dan Go program AlphaGo [see Figure 2]. These achievements shocked people all over the world – showing how far and how fast artificial intelligence had advanced. Half of this secret sauce is a sound mathematical understanding of complexity in computer models (a.k.a. hypotheses) and how to measure it. In effect, learning theory has quantified the philosophical principle of Occam’s razor, which says that the simplest explanation is the correct one – we can now measure the complexity of explanations. Early discoveries in the 1970’s produced the concept of the VC dimension (also known as the “fat-shattering” dimension) named for its discoverers, Vladimir Vapnik and Alexey Chervonenkis. This property of a hypothesis class measures the number of observations that it is guaranteed to be able to explain. Recall that a polynomial with, say, 11 parameters, such as:

$P(x)=c_0+c_1x^1+c_2x^2+c_3x^3+c_4x^4+c_5x^5+c_6x^6+c_7x^7+c_8x^8+c_9x^9+c_{10}x^{10}$

can be fit to any 11 data points [see Figure 1]. This function is said to have a VC dimension of 11. Don’t expect this function to find any underlying patterns in the data though! When a function with this level of complexity is fit to an equal number of data points it is likely to over-fit. The key to having a hypothesis generalize well, that is, make predictions that are likely to be correct, is having it explain a much greater number of observations than its complexity.
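To make this concrete, here is a minimal sketch (the data and seed are invented for illustration) contrasting an 11-parameter polynomial fit with a 2-parameter linear fit on 11 roughly linear, noisy points:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 11)
y = 2 * x + rng.normal(0, 0.1, size=11)   # roughly linear data plus noise

p10 = np.polyfit(x, y, 10)  # 11 parameters: interpolates all 11 points
p1 = np.polyfit(x, y, 1)    # 2 parameters: cannot match every point

train_err_10 = np.max(np.abs(np.polyval(p10, x) - y))  # essentially zero
# Extrapolating just beyond the data, the overfit polynomial typically
# misses the true line (y = 2x) far worse than the simple linear fit does.
err_10 = abs(np.polyval(p10, 1.2) - 2.4)
err_1 = abs(np.polyval(p1, 1.2) - 2.4)
```

A perfect in-sample fit with poor out-of-sample behavior is exactly what a hypothesis class with VC dimension equal to the sample size invites.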

Figure 1: Noisy (roughly linear) data is fitted to both linear and polynomial functions. Although the polynomial function is a perfect fit, the linear version can be expected to generalize better. In other words, if the two functions were used to extrapolate the data beyond the fit data, the linear function would make better predictions. Image and caption b

Nowadays measures of complexity have become much more acute: the technique of margin-maximization in support vector machines, regularization in neural networks and others have had the effect of reducing the effective explanatory power of a hypothesis class, thereby limiting its complexity, and causing the model to make better predictions. Still, the principle is the same: the key to a hypothesis making accurate predictions is managing its complexity relative to the number of observations it explains. This principle applies whether we are trying to learn how to recognize handwritten digits, how to recognize faces, how to play Go, how to drive a car, or how to identify “beautiful” works of art. Further, it applies to all mathematical models that learn inductively, that is, via examples, whether machine or biological. When a model fits the data with a reasonable complexity relative to the number of observations then we are confident it will generalize well. The model has come to “understand” the data in a sense.

Figure 2: The game of Go. The AI application AlphaGo defeated one of the best human Go players, Lee Sedol, 4 games to 1 in March 2016. Image by Goban1 via Wikimedia Commons.

The hypothesis of Intelligent Design, simply put, has infinite VC dimension, and, therefore can be expected to have no predictive power, and that is what we see – unless, of course, we can query the Designer! But, before we jump on Darwin’s bandwagon we need to face a very grim fact: the hypothesis class characterized by “we must have learned that during X billion years of evolution” also has the capacity to explain just about anything! Just think of the zillions of times this has been referenced, almost axiom-like, in the journals of scientific research!

# III. Complexity is the Key – In Machine Learning and DNA

As early as 1945 a computational device known as a neural network (a.k.a. a multi-layered perceptron network) was invented. It was patterned after the networks formed by neuron cells in animal brains [see figure 3]. In 1975 a technique called backpropagation was developed that significantly advanced the learning capability of these networks. They were “trained” on a sample of input data (observations), then could be used to make predictions about future and/or out-of-sample data.

Neurons in the first layer were connected by “synaptic weights” to the data inputs. The inputs could be any number of things, e.g. one pixel in an image, the status of a square on a chessboard, or financial data of a company. These neurons would multiply the input values by the synaptic weights and sum them. If the sum exceeded some threshold value the neuron would fire and take on a value of 1 for neurons in the second layer, otherwise it would not fire and produce a value of 0. Neurons in the second layer were connected to the first via another set of synaptic weights and would fire by the same rules, and so on to the 3rd, 4th, layers etc. until culminating in an output layer. Training examples were fed to the model one at a time. The network’s outputs were compared against the known results to evaluate errors. These were used to adjust the weights in the network via the aforementioned backpropagation technique: weights that contributed to the error were reduced while weights contributing to a correct answer were increased. With each example, the network followed the error gradient downhill (gradient descent). The training stopped when no further improvements were made.
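The forward pass just described can be sketched in a few lines (the weights, inputs, and thresholds below are made-up illustrations, not a trained network):

```python
import numpy as np

def layer(inputs, weights, thresholds):
    """One layer of threshold neurons: weighted sum, fire (1) if above threshold."""
    sums = weights @ inputs          # each row of `weights` is one neuron
    return (sums > thresholds).astype(int)

x = np.array([0.9, 0.1, 0.4])                      # example inputs (e.g. pixels)
w1 = np.array([[0.5, -0.2, 0.1],                   # synaptic weights, layer 1
               [0.3,  0.8, -0.5]])
h = layer(x, w1, thresholds=np.array([0.2, 0.1]))  # hidden layer activations
w2 = np.array([[1.0, 1.0]])                        # synaptic weights, layer 2
out = layer(h, w2, thresholds=np.array([1.5]))     # fires only if both hidden fire
```

Backpropagation then adjusts `w1` and `w2` after each training example, nudging weights that contributed to errors downward and weights that contributed to correct answers upward.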

Figure 3: A hypothetical neural network with an input layer, 1 hidden layer, and an output layer, by Glosser.ca (CC BY-SA) via Wikimedia Commons.

Neural Networks exploded onto the scene in the 1980’s and stunned us with how well they would learn. More than that, they had a “life-like” feel as we could watch the network improve with each additional training sample, then become stuck for several iterations. Suddenly the proverbial “lightbulb would go on” and the network would begin improving again. We could literally watch the weights change as the network learned. In 1984 the movie “The Terminator” was released featuring a fearsome and intelligent cyborg character, played by Arnold Schwarzenegger, with a neural network for a brain. It was sent back from the future where a computerized defense network, Skynet, had “got smart” and virtually annihilated all humanity!

The hysteria did not last, however. The trouble was that while neural networks did well on certain problems, on others they failed miserably. Also, they would converge to a locally optimal solution but often not a global one. There they would remain stuck only with random perturbations as a way out – a generally hopeless proposition in a difficult problem. Even when they did well learning the in-sample training set data, they would sometimes generalize poorly. It was not understood why neural nets succeeded at times and failed at others.

In the 1990’s significant progress was made understanding the mathematics of the model complexity of neural networks and other computer models and the field of learning theory really emerged. It was realized that most of the challenging problems were highly non-linear, having many minima, and any gradient descent type approach would be vulnerable to becoming stuck in one. So, a new kind of computer model was developed called the support vector machine. This model rendered the learning problem as a convex optimization problem – so that it had only one minimum and a globally optimal result could always be found. There were two keys to the support vector machine’s success: first it did something called margin-maximization which reduced overfitting, and, second, it allowed computer scientists to use their familiarity with the problem to choose an appropriate kernel – a function which mapped the data from the input feature space into a smooth, convex space. Like a smooth bowl-shaped valley, one could follow the gradient downhill to a global solution. It was a way of introducing domain knowledge into the model to reduce the amount of twisting and turning the machine had to do to fit the data. Bayesian techniques offered a similar helping hand by allowing their designers to incorporate a “guess”, called the prior, of what the model parameters might look like. If the machine only needed to tweak this guess a little bit to come up with a posterior, the model could be interpreted as a simple correction to the prior. If it had to make large changes, that was a complex model, and would negatively impact expected generalization ability – in a quantifiable way. This latter point was the second half of the secret sauce of machine learning – allowing clever people to incorporate as much domain knowledge as possible into the problem so the learning task was rendered as simple as possible for the machine.
Simpler tasks required less contortion on the part of the machine and resulted in models with lower complexity. SVMs, as they became known, along with Bayesian approaches, were all the rage and quickly established machine learning records for predictive accuracy on standard datasets. Indeed, the mantra of machine learning was: “have the computer solve the simplest problem possible”.
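The kernel idea can be illustrated with a toy feature map (the data are invented; real SVMs typically use implicit kernels such as the RBF rather than an explicit map, but the principle is the same):

```python
import numpy as np

# 1-D data: class "blue" clusters near the origin, class "red" lies farther out.
x = np.array([-3.0, -2.5, -0.5, 0.0, 0.4, 2.8, 3.1])
y = np.array([ 1,    1,    0,   0,   0,   1,   1  ])  # 1 = red, 0 = blue

# No single threshold on x separates the classes (red sits on BOTH sides of
# blue), but the feature map phi(x) = x**2 makes one threshold sufficient.
phi = x ** 2
threshold = 2.0
pred = (phi > threshold).astype(int)
```

In the mapped space the problem becomes linearly separable, so a convex optimizer can find the globally best separating boundary.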

Figure 4: A kernel maps data from an input space, where it is difficult to find a function that correctly classifies the red and blue dots, to a feature space where they are easily separable – from StackOverflow.com.

It would not take long before the science of controlling complexity set in with the neural net folks – and the success in learning that came with it. They took the complexity concepts back to the drawing board with neural networks and came out with a new and greatly improved model called a convolutional neural network. It was like the earlier neural nets but had specialized kinds of hidden layers known as convolutional and pooling layers (among others). Convolutional layers significantly reduced the complexity of the network by limiting neurons’ connectivity to only a nearby region of inputs, called the “receptive field”, while also capturing symmetries in data – like translational invariance. For example, a vertical line in the upper right-hand corner of the visual field is still a vertical line if it lies in the lower left corner. The pooling layer neurons could perform functions like “max pooling” on their receptive fields. They simplified the network in the sense that they would only pass along the most likely result downstream to subsequent layers. For example, if one neuron fires weakly, indicating a possible vertical line, but another neuron fires strongly, indicating a definite corner, then only the latter information is passed on to the next layer of the network [see Figure 5].
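A minimal sketch of the max-pooling operation described above (the activation values are illustrative):

```python
import numpy as np

def max_pool_2x2(a):
    """2x2 max pooling with stride 2: keep the strongest activation per patch."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

activations = np.array([[1, 1, 2, 4],
                        [5, 6, 7, 8],
                        [3, 2, 1, 0],
                        [1, 2, 3, 4]])
pooled = max_pool_2x2(activations)
# pooled == [[6, 8],
#            [3, 4]]
```

Each 2x2 patch is collapsed to its strongest response, shrinking the layer fourfold while preserving the most salient features.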

Figure 5: Illustration of the function of max pooling neurons in a pooling layer of a convolutional neural network. By Aphex34 [CC BY-SA 4.0] via Wikimedia Commons

The idea for this structure came from studies of the visual cortex of cats and monkeys. Accordingly, convolutional neural networks were extremely successful at enabling machines to recognize images. They quickly established many records on standardized datasets for image recognition and to this day continue to be the dominant model of choice for this kind of task. Computer vision is on par with human object recognition ability when the human subject is given a limited amount of time to recognize the image. A mystery that was never solved: how did the visual cortex figure out its own structure?

Interestingly, however, when it comes to more difficult images, humans can perform something called top-down reasoning which computers cannot replicate. Sometimes humans will look at an image, not recognize it immediately, then start using a confluence of contextual information and more to think about what the image might be. When ample time is given for humans to exploit this capability we exhibit superior image recognition capability. Just think back to the last time we were requested to type in a string of disguised characters to validate that we were, indeed, human! This is the basis for CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans Apart. [see Figure 6].

Figure 6: An example of a reCAPTCHA challenge from 2007, containing the words “following finding”. The waviness and horizontal stroke were added to increase the difficulty of breaking the CAPTCHA with a computer program. Image and caption by B Maurer at Wikipedia

While machine learning was focused on quantifying and managing the complexity of models for learning, the dual concept of the Kolmogorov complexity had already been developed in 1965 in the field of information theory. The idea was to find the shortest description possible of a string of data. So, if we generate a random number by selecting digits at random without end, we might get something like this:

5.5491358345873343033746153451739534623797736331289287936846590704…

and so on to infinity. An infinite string of digits generated in this manner cannot be abbreviated. That is, there is no simpler description of the number than an infinitely long string. The number is said to have infinite Kolmogorov complexity, and is analogous to a machine learning model with infinite VC dimension. On the other hand, another similar-looking number, $\pi$, extends out to infinity:

3.14159265358979323846264338327950288419716939937510582097494459230…

never terminating and never repeating, yet it can be expressed in a much more compact form. For example, we can write a very simple program to approximate $\pi$ to arbitrary accuracy using the Madhava-Leibniz series (from Wikipedia):

$\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots$
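A minimal Python sketch of such a program (the number of terms is an arbitrary choice; the series converges slowly, with error on the order of one over the number of terms):

```python
# Approximate pi with the Madhava-Leibniz series: pi = 4 * sum((-1)^k / (2k+1))
def leibniz_pi(n_terms: int) -> float:
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1_000_000))  # converges (slowly) toward 3.14159...
```

A few lines of code stand in for infinitely many digits, which is exactly the sense in which $\pi$ has a tiny description length.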

So, $\pi$ has a very small Kolmogorov complexity, or minimum description length (MDL). This example illustrates the abstract, far-from-obvious nature of complexity. But it also illustrates a point about understanding: when we understand something, we can describe it in simple terms. We can break it down. The formula, while very compact, acts as a blueprint for constructing a meaningful, infinitely long number. Mathematicians understand $\pi$. Similar examples of massive data compression abound, and some, like the Mandelbrot set, may seem biologically inspired [see Figure 7].

Figure 7: This image illustrates part of the Mandelbrot set (fractal). Simply storing the 24-bit color of each pixel in this image would require 1.62 million bits, but a small computer program can reproduce these 1.62 million bits using the definition of the Mandelbrot set and the coordinates of the corners of the image. Thus, the Kolmogorov complexity of the raw file encoding this bitmap is much less than 1.62 million bits in any pragmatic model of computation. Image and caption By Reguiieee via Wikimedia Commons

Perhaps life, though, has managed the ultimate demonstration of MDL – the DNA molecule itself! Indeed, this molecule, some 3 billion nucleotides (C, A, G, or T) long in humans, encodes an organism of some 3 billion-billion-billion ($3 \times 10^{27}$) amino acids. A compression of about a billion-billion to one ($1 \times 10^{18} \colon 1$). Even including possible epigenetic factors as sources of additional blueprint information (epigenetic tags are thought to affect about 1% of genes in mammals), the amount of compression is mind-boggling. John von Neumann pioneered an algorithmic view of DNA like this in 1948 in his work on cellular automata. Biologists know, for instance, that the nucleotide sequences “TAG”, “TAA”, and “TGA” act as stop codons (hat tip Douglas Hofstadter in Gödel, Escher, Bach: An Eternal Golden Braid) in DNA and signal the end of a protein sequence. More recently, the field of Evolutionary Developmental Biology (a.k.a. evo-devo) has encouraged this view:

The field is characterized by some key concepts, which took biologists by surprise. One is deep homology, the finding that dissimilar organs such as the eyes of insects, vertebrates and cephalopod mollusks, long thought to have evolved separately, are controlled by similar genes such as pax-6, from the evo-devo gene toolkit. These genes are ancient, being highly conserved among phyla; they generate the patterns in time and space which shape the embryo, and ultimately form the body plan of the organism. Another is that species do not differ much in their structural genes, such as those coding for enzymes; what does differ is the way that gene expression is regulated by the toolkit genes. These genes are reused, unchanged, many times in different parts of the embryo and at different stages of development, forming a complex cascade of control, switching other regulatory genes as well as structural genes on and off in a precise pattern. This multiple pleiotropic reuse explains why these genes are highly conserved, as any change would have many adverse consequences which natural selection would oppose.

New morphological features and ultimately new species are produced by variations in the toolkit, either when genes are expressed in a new pattern, or when toolkit genes acquire additional functions. Another possibility is the Neo-Lamarckian theory that epigenetic changes are later consolidated at gene level, something that may have been important early in the history of multicellular life.” – from Wikipedia

Inspired by von Neumann and the developments of evo-devo, Gregory Chaitin in 2010 published a paper entitled “To a Mathematical Theory of Evolution and Biological Creativity“. Chaitin characterized DNA as a software program. He built a toy model of evolution in which computer algorithms tackle the busy beaver problem of mathematics: getting a program to name the biggest integer it can. Like children competitively yelling out larger and larger numbers: “I’m a million times stronger than you! Well, I’m a billion times stronger. No, I’m a billion, billion times. That’s nothing, I’m a billion to the billionth power times stronger!” – we get the idea. Simple as that. The program has no concept of infinity, so that’s off limits. A subroutine randomly “mutates” the code at each generation. If the mutated code computes a bigger integer, it becomes the de facto code; otherwise it is thrown out (natural selection). Often the mutated code just doesn’t work, or it enters a loop that never halts, so an oracle is needed to supervise the development of the fledgling algorithms. It is a very interesting first look at DNA as an ancient programming language and an evolving algorithm. See his book, Proving Darwin: Making Biology Mathematical, for more.
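The mutate-and-select loop can be caricatured in a few lines. To be clear, this sketch is not Chaitin's actual construction – the “programs” here are just lists of small integers with a fixed interpreter, so nothing can fail to halt and no oracle is needed:

```python
import random

# Toy mutate-and-select loop (not Chaitin's model): a "genome" is a list
# of small integers, and its "output" is a number built from them.
def run(genome):
    # Interpret the genome by repeated multiply-and-add, a stand-in for
    # a program trying to name a large integer.
    value = 1
    for gene in genome:
        value = value * (gene + 1) + gene
    return value

def evolve(steps=200, length=8, seed=0):
    rng = random.Random(seed)
    genome = [0] * length
    best = run(genome)
    for _ in range(steps):
        mutant = genome[:]
        mutant[rng.randrange(length)] = rng.randrange(10)
        score = run(mutant)
        if score > best:          # natural selection: keep only improvements
            genome, best = mutant, score
    return best

print(evolve())  # far larger than run([0] * 8), the unevolved output
```

Random mutation plus a strict fitness test is enough to climb toward ever-bigger outputs, which is the skeleton of Chaitin's argument.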

Figure 8: Graph of variation in estimated genome sizes in base pairs (bp). Graph and caption by Abizar at Wikipedia

One thing is for certain: the incredible compactness of the DNA molecule implies it has learned an enormous amount of information about the construction of biological organisms. Physicist Richard Feynman famously said, “What I cannot create, I do not understand.” Inferring from Feynman: since DNA can create life (maybe “build” is a better word), it therefore understands it. This is certainly part of the miracle of biological evolution – understanding the impact of genetic changes on the organism. The simple description of the organism embedded in DNA allows life to predictably estimate the consequences of genetic changes – it is the key to generalizing well. It is why adaptive mutations are so successful. It is why the hopeless monsters are missing! When embryos adapt to stress so successfully, it is because life knows what it is doing. The information is embedded in the genetic code!

Figure 9: Video of an Octopus camouflaging itself. A dramatic demonstration of how DNA understands how to build organisms – it gives the Octopus this amazing toolkit! Turns out it has an MDL of only 3 basic textures and the chromatophores come in only 3 basic colors! – by SciFri with marine biologist Roger Hanlon

In terms of house blueprints, it means life is so well ordered that living “houses” are all modular. The rooms have such symmetry to them that the plumbing always goes in the same corner, the electrical wiring always lines up, the windows and doors work, even though the “houses” are incredibly complex! You can swap out the upstairs, replace it with the plans from another and everything will work. Change living rooms if you want, it will all work, total plug-and-play modular design. It is all because of this remarkably organized, simple MDL blueprint.

The trouble is: how did this understanding come to be in the first place? And, even understanding what mutations might successfully lead to adaptation to a stress, how does life initiate and coordinate the change among the billions of impacted molecules throughout the organism? Half of the secret sauce of machine learning was quantifying complexity and the other half was allowing creative intelligent beings, such as ourselves, to inject our domain knowledge into the learning algorithm. DNA should have no such benefit, or should it? Not only that, but recent evidence suggests the role of epigenetic factors, such as methylation of DNA, is significant in heredity. How does DNA understand the impact of methylation? Where is this information stored? Seemingly not in the DNA, but if not, then where?

# IV. The Protein Folding Problem

“Perhaps the most remarkable features of the molecule are its complexity and its lack of symmetry. The arrangement seems to be almost totally lacking in the kind of regularities which one instinctively anticipates, and it is more complicated than has been predicted by any theory of protein structure. Though the detailed principles of construction do not yet emerge, we may hope that they will do so at a later stage of the analysis.” – John Kendrew et al. upon seeing the structure of the protein myoglobin, resolved by X-ray crystallography for the first time, via “The Protein Folding Problem, 50 Years On” by Ken Dill

DNA exists in every cell of every living organism. Not only is it some 3 billion nucleotides long, but it encodes 33,000 genes which express over 1 million proteins. There are several kinds of processes that copy the nucleotide sequences in DNA:

1.) DNA is replicated into additional DNA for cell division (mitosis)

2.) DNA is transcribed into RNA for transport outside the nucleus

3.) RNA is translated into protein molecules in the cytoplasm of the cell – by NobelPrize.org

Furthermore, RNA does not only play a role in protein synthesis. Many types of RNA are catalytic – they act like enzymes to help reactions proceed faster. Also, many other types of RNA play complex regulatory roles in cells (see this for more: the central dogma of molecular biology).
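Steps 2 and 3 of the list above can be caricatured in a few lines. The toy codon table below contains only a handful of entries standing in for the real 64-codon genetic code; the three stop codons are the ones named earlier:

```python
# A minimal sketch of transcription and translation, using a tiny toy
# codon table (real cells use the full 64-codon genetic code).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "UAG": "STOP", "UAA": "STOP", "UGA": "STOP",  # the three stop codons
}

def transcribe(dna: str) -> str:
    """DNA coding strand -> messenger RNA (T is replaced by U)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list:
    """Read codons (triplets) in order until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate(transcribe("ATGTTTGGCTAG")))  # ['Met', 'Phe', 'Gly']
```

The stop codon “TAG” at the end of the toy gene is what terminates the protein chain, exactly as described in the DNA discussion above.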

Genes act as recipes for protein molecules. Proteins are long chains of amino acids that become biologically active only after they fold. While often depicted as messy squiggly strands lacking any symmetry, they ultimately fold very specifically into beautifully organized highly complex 3-dimensional shapes such as micro pumps, bi-pedaled walkers called kinesins, whip-like flagella that propel the cell, enzymes and other micro-machinery. The proteins that are created ultimately determine the function of the cell.

Figure 10: This TEDx video by Ken Dill gives an excellent introduction to the protein folding problem and shows the amazing dynamical forms these proteins take.

The protein folding problem has been one of the great puzzles in science for 50 years. The questions it poses are:

1. “How does the amino acid sequence influence the folding to form a 3-D structure?
2. There are a nearly infinite number of ways a protein can fold, how can proteins fold to the correct structure so fast (nanoseconds for some)?
3. Can we simulate proteins with computers?”
– from The Protein-Folding Problem, 50 Years On by Ken Dill

Nowadays scientists understand a great number of proteins, but several questions remain unanswered. For example, Anfinsen’s dogma is the postulate that the amino acid sequence alone determines the folded structure of the protein – we do not know if this is true. We also know that molecular chaperones help other proteins to fold, but are thought not to influence the protein’s final folded structure. We can produce computer simulations of how proteins fold. However, this is only possible in special cases of simple proteins where there is an energy gradient leading the protein downhill to a global configuration of minimal energy [see figure 11]. Even in these cases, the simulations do not accurately predict protein stabilities or thermodynamic properties.

Figure 11: This graph shows the energy landscape for some proteins. When the landscape is reasonably smoothly downhill like this, protein folding can be simulated. Graph By Thomas Splettstoesser (www.scistyle.com) via Wikimedia Commons

Figure 12: A TED Video (short) by David Bolinsky showing the complexity of the protein micro-machinery working away inside the cell. Despite all this complexity, organization, and beauty, little is understood about how proteins fold to form these amazing machines.

Protein folding generally happens in a fraction of a second (nanoseconds in some cases), which is mind boggling given the number of ways it could fold. This is known as Levinthal’s paradox, posited in 1969:

“To put this in perspective, a relatively small protein of only 100 amino acids can take some $10^{100}$ different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.” – Technology Review.com, “Physicists discover quantum law of protein folding”

The Arrhenius equation is used to estimate chemical reaction rates as a function of temperature. It turns out that applying this equation to protein folding misses badly. In 2011, L. Luo and J. Lu published a paper entitled “Temperature Dependence of Protein Folding Deduced from Quantum Transition“. They show that quantum mechanics can be used to correctly predict the temperature dependence of protein folding rates (hat tip chemistry.stackexchange.com). Further, globular proteins (not the structural or enzymatic kind) are known to be marginally stable, meaning that there is very little energy difference between the folded, native state and the unfolded state. This kind of energy landscape may open the door to a host of quantum properties.
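For reference, the Arrhenius rate law $k = A\,e^{-E_a/RT}$ is easy to evaluate. The prefactor $A$ and activation energy $E_a$ below are illustrative round numbers, not values fitted to any real protein:

```python
import math

# The Arrhenius equation: k = A * exp(-Ea / (R * T)).
R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A: float, Ea: float, T: float) -> float:
    return A * math.exp(-Ea / (R * T))

# Rates rise steeply with temperature for a fixed activation barrier:
k_300 = arrhenius_rate(A=1e13, Ea=50_000, T=300.0)
k_310 = arrhenius_rate(A=1e13, Ea=50_000, T=310.0)
print(k_310 / k_300)  # roughly a 2x speedup for a 10 K increase
```

This simple exponential temperature dependence is what Luo and Lu found does not match folding-rate data, motivating their quantum-transition treatment.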

# V. The Nature of Quantum Mechanics – Infinite, Non-Local, Computing Capacity

“It is impossible that the same thing belong and not belong to the same thing at the same time and in the same respect.”; “No one can believe that the same thing can (at the same time) be and not be.”; “The most certain of all basic principles is that contradictory propositions are not true simultaneously.” – Aristotle’s Law of Non-Contradiction, “Metaphysics (circa 350 B.C.) Via Wikipedia

Max Planck in 1900, in order to solve the blackbody radiation problem, and Albert Einstein in 1905, to explain the photoelectric effect, postulated that light itself was made of individual “energy quanta” and so began the theory of quantum mechanics. In the early 20th century many titans of physics would contribute to this strange theory, but a rare, rather intuitive, discovery occurred in 1948 when Richard Feynman invented a tool called the path integral. When physicists wanted to calculate the probability that, say, an electron, travels from A to B they used the path integral. The path integral appears as a complex exponential function like $e^{-i\Phi(x)}$ in physics equations, but this can be conceptually understood simply as a two-dimensional wave because:

$e^{-i\Phi(x)}=\cos\Phi(x)-i\sin\Phi(x)$

The real component represents one direction (e.g. horizontal-axis), while the other, “imaginary”, component another (e.g. vertical-axis). This complex function in the path integral, and in quantum mechanics in general, just means the wave is two-dimensional, not one. Think of a rope with one person holding each end. A vertical flick by one person sends a vertical wave propagating along the rope toward the other – this is not the path integral of quantum mechanics. Neither is a horizontal flick. Instead, imagine making a twisting flick, both vertical and horizontal. A corkscrew shaped wave propagates down the rope. This two-dimensional wave captures the nature of quantum mechanics and the path integral, but the wave is not known to be something physical like the wave on the rope. It is, rather, a wave of probability (a.k.a. a quantum wave function).

Figure 13: The titans of quantum physics -1927 Solvay Conference on Quantum Mechanics by Benjamin Couprie via Wikimedia Commons.

The path integral formulation of quantum mechanics is mathematically equivalent to the Schrödinger equation – it’s just another way of formulating the same physics. The idea for the electron is to sum (integrate) over all the possible ways it can go from A to B, summing all the 2-D waves together (a.k.a. amplitudes). To get the right answer – the one that agrees with experiment – we must also consider very exotic paths. The tools that help us do this are Feynman diagrams which illustrate all the particle physics interactions allowed along the way. So, a wave propagates from A to B via every possible path it can take in space and time and at every point therein it considers all the allowed Feynman diagrams (great intro to Feynman diagrams here). The more vertices there are in the diagram the smaller that particular diagram’s contribution – each additional vertex adds a probability factor of about 1/137th. The frequency and wavelength of the waves change with the action (a function of the energy of the particle). At B, all the amplitudes from every path are summed, some interfering constructively, some destructively, and the resultant amplitude squared is the probability of the electron going from A to B. But, going from A to B is not the only thing that path integrals are good for. If we want to calculate the probability that A scatters off of B then interacts with C, or A emits or absorbs B, the cross-section of A interacting with D, or whatever, the path integral is the tool to do the calculation. For more information on path integrals see these introductory yet advanced gold-standard lectures by Feynman on Quantum Electro-Dynamics: part 1, 2, 3 and 4.

Figure 14: In this Feynman diagram, an electron and a positron annihilate, producing a photon (represented by the blue sine wave) that becomes a quark-antiquark pair, after which the antiquark radiates a gluon (represented by the green helix). Note: the arrows are not the direction of motion of the particle; they represent the flow of electric charge. Time always moves forward from left to right. Image and caption by Joel Holdsworth [GFDL, CC-BY-SA-3.0], via Wikimedia Commons

Path integrals apply to every photon of light, every particle, every atom, every molecule, every system of molecules, everywhere, all the time, in the observable universe. All the known forces of nature appear in the path integral, with the peculiar sometimes-exception of gravity. Constant, instantaneous, non-local, wave-like calculations of infinitely many possibilities interfering all at once is the nature of this universe when we look really closely at it. The computing power of even the tiniest subsets is infinite. So, when we fire a photon, an electron, or even bucky-balls (molecules of 60 carbon atoms!) at a two-slit interferometer, on the other side we will see an interference pattern. Even if fired one at a time, the universe will sum infinitely many amplitudes and a statistical pattern will slowly emerge that reveals the wave-like interference effects. The larger the projectile, the shorter its wavelength. The path integrals must still be summed over all the round-about paths, but the indirect ones tend to cancel out (destructively interfere), making the interference pattern much narrower. Hence, interference effects are undetectable in something as large as a baseball, but still theoretically there.
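The amplitude-summing recipe can be sketched for the two-slit case. This toy keeps only the two direct paths (not the full path integral), and all the geometry numbers are made up for illustration:

```python
import cmath

# Toy two-slit sum of amplitudes: each slit contributes a complex wave
# e^{i * 2*pi*d / wavelength}, where d is the path length from that slit
# to a point x on the screen a distance L away.
def intensity(x: float, slit_sep: float = 1.0, L: float = 100.0,
              wavelength: float = 0.05) -> float:
    amp = 0j
    for slit_y in (-slit_sep / 2, slit_sep / 2):
        d = (L ** 2 + (x - slit_y) ** 2) ** 0.5  # path length from slit
        amp += cmath.exp(2j * cmath.pi * d / wavelength)
    return abs(amp) ** 2  # probability ~ squared magnitude of the sum

# Central bright fringe: the two path lengths are equal, so the two
# amplitudes add fully constructively.
print(intensity(0.0))  # 4.0: twice the amplitude, four times the intensity
```

Summing complex amplitudes first and squaring last is the whole trick; squaring each slit's contribution separately would give 2, not 4, and no fringes at all.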

Figure 15: Results from the Double slit experiment: Pattern from a single slit vs. a double slit. By Jordgette [CC BY-SA 3.0] via Wikimedia Commons

Feynman was the first to see the enormous potential in tapping into the infinite computing power of the universe. He said, back in 1981:

“We can’t even hope to describe the state of a few hundred qubits in terms of classical bits. Might a computer that operates on qubits rather than bits (a quantum computer) be able to perform tasks that are beyond the capability of any conceivable classical computer?” – Richard Feynman [Hat tip John Preskill]

Quantum computers are here now and they do use qubits instead of bits. The difference is that, while a classical 5-bit computer can be in only one state at any given time, such as “01001”, a 5-qubit quantum computer can be in all possible 5-qubit states ($2^5$) at once: “00000”, “00001”, “00010”, “00011”, …, “11111”. Each state, k, has a coefficient, $\alpha_k$, that, when squared, indicates the probability the computer will be in that state when we measure it. A 270-qubit quantum computer can be in $2^{270}$ states at once – more than the number of atoms in the observable universe!

The key to unlocking the quantum computer's power involves two strange traits of quantum mechanics: quantum superposition and quantum entanglement. Each qubit can be placed into a superposition of states, so it can be both “0” and “1” at the same time. Then, it can be entangled with other qubits. When two or more qubits become entangled they act as “one system” of qubits. Two qubits can then be in four states at once, three qubits in eight, four qubits in 16 and so on. This is what enables the quantum computer to be in so many states at the same time. This letter from Schrödinger to Einstein in 1935 sums it up:

“Another way of expressing the peculiar situation is: the best possible knowledge of a whole does not necessarily include the best possible knowledge of its parts…I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought…” – Erwin Schrödinger, Proceedings of the Cambridge Philosophical Society, submitted Aug 14, 1935. [Hat tip to John Preskill]

We can imagine starting a 5-qubit system in the ground state, all qubits initialized to “0”. The computer is in the state “00000”, no different than a classical computer so far. With the first tick of the clock (less than a nanosecond), we can place the 1st qubit into a superposition of states, state 1 = “00000” and state 2 = “10000”, with coefficients $\alpha_1$ and $\alpha_2$ whose squares give the probability of finding the system in each state upon measurement. Now we have, in a sense, two computers operating at once. On the 2nd tick of the clock, we place the 2nd qubit into a superposition too. Now our computer is in four states at once: “00000”, “10000”, “01000”, and “11000” with amplitudes $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$, respectively. And so on. In a handful of nanoseconds our computer could be in thirty-two states at once. With more qubits to work with, there is no theoretical limit to how many states the quantum computer can be in at once. Other quantum operations allow us to entangle two or more qubits in any number of ways. For example, we can entangle qubit #1 and qubit #2 such that if qubit #1 has the value “0”, then qubit #2 must be “1”. Or, we can entangle qubits #3, #4, and #5 so that they must all have the same value: all zeros, “000”, or all ones, “111” (an entanglement known as a GHZ state). Once the qubits of the system are entangled, the states of the system can be made to interfere with each other, conceptually like the interference in the two-slit experiment. The right quantum algorithm of constructive and destructive interference unleashes the universe's infinite quantum computational power.
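The tick-by-tick walkthrough above can be sketched with a tiny statevector simulator in plain Python. The gate routines are hand-rolled for illustration (not any particular quantum library's API), and we build the 3-qubit GHZ state just mentioned, $(|000\rangle + |111\rangle)/\sqrt{2}$:

```python
import math

# Amplitudes of an n-qubit register are a list of 2^n complex numbers,
# indexed so that bit q of the index is the state of qubit q.
def apply_hadamard(state, qubit):
    s = 1 / math.sqrt(2)
    new = [0j] * len(state)
    for i, a in enumerate(state):
        j = i ^ (1 << qubit)            # index with this qubit flipped
        if (i >> qubit) & 1 == 0:       # H|0> = (|0> + |1>) / sqrt(2)
            new[i] += s * a
            new[j] += s * a
        else:                           # H|1> = (|0> - |1>) / sqrt(2)
            new[j] += s * a
            new[i] -= s * a
    return new

def apply_cnot(state, control, target):
    new = state[:]
    for i in range(len(state)):
        if (i >> control) & 1:          # flip target where control is 1
            j = i ^ (1 << target)
            if i < j:
                new[i], new[j] = state[j], state[i]
    return new

state = [0j] * 8
state[0] = 1 + 0j                       # start in |000>
state = apply_hadamard(state, 0)        # superpose the first qubit
state = apply_cnot(state, 0, 1)         # entangle the second with the first
state = apply_cnot(state, 1, 2)         # entangle the third with the second
print([abs(a) ** 2 for a in state])     # 50/50 between |000> and |111>
```

One Hadamard plus two CNOTs is all it takes: measuring any qubit now forces all three to agree, which is the GHZ correlation described above.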

In 1994 Peter Shor invented an algorithm, known as Shor’s algorithm (a tutorial is here), for factorizing integers on a quantum computer. Factorizing is a really hard problem, and that is exactly why it is used to encrypt substantially all of the information we send over the internet (RSA public-key cryptography). For example, factoring a 500-digit integer takes $10^{12}$ CPU years on a conventional computer – longer than the age of the universe. A quantum computer with the same clock speed (a reasonable assumption) would take two seconds! [Hat tip to John Preskill for the stats] Factoring of integers is in the class of problems known as NP, though, unlike the hardest problems in that class, it is not believed to be NP-complete. Still, the calculation time on a conventional computer is believed to grow super-polynomially in the number of digits of the integer, N (this is only a conjecture, not proven; see P=NP? for more). On a quantum computer, the calculation time grows only polynomially in the number of digits, proportional to $(\log N)^3$. That is a HUGE difference! It means, for instance, that quantum computers will trivially break all current public-key encryption schemes! All the traffic on the internet will be visible to anyone that has access to a quantum computer! And still, quantum algorithms and quantum computing are very much in their infancy. We have a long way to go before we understand and can harness the full potential of quantum computing power!
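Shor's speedup comes entirely from quantum period finding; the classical number theory wrapped around it can be sketched directly, with the period found here by brute force. N = 15 and a = 7 are the standard textbook demonstration values:

```python
import math

# The classical half of Shor's algorithm. The quantum computer's job is
# to find the period r of a^x mod N; here we brute-force it instead.
# Given r, gcd(a^(r/2) +/- 1, N) often reveals a nontrivial factor.
def find_period(a: int, N: int) -> int:
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = find_period(a, N)                 # 7^4 = 2401 = 1 (mod 15), so r = 4
factor = math.gcd(a ** (r // 2) - 1, N)
print(r, factor)                      # period 4, factor 3 (and 15 = 3 * 5)
```

Brute-forcing the period takes exponential time classically as N grows; the quantum Fourier transform in Shor's circuit finds it in polynomial time, which is where the entire advantage lives.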

Figure 16: Quantum subroutine for order finding in Shor’s algorithm by Bender2k14 [CC BY-SA 4.0], via Wikimedia Commons. Based on Figure 1 in Circuit for Shor’s algorithm using 2n+3 qubits by Stephane Beauregard.

There are many ways to implement a quantum computer. It is possible to make qubits out of electron-spins, so, say the spin is pointing up, that would represent a value of “1”, and down, a value of “0”. Electrons can never have any spin but either up or down, i.e. they’re quantized, but, they can exist in a superposition of both. They can also be entangled together. Other implementations involve photons, nuclear spins, configurations of atoms (called topological qubits), ion traps, and more. While there are many different approaches, and still a lot to learn, all of today’s approaches do have something in common: they try to isolate the qubits in a very cold (near absolute zero), dark, noiseless, vibration free, static environment. Nothing is allowed to interact with the qubits, nor are new qubits allowed to be added or removed during the quantum computation. We have a fraction of a second to finish the program and measure the qubits before decoherence sets in and all quantum information in the qubits is lost to the environment. Researchers are constantly trying to find more stable qubits that will resist decoherence for longer periods. Indeed, there is no criterion that says a quantum computer must be digital at all – it could be an analog style quantum computer and do away with qubits altogether.

IBM has a 5-qubit quantum computer online right now that anyone can access. They have online tutorials that teach how to use it too. The best way for us to develop an intuition for quantum mechanics is to get our hands dirty and write some quantum programs, called “quantum scores” – like a musical score. It really is not hard to learn, just counter-intuitive at first. Soon, intuition for this kind of programming will develop and it will feel natural.

Another company, D-Wave, is working on an alternative approach to quantum computing called quantum annealing. A quantum annealer does not allow us to write quantum programs; instead it is specifically designed to find global solutions (a global minimum) to specific kinds of mathematical optimization problems (here is a tutorial from D-Wave). This process takes advantage of yet another strange property of quantum mechanics called quantum tunneling. Quantum tunneling allows the computer to tunnel from one local minimum to another, in a superposition of many different paths at once, until a global minimum is found. While they do have a 1,000+ qubit commercial quantum annealer available, some physicists remain skeptical of D-Wave’s results.
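The kind of problem an annealer targets can be made concrete with a tiny QUBO (quadratic unconstrained binary optimization) instance, brute-forced here by enumeration. The couplings are made up for illustration; real annealers handle thousands of variables, where enumeration is hopeless and tunneling out of local minima matters:

```python
from itertools import product

# A QUBO energy: E(bits) = sum over (i, j) of Q[i, j] * bits[i] * bits[j].
# The annealer's job is to find the bit assignment of lowest energy.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}  # illustrative couplings

def energy(bits):
    return sum(w * bits[i] * bits[j] for (i, j), w in Q.items())

# Two variables -> only four candidate states; enumerate them all.
best = min(product([0, 1], repeat=2), key=energy)
print(best, energy(best))  # a single bit set; minimum energy -1.0
```

The diagonal terms reward turning each bit on while the positive off-diagonal term penalizes turning on both, so the global minimum sets exactly one bit, the kind of trade-off an annealer resolves physically.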

# VI. Solving the Quantum Measurement Problem – Pointers, Decoherence & Quantum Dynamics

Despite all the incredible practical success with quantum technology, there was still an incompleteness about the interpretation of quantum mechanics. The trouble had to do with reconciling the quantum world with the macroscopic classical world. It wasn’t just a matter of a different set of equations – logic itself was different. John Bell proved this when he published what became known as Bell’s inequality (1964). He came up with a simple equation, essentially:

N(A,~B)+N(B,~C) $\geq$ N(A,~C)

This video by Leonard Susskind explains it best – “the number of things in A and not B plus the number of things in B and not C is greater than or equal to the number of things in A and not C”. It’s easy to visualize with Venn diagrams and straightforward to prove mathematically, just like a theorem of set theory. It involves no physical assumptions, just pure mathematics. But it turns out quantum mechanics doesn’t obey it! (See also Hardy’s paradox (1992) for a really good brain teaser.)
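The set-theoretic version really is just counting, and a machine can check it exhaustively: as long as every object has definite (realist) memberships in A, B, and C, the inequality can never fail. A small brute-force verification:

```python
from itertools import product

# N(A, not B) + N(B, not C) >= N(A, not C) for any classical population,
# where each object has a definite (a, b, c) membership triple.
def holds(population):
    n_ab = sum(1 for (a, b, c) in population if a and not b)
    n_bc = sum(1 for (a, b, c) in population if b and not c)
    n_ac = sum(1 for (a, b, c) in population if a and not c)
    return n_ab + n_bc >= n_ac

# Each object is one of 8 membership patterns; try every population of 3.
patterns = list(product([False, True], repeat=3))
ok = all(holds(pop) for pop in product(patterns, repeat=3))
print(ok)  # True: no classical assignment violates the inequality
```

The per-object logic makes it obvious why: any object in A but not C is either in B (counted by the second term) or not (counted by the first). Quantum experiments violate the inequality precisely because the memberships are not all simultaneously definite.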

The trouble with quantum mechanics is that classical logic does not apply because the quantum world does not have the property of realism. Realism means that the things around us exist independently of whether we observe them. If there are mathematical sets A, B, and C those sets exist independent of the mathematician. In the quantum world, if we observe set A, it can change set B and C. The order that we observe the sets matters too. Realism means the proverbial tree that falls in the forest makes a sound whether we hear it or not. In the quantum world that’s not true. The tree exists in a superposition of states both making and not making a sound until someone, or something, observes it. This does not sound like a very plausible description of our practical experience though. From early on we all learn that nobody really disappears when we play “peek-a-boo”! It’s almost axiomatic. Realism does seem to be a property of the macroscopic universe. So, what gives?

The most common interpretation of quantum mechanics was the Copenhagen interpretation. It said that the wave function “collapses” upon measurement per the Born rule. It was successful in that it worked – we could accurately predict the results of a measurement. Still, this was a kind of band-aid on an otherwise elegant theory, and the idea of having two entirely different logical views of the world was unsettling. Some physicists dissented and argued that it was not the responsibility of physicists to interpret the world; it was enough to have the equations to make predictions. This paradox became known as the quantum measurement problem and remained one of the great unsolved mysteries of physics for the better part of a century. In the 1970s the theory of decoherence was developed. It helped physicists understand why it is hard to keep things entangled, in a superposition, but it didn’t solve the problem of how things transition to a definite state upon measurement – it only partially addressed the problem. In fact, many brilliant physicists gave up on the idea of one Universe – to them it would take an infinite spectrum of constantly branching parallel Universes to understand quantum mechanics. This is known as the many-worlds interpretation.

Figure 17: Excellent video introduction to quantum entanglement by Ron Garret entitled “The Quantum Conspiracy: What popularizers of QM Don’t Want You to Know“. Garret’s argument is that measurement “is” entanglement. We now understand entanglement is the first step in the measurement process, followed by asymptotic convergence to pointer states of the apparatus.

In 2013 A. Allahverdyan, R. Balian, and T. Nieuwenhuizen published a ground-breaking paper entitled “Understanding quantum measurement from the solution of dynamical models“. In this paper the authors showed that the measurement problem can be understood within the context of quantum statistical mechanics alone – pure quantum mechanics and statistics. No outside assumptions, no wave function collapse. All smooth, time-reversible, unitary evolution of the wave function. The authors show that when a particle interacts with a macroscopic measuring device, in this case an ideal Curie-Weiss magnet, it first entangles with the billion-billion-billion (~$10^{27}$) atoms in the device, momentarily creating a vast superposition of states. Then, two extreme cases are examined: first, if the coupling to the measuring device is much stronger than the coupling to the environment, the system cascades asymptotically to a pointer state of the device. This gives the appearance of wave-function collapse, but it is not that; it is a smooth convergence, maybe like a lens focusing light to a single point. This is the case when the number of atoms, which all have magnetic moments, in the measuring device is large. At first this seems a counter-intuitive result. One might expect the entanglement to keep spreading throughout and into the environment in an increasingly chaotic and complex way, but this does not happen. The mathematics proves it.

In the second extreme, when the coupling to the environment is much stronger – the case when the number of atoms in the measuring device is small – the system decoheres before entanglement can cascade to a pointer state, and so it fails to register a measurement.

The authors’ results are derived for a particle’s spin interacting with one particular measuring device, but they appear completely general. In other words, it may be that measurements in general, like the cloud chamber photos of particle physics or the images of molecular spectroscopy, are just asymptotic pointer states – no more wave-particle duality, just wave functions. Just more or less localized wave functions. It means the whole of the classical world may be an asymptotic state of the much more complex quantum world. Measurement happens often because pointer states are abundant, so the convergence gives the illusion of realism. And, in the vast majority of cases, this approximation works great. Definitely don’t stop playing “peek-a-boo”!

It may turn out that biological systems occupy a middle ground between these two extremes – many weak couplings but not so many strong ones. Lots of densely packed quantum states, but a distinct absence of pointers. In such a system, superpositions could be preserved for longer time scales, because the rate at which entanglement propagates through the system may equal the rate of decoherence. It may even be that no individual particle remains entangled, but that the quantum dynamics are described by a moving wave of entanglement – an entanglement envelope – followed by a wave of decoherence. A dynamic situation where entanglement is permanent, but always on the move.

# VII. Quantum Networks – Using Dynamics to Restore and Extend Entanglement

Quantum networks use a continual dynamical sequence of entanglement to teleport a quantum state for purposes of communication. It works like this: suppose A, B, C, & D are qubits and we entangle A with B in one location, and C with D in another (most laboratory quantum networks have used entangled photons from an EPR source as qubits). The two locations are 200 km apart. Suppose the farthest we can send B or C without losing their quantum information to decoherence is 100 km. So, we send B and C to a quantum repeater halfway in between. At the repeater station B and C are entangled (by performing a Bell state measurement, e.g. passing B and C through a partially transparent mirror). Instantaneously, A and D become entangled! Even if some decoherence sets in with B and C, when they interact at the repeater station full entanglement is restored. After that it does not matter what happens to B or C. They may remain entangled, be measured, or completely decohere – A and D will remain entangled 200 km apart! This process can be repeated with N quantum repeaters to connect arbitrarily distant locations and to continually restore entanglement. It can also be applied in a multi-party setting (3 or more network endpoints). We could potentially have a vast number of locations entangled together at a distance – a whole quantum internet! When we are ready to teleport a quantum state, $\left|\phi\right>$, (which could be any number of qubits) over the network, we entangle $\left|\phi\right>$ with A in the first location, and D instantaneously lands in a superposition of states at the second location – one of which is the state $\left|\phi\right>$! In a multi-party setting, every endpoint of the network receives the state $\left|\phi\right>$ instantaneously! Classical bits of information must then be sent from A to D to tell which branch of the superposition is the intended state. This classical communication prevents information from traveling faster than the speed of light – as required by Einstein‘s special theory of relativity.
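The entanglement-swapping step at the repeater can be checked numerically. Here is a minimal sketch with NumPy (the qubit labels and the choice of projecting B and C onto the $\left|\Phi^+\right>$ Bell state are illustrative assumptions, not details from any of the cited experiments): starting from Bell pairs A–B and C–D, a Bell-state measurement on B and C leaves A and D in a Bell state even though they never interacted.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# A entangled with B, C entangled with D; qubit ordering is (A, B, C, D)
psi = np.kron(bell, bell).reshape(2, 2, 2, 2)

# Bell-state measurement on B and C (the joint measurement at the repeater):
# project qubits B, C onto |Phi+> and keep the leftover state of A and D
bell_bc = bell.reshape(2, 2)
phi_ad = np.einsum('bc,abcd->ad', bell_bc.conj(), psi)
phi_ad /= np.linalg.norm(phi_ad)  # renormalise the post-measurement state

# A and D, which never interacted, are now themselves in the Bell state
print(np.allclose(phi_ad.flatten(), bell))  # True
```

The other three possible Bell outcomes at the repeater would leave A and D in one of the other Bell states, which is exactly why the classical message announcing the outcome is needed.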

Figure 18: A diagram of a quantum network from Centre for Quantum Computation & Communication Technology. EPR sources at either end are sources of entangled qubits where A&B and C&D are entangled. The joint measurement of B & C occurs at the quantum repeater in the middle entangling A & D at a distance.

Researchers further demonstrated experimentally that macroscopic atomic systems can be entangled (and a quantum network established) by transfer of light (the EM field) between the two systems (“Quantum teleportation between light and matter” – J. Sherson et al., 2006). In this case the atomic system was a spin-polarized gas sample of a thousand-billion ($10^{12}$) cesium atoms at room temperature and the distance over which they were entangled was about $\frac{1}{2}$ meter.

# VIII. Quantum Biology – Noisy, Warm, Dynamical Quantum Systems

Quantum Biology is a field that has come out of nowhere to be at the forefront of pioneering science. Just 20 years ago, virtually no one thought quantum mechanics had anything to do with biological organisms – on the scale of living things, quantum effects just didn’t seem to matter. Nowadays quantum effects seem to appear all over biological systems. The book “Life on the Edge: The Coming of Age of Quantum Biology” by J. McFadden and J. Al-Khalili (2014) is a New York Times bestseller and gives a great comprehensive introduction. Another, slightly more technical introduction is the paper “Quantum physics meets biology” by M. Arndt, T. Juffmann, and V. Vedral (2009), and, more recently, “Quantum biology” (2013) by N. Lambert et al. A summary of the major research follows:

Photosynthesis: Photosynthesis is probably the best studied quantum biological phenomenon. The FMO (Fenna-Matthews-Olson) complex of green-sulphur bacteria is a large complex, making it readily accessible. Light-harvesting antennae in plants and certain bacteria absorb photons, creating an electronic excitation. This excitation travels to a reaction center where it is converted to chemical energy. It is an amazing reaction, achieving near 100% efficiency – nearly every single photon makes its way to the reaction center with virtually no energy wasted as heat. It is also ultrafast, taking only about 100 femtoseconds. Quantum coherence was observed for the first time in “Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems” by Engel et al. (2007). The energy transfer seems to involve quantum exciton delocalization assisted by quantum phonon states and environmental noise. It is believed that coherent interference may guide the excitations to the reaction centers. This paper makes a strong case that photosynthesis uses quantum processes – something classicists have shown surprisingly strong resistance to.

Enzyme Catalysis: Enzymes catalyze reactions, speeding up reaction rates by enormous amounts. Classical factors can explain only a small fraction of this; quantum tunneling of hydrogen seems to play an important role. Enzymes are vibrating all the time, and it is unclear what coherence and superposition effects may also contribute to these reaction-rate speed-ups.

Avian Compass: Several bird species, including robins and pigeons, are believed to use the quantum radical-pair mechanism to sense the Earth’s magnetic field for migratory purposes (the avian compass). The radical pairs are produced in the protein cryptochrome, which resides in the bird’s eye.

Olfactory sense: Traditional theories of olfaction describe a “lock & key” method where molecules (the key) are detected if they fit into a specific geometric configuration (the lock). But we have only about 400 differently shaped smell receptors, yet recognize some 100,000 different smells. The human nose can, for example, distinguish ferrocene from nickelocene, two molecules with essentially the same geometry. It has been proposed that the olfactory sense uses quantum electron tunneling to detect the vibrational spectra of molecules.

Vision receptors: One of the key molecules involved in animal vision is called retinal. The retinal molecule undergoes a conformational change upon absorption of a photon. This allows humans to detect even just a handful of photons. The retinal-containing protein rhodopsin, active in octopi in the dark ocean depths, may be able to detect single photons.

Consciousness: R. Penrose was the first to propose that quantum mechanics has a role in consciousness, in his book “The Emperor’s New Mind” (1989). Together with S. Hameroff, he developed a theory known as Orch-OR (orchestrated objective reduction) which has received much attention. While the theory remains highly controversial, it has been instrumental in jump-starting research into possible relationships between quantum mechanics and consciousness. The compelling notion behind this is quantum theory’s departure from determinism – determinism being the “annihilation operator” of free will – i.e. quantum probabilities could potentially allow free will to enter the picture. Generally, the thinking is that wave function collapse has something to do with conscious choice. Consciousness is a deeply fascinating subject unto itself and we will address it in a subsequent supposition.

Mutation: In 1953, shortly after discovering the structure of DNA, J. Watson and F. Crick proposed that mutation may occur through a process called tautomerization. The DNA sequence is composed of four bases: cytosine, adenine, guanine and thymine. Each base has a rare alternate form – a tautomer – that differs only in the location of a single hydrogen atom. Tautomerization is the process by which that hydrogen atom quantum tunnels to the alternate position, allowing guanine to mispair with thymine, and similarly adenine with cytosine, when DNA is copied. Only recently have quantum simulations become sophisticated enough to test this hypothesis. The paper “UV-Induced Proton Transfer between DNA Strands” by Y. Zhang et al. (2015) shows experimental evidence that ultraviolet (UV) photons can induce tautomerization. This is a very important mechanism we will return to later.

Even with the growth and success of quantum biology, and the advances in sustaining quantum entanglement (e.g. quantum states stored in ~10 billion ions for 39 minutes at room temperature – 2013), some scientists look at the warm, wet environment of living organisms and conclude there is no way “to keep decoherence at bay” in such an environment. Such arguments are formidable in the context of static quantum systems – like those used for developing present-day quantum computers. But biological systems tend to be dynamical, operating far from thermal equilibrium, with lots of noise and many accessible quantum rotational, vibrational, torsional and quasiparticle states. Moreover, we have discussed the importance of managing complexity in machine learning (chapters II and III); science has had a lot of success with classical molecular chemistry (balls and sticks); and classical calculations are much simpler than quantum calculations. Shouldn’t we cling to this simpler approach until it is utterly falsified? Maybe so, but while quantum mechanical calculations are certainly more computationally intensive, they may not be more complex as a theory. More importantly, classical science is simply struggling to correctly predict observed results all over biological systems. A thorough study of quantum biological processes is deservedly well underway.

In 2009 J. Cai, S. Popescu, and H. Briegel published a paper entitled “Dynamic entanglement in oscillating molecules and potential biological implications” (follow-up enhancements in 2012 are here) which showed that entanglement can continually recur in biological molecules in a hot, noisy environment in which no static entanglement can survive. Conformational change is ubiquitous in biological systems – it is the shape-changing that many proteins rely on to function. Conformational change induced by noisy, thermal energy in the environment repetitively pushes two sites of the bio-molecule together, entangling them. When the two sites come together, they “measure” each other: their spins must either line up or be opposite. The system will sit in a superposition of both, with each spin dependent upon the other, i.e. entangled, during at least a portion of the oscillation cycle. If the conformational recurrence time is less than the decoherence time, entanglement may be preserved indefinitely. Entanglement can be continually restored even in the presence of very intense noise; even when all entanglement is temporarily lost, it is restored cyclically. We wonder: if there were not only two sites but a string of sites, could a wave of entanglement, followed by a wave of decoherence, spread via this method throughout the system? In such a circumstance, perhaps an “envelope” of entanglement might cascade through the system (as we discussed in chapter VI). Such a question could be addressed in the context of quantum dynamical models, as in the solution to the quantum measurement problem.

Figure 19: “Conformational changes of a bio-molecule, induced, for example, by the interaction with some other chemical, can lead to a time-dependent interaction between different sites (blue) of the molecule.” – from “Dynamic entanglement in oscillating molecules and potential biological implications” by J. Cai, S. Popescu, and H. Briegel (2009)

# IX. Quasicrystals & Phasons – Shadows of Life?

“A small molecule might be called ‘the germ of a solid’. Starting from such a small solid germ, there seem to be two different ways of building up larger and larger associations. One is the comparatively dull way of repeating the same structure in three directions again and again. That is the way followed in a growing crystal. Once the periodicity is established, there is no definite limit to the size of the aggregate. The other way is that of building up a more and more extended aggregate without the dull device of repetition. That is the case of the more and more complicated organic molecule in which every atom, and every group of atoms, plays an individual role, not entirely equivalent to that of many others (as is the case in a periodic structure). We might quite properly call that an aperiodic crystal or solid and express our hypothesis by saying: ‘We believe a gene – or perhaps the whole chromosome fibre’ – to be an aperiodic solid.” – Erwin Schrödinger, What is Life? (1944) chapter entitled ‘The Aperiodic Solid’

Crystals are structures that derive their unique properties (optical transparency, strength, etc.) from the tightly packed, symmetric arrangement of the atoms that comprise them – like quartz, ice, or diamonds. There are only so many ways atoms can be packed together in a periodic pattern to form a two-dimensional crystal: rectangles and parallelograms (i.e. 2-fold symmetry), triangles (3-fold), squares (4-fold), or hexagons like snowflakes or honeycombs (6-fold). These shapes can be connected tightly to one another leaving no gaps in between. Moreover, there is no limit on how extensive crystals can be, since attaching more atoms is just a matter of repeating the pattern. Mathematically, we can tessellate an infinite plane with these shapes. Other shapes, like pentagons, don’t work – there are always gaps. In fact, mathematicians have proven no other symmetries are allowed in periodic crystals! These symmetries were “forbidden” in nature and crystallographers never expected to see them. But, in 1982, Dan Shechtman did! While studying the structure of a lab-created alloy of aluminum and manganese ($Al_6Mn$) using an electron microscope, he saw a 5-fold symmetric diffraction pattern (Bragg diffraction) [see Figure 20]. Most crystallographers were skeptical. Shechtman spent two years scrutinizing his work, and, after ruling out all other possible explanations, published his findings in 1984. It turns out what he discovered was a quasicrystal. In 2011 he was awarded the Nobel Prize in Chemistry for his discovery.
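The “forbidden symmetries” claim for regular tilings comes down to simple arithmetic: a regular $n$-gon has interior angle $180(n-2)/n$ degrees, so $k$ of them can meet gaplessly at a vertex only if $k \cdot 180(n-2)/n = 360$, i.e. only if $k = 2n/(n-2)$ is a whole number. A quick sketch (this vertex-counting argument is standard geometry, paraphrased here, not taken from the essay):

```python
from fractions import Fraction

# How many regular n-gons can meet at a vertex with no gap?
# Interior angle of a regular n-gon is 180*(n-2)/n degrees, so we need
# k * angle = 360, i.e. k = 2n/(n-2) must be a whole number.
for n in range(3, 9):
    k = Fraction(2 * n, n - 2)
    fits = (k.denominator == 1)
    verdict = 'tiles the plane' if fits else 'leaves gaps'
    print(f'{n}-gon: k = {k} -> {verdict}')
```

Only $n = 3, 4, 6$ give whole numbers; pentagons ($k = 10/3$) and everything beyond hexagons leave gaps, just as the text describes.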

Figure 20: Electron diffraction pattern of an icosahedral Zn-Mg-Ho quasicrystal by Materialscientist (Own work) [CC BY-SA 3.0 or GFDL], via Wikimedia Commons

Quasicrystals were not supposed to exist in nature because they were thought to require long-range forces to develop. The forces that were thought to guide atomic assembly of crystals, electromagnetic Coulomb forces, are dominated by local (nearest neighbor) interactions. Still, today, we can make dozens of different quasicrystals in the lab, and, they have been found a handful of times in nature. Physicists have postulated that the non-local effects of quantum mechanics are involved and this is what enables quasicrystals to exist.

Figure 21: Examples of 5-fold symmetry that may be indicative of biological quasicrystals. (First) flower depicting 5-fold symmetry from “Lotsa Splainin’ 2 Do”, (second) plant with 5-fold symmetric spiral from www.digitalsynopsis.com, (third) starfish from www.quora.com, (last) Leonardo da Vinci’s “The Vitruvian Man” (1485) via Wikipedia

There is evidence of quasicrystals in biological systems as well: protein locations in the bovine papilloma virus appear to show dodecahedral symmetry [see figure 22], the Boerdijk-Coxeter helix (which forms the core of collagen) packs extremely densely and is proposed to have a quasicrystalline structure, the pentameric symmetry of neurotransmitter receptors may be indicative of quasicrystals, and general five-fold symmetries in nature [see figure 21] may also hint at their presence. Also, the golden ratio, which appears frequently in biological systems, is implicit in quasicrystal geometry.

Figure 22: Protein locations in a capsid of bovine papilloma virus. (a) Experimental protein density map. (b) Superimposition of the protein density map with a dodecahedral tessellation of the sphere. (c) The idealized quasilattice of protein density maxima. Konevtsova, O.V., Rochal, S.B., Lorman, V.L., “Quasicrystalline Order and Dodecahedron Geometry in Exceptional Family of Viruses“, Phys. Rev. Lett., Jan. 2012. Hat tip to Prescribed Evolution.

Aperiodic tilings give a mathematical description of quasicrystals. We can trace the history of such tilings back to Johannes Kepler in the 1600s. The most well-known examples are Penrose tilings [see figure 23], discovered by Roger Penrose in 1974. Penrose worked out that a 2-D infinite plane could, indeed, be perfectly tessellated in a non-periodic way – first using six different shapes, and later with only two. Even knowing which two shapes to use, it is not easy to construct a tiling that covers the entire plane (a perfect Penrose tiling). More likely, an arrangement will be chosen that leads to an incomplete tiling with gaps [see figure 23]. For example, in certain two-tile systems, only 7 of the 54 possible combinations at each vertex can lead to a successful quasicrystal. Choosing randomly, the chance of successfully building a quasicrystal quickly goes to zero as the number of vertices grows. Still, it has been shown that in certain cases it is possible to construct Penrose tilings with only local rules (e.g. see “Growing Perfect Quasicrystals“, Onoda et al., 1988). However, this is not possible in all cases, e.g. quasicrystals that implement a one-dimensional Fibonacci sequence.
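Just how quickly random growth fails is easy to quantify. Taking the 7-of-54 figure quoted above at face value (and making the simplifying assumption that each vertex choice is independent and uniformly random), the odds of a perfect tiling shrink geometrically:

```python
# The text quotes that only 7 of the 54 possible vertex configurations in a
# certain two-tile system can extend to a perfect quasicrystal. Assuming each
# vertex is chosen independently at random, the odds of n good choices in a
# row are (7/54)**n.
p_legal = 7 / 54
for n_vertices in (1, 5, 25, 100):
    print(n_vertices, p_legal ** n_vertices)
```

By 25 vertices the probability is already below $10^{-20}$; blind local trial-and-error cannot plausibly be how nature grows these structures.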

Figure 23: (Left) A failed Penrose tiling. (Right) A successful Penrose tiling. Both are from Paul Steinhardt’s Introduction to Quasicrystals here.

Phasons are a kind of dynamic structural macro-rearrangement of particles. Like phonons, they are quasiparticles. Several particles in the quasicrystal can simultaneously restructure themselves to phase out of one arrangement and into another [see Figure 24-right]. A paper from 2009 entitled “A phason disordered two-dimensional quantum antiferromagnet” studied a theoretical quasicrystal of ultracold atomic gases in optical lattices after it had undergone phason distortions. The authors show that delocalized quantum effects grow stronger with the level of disorder in the quasicrystal. One can see how phason-flips disorder the perfect quasicrystalline pattern [see Figure 24-left].

Figure 24: (Left) The difference between an ordered and disordered quasicrystal after several phason-flips, from “A phason disordered two-dimensional quantum antiferromagnet” by A. Szallas and A. Jagannathan. (Right) HBS tilings of d-AlCoNi (a) boat upright (b) boat flipped. Atomic positions are indicated as Al = white, Co = blue, Ni = black. Large/small circles indicate vertical position. Tile edge length is 6.5 Å. Caption and image from “Discussion of phasons in quasicrystals and their dynamics” by M. Widom.

Figure 25: Physical examples of quasicrystals created in the lab. Both are from Paul Steinhardt’s “Introduction to Quasicrystals“.

In 2015 K. Edagawa et al. captured video via electron microscopy of a growing quasicrystal, $Al_{70.8}Ni_{19.7}Co_{9.5}$. They published their observations here: “Experimental Observation of Quasicrystal Growth“. The write-up “Viewpoint: Watching Quasicrystals Grow” by J. Jaszczak provides an excellent summary of Edagawa’s findings, and we will follow it here: certain quasicrystals, like this one, produce one-dimensional Fibonacci chains. A Fibonacci chain can be generated by starting with the sequence “WN” (W for wide, N for narrow, referring to layers of the quasicrystal) and then applying the following substitution rules: replace “W” with “WN” and replace “N” with “W”. Applying the substitutions once transforms “WN” into “WNW”. Subsequent applications expand the Fibonacci sequence: “WNWWN”, “WNWWNWNW”, “WNWWNWNWWNWWN”, and so on. The continued expansion of the sequence cannot be done without knowledge of the whole one-dimensional chain. It turns out that when new layers of atoms are added to the quasicrystal, they are usually added incorrectly, leaving numerous gaps [see Figure 26]. This creates “phason-strain” in the quasicrystal. There may, in fact, be several erroneous layers added before the atoms undergo a “phason-flip” into a correct arrangement with no gaps.
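The substitution rules above are simple enough to run directly. A few lines of Python reproduce the chains quoted in the text (and show that the chain lengths are Fibonacci numbers: 2, 3, 5, 8, 13, …):

```python
# Fibonacci chain by substitution: replace W -> WN and N -> W on each pass
def substitute(chain):
    return ''.join('WN' if layer == 'W' else 'W' for layer in chain)

chain = 'WN'
for _ in range(4):
    chain = substitute(chain)
    print(chain)
# WNW
# WNWWN
# WNWWNWNW
# WNWWNWNWWNWWN
```

Note that generating the next layer this way requires reading the entire current chain; no fixed-size local window suffices, which is exactly the point made about local growth rules failing for Fibonacci quasicrystals.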

Figure 26:
Portion of an ideal Penrose tiling illustrating part of a Fibonacci sequence of wide (W) and narrow (N) rows of tiles (green). The W and N layers are separated by rows of other tiles (light blue) that have edges perpendicular to the average orientation of the tiling’s growth front. The N layers have pinch points (red dots) where the separator layers touch, whereas the W layers keep the divider layers fully separated. An ideal tiling would require the next layer to be W as the growth front advances. However, Edagawa and colleagues observed a system in which the newly grown layer would sometimes start as an N layer, until a temperature-dependent time later upon which it would transition through tile flipping to become a W layer. (graph and caption are from Jaszczak, J.A. APS Physics)

How does nature do this? Non-local quantum mechanical effects may be the answer. Is the quasicrystal momentarily entangled together, so that not only can it be determined what sort of layer, N or W, goes next, but also so that the actions of several atoms may be coordinated in a single phason-flip?

One cannot help but wonder: does quantum mechanics understand the Fibonacci sequence? In other words, has it figured out that it could start with “WN” and then follow the two simple substitution rules outlined above? This would represent a rather simple description (a minimum description length, MDL) of the quasicrystal. And, if so, where does this understanding reside, i.e. where is the quasicrystal’s DNA? Suffice it to say, it has, at the very least, figured out something equivalent. In other words, whether it has understood the Fibonacci sequence or not, whether it has understood the substitution rules or not, it has developed the equivalent of an understanding, as it can extend the sequence! So, even if quantum mechanics does not keep some sort of log, or blueprint, of how to construct the Fibonacci quasicrystal, it certainly has the information to do so!

# X. Holography & The Ultimate Quantum Network – A Living Organism

DNA is a remarkable molecule. Not just because it contains the whole genetic blueprint of the organism distilled in such a simple manner, but also because it can vibrate, rotate, and excite in so many ways. DNA is not natively static – it vibrates on superfast timescales (nanoseconds down to femtoseconds)! Where does all this vibrational energy come from? One would think this energy would dissipate into the surrounding environment. Also puzzling: why is there a full copy of DNA in every single cell? Isn’t that overkill? The paper “Is it possible to predict electromagnetic resonances in proteins, DNA and RNA?” by I. Cosic, D. Cosic, and K. Lazar (2016) shows the incredible range of resonant frequencies in DNA. Not only that, the authors also show that there is substantial overlap with other biomolecules like proteins and RNA. Perhaps DNA has some deeper purpose. Is it possible DNA is some sort of quantum repeater (chapter VII)? To act as one, DNA would need to provide a source of entangled particles (like the EPR photon source in a laboratory quantum network).

The paper “Quantum entanglement between the electron clouds of nucleic acids in DNA” (2010) by E. Rieper, J. Anders, and V. Vedral has shown that entanglement between the electron clouds of neighboring nucleotides plays a critical role in holding DNA together. The electron clouds oscillate, like springs, between the nucleotides, and occupy a superposition of states: balancing each other out laterally, and synchronizing oscillations (harmonics) along the chain. The former prevents lateral strain on the molecule, and the latter is more rhythmically stable. Both kinds of superposition exist because they stabilize and lower the overall energy configuration of the molecule! The entangled state is the ground state at biological temperatures, so the molecule will remain entangled even in thermal equilibrium. Furthermore, because the electron clouds act like spacers between the planar nucleotides, they are coupled to the nucleotides’ vibrations (phonons). If the electron clouds are in a superposition of states, then the phonons will be also.

Figure 27: The structure of the DNA double helix. The atoms in the structure are colour-coded by element and the detailed structure of two base pairs (nucleotides) are shown in the bottom right. The nucleotides are planar molecules primarily aligned perpendicular to the direction of the helix. From Wikipedia.

So, DNA’s electron clouds could provide the entanglement, but where does the energy come from? It could, for instance, come from the absorption of ultraviolet light (UV radiation). While we’re all mindful of the harmful aspect of UV radiation, DNA is actually able to dissipate this energy superfast and super efficiently 99.9% of the time. When DNA does absorb UV radiation, the absorption has been shown to be spread out non-locally along the nucleotide chain and follows a process known as internal conversion where it is thought to be thermalized (i.e. turned into heat). Could UV photons be down-converted and then radiated as photons at THz frequencies instead? One UV photon has the energy to make a thousand THz photons, for instance. We have seen such highly efficient and coherent quantum conversions of energy before in photosynthesis (chapter VIII). Could this be a way of connecting the quantum network via the overlapping resonant frequencies to neighboring DNA, RNA, and proteins? The photons would need to be coherent to entangle the network. Also, we can’t always count on UV radiation, e.g. at night or indoors. If this is to work, there must be another source of energy driving the vibrations of DNA also.
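The “one UV photon could make a thousand THz photons” claim is a matter of photon-energy arithmetic, since $E = hf$. A quick sketch (the 260 nm wavelength is an assumption drawn from DNA’s well-known UV absorption peak, not a number given in the text):

```python
# A photon's energy is E = h*f, so the number of 1 THz photons one UV photon
# could in principle fund is just the ratio of the photon energies.
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s

f_uv = c / 260e-9      # UV frequency at DNA's ~260 nm absorption peak, ~1.15e15 Hz
f_thz = 1.0e12         # a 1 THz photon

e_uv = h * f_uv        # energy of one UV photon
e_thz = h * f_thz      # energy of one THz photon
ratio = e_uv / e_thz
print(round(ratio))    # ~1150 THz photons per UV photon
```

So the bookkeeping works out to roughly a thousand THz photons per absorbed UV photon, as the text states; whether DNA actually performs such a coherent down-conversion is, of course, the open question.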

A paper published in 2013 by A. Bolan et al. showed experimental evidence that THz radiation affected the expression of genes in the stem cells of mice, suggesting that the THz spectrum is particularly important for gene expression. Phonon modes have been observed in DNA for some time, but not under physiological conditions (e.g. in the presence of water) until now. The paper entitled “Observation of coherent delocalized phonon-like modes in DNA under physiological conditions” (2016) by M. González-Jiménez et al. gives experimental evidence of coherent quantum phonon states even in the presence of water. These phonons span the length of the DNA sequence, expand and contract the distance between nucleotides, and are thought to play a role in breaking the hydrogen bonds that connect the two DNA strands. They are in the THz regime and allow the strands to open, forming a transcription bubble which enables access to the nucleotide sequence for replication. This is sometimes referred to as “DNA breathing“. Hence, it is plausible these phonon modes can control gene expression, and possibly exist in a complex superposition with the other states of the DNA molecule. They are also coherent, which is critical for extending the quantum network. But is there any evidence proteins could be entangled too?

In 2015 I. Lundholm et al. published the paper “Terahertz radiation induces non-thermal structural changes associated with Fröhlich condensation in a protein crystal”, showing that they could create something called a Fröhlich condensate by exposing a collection of protein molecules to a THz laser. Herbert Fröhlich proposed the idea back in 1968 and it has been the subject of much debate ever since. Now, finally, we have direct evidence these states can be induced in biological systems. These condensates are special because they involve a macroscopic collection of molecules condensing into a single non-local quantum state that only exists under the right conditions. There are many ways a Fröhlich condensate can form, but in this case it involves compression of the helical structure of the proteins. Upon compression, the electrons of millions of proteins in crystalline form align and form a collective vibrational state, oscillating together coherently. This conformational change in the protein is critical to controlling its functioning – something generally true of proteins, e.g. as in enzyme catalysis and protein-protein interactions (hat tip here for the examples). In the laboratory, the condensate state will last micro- to milliseconds after exposure to the THz radiation – a long time on biomolecular timescales. Of course, that is upon exposure to a THz laser. Could DNA’s THz photon emissions perform the same feat, carrying the coherent information on from DNA and entangling proteins in the quantum network as well? Could a whole quantum network involving DNA, RNA, and a vast slew of proteins throughout the organism be entangled together via continuous coherent interaction with the EM field (at THz and other frequencies)? If so, it would give the organism an identity as “One” thing, and it would connect the proteins, which are interacting with the environment, with the DNA that encodes them.
This would open a possible connection between the tautomerization mutation mechanism (chapter VIII) and environmental stress! In other words, a method by which mutations are adaptive would be feasible, and not just that, but a method which could use quantum computational power to determine how to adapt!

But then there is the question of energy. Where does the continual energy supply come from to support this network, and can it be supplied without disrupting coherence? In the paper “Fröhlich Systems in Cellular Physiology” by F. Šrobár (2012), the author describes the details of a pumping source providing energy to the Fröhlich condensate via ATP- or GTP-producing mitochondria. Could the organism’s own metabolism be the sustaining energy source behind the organism’s coherent quantum network?

In the presence of so much coherence, is it possible that dynamical interference patterns in the EM field could be directed very precisely by the organism – very much like a hologram? Not a visual hologram, but rather images in the EM field relevant to controlling biomolecular processes (e.g. in the kHz, MHz, GHz, and THz domains). A hologram is a 3-D image captured on a 2-D surface using a laser. The holographic plate is special in that it records not only brightness and color, but also the phase of incident coherent light. When the same frequency of coherent light is shone upon it, it reproduces the 3-D image through interference. The surface does not need to be a 2-D sheet, however. Coherently vibrating systems of molecules throughout the organism could create the interference. Not only that, but if the biological quantum network is in a superposition of many states at once, could it conceivably create a superposition of multiple interference patterns in the 3-D EM field at many different frequencies simultaneously (e.g. 20 MHz, 100 GHz, 1 THz, etc.)? With these interference effects, perhaps the organism directly controls, for instance, microtubule growth in specific regions, as shown in the paper “Live visualizations of single isolated tubulin protein self-assembly via tunneling current: effect of electromagnetic pumping during spontaneous growth of microtubule” (2014) by S. Sahu, S. Ghosh, D. Fujita, and A. Bandyopadhyay. The paper shows that when the EM field is turned on at a frequency that coincides with a mechanical vibrational frequency of the tubulin protein molecule, the microtubules may be induced to grow, or to stop growing when the EM field is turned off. Microtubules are structural proteins that help form the cytoskeleton of all cells throughout the organism.
Perhaps, more generally, organisms use hologram-like interference effects to induce or halt growth, induce conformational changes (with the right frequency), manipulate Fröhlich effects, and generally control protein function throughout themselves? Indeed, it may not only be a case of “DNA directing its own transcription,” as many biologists believe, but of the organism as One whole directing many aspects of its own development.
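The core optical fact the hologram analogy leans on – that an interference pattern records phase information which a plain intensity image lacks – can be checked numerically. Below is a minimal 1-D sketch (an illustrative toy, not a model of any biological field): superposing a plane reference wave with a point-source object wave produces fringes along the recording line, while the object wave's intensity alone is a single featureless bump.

```python
import numpy as np

# Toy 1-D "hologram" (an illustrative assumption, not a biological model):
# the combined intensity of a reference wave and an object wave shows
# fringes that encode the object's phase, while the object's intensity
# alone is a single smooth bump carrying no phase information.
wavelength = 1.0
k = 2 * np.pi / wavelength
x = np.linspace(-10, 10, 2001)                 # points along the recording line

reference = np.exp(1j * k * x * np.sin(0.1))   # tilted plane reference wave
r = np.sqrt(x**2 + 5.0**2)                     # distance to a point source at depth 5
obj = np.exp(1j * k * r) / r                   # point-source (spherical) object wave

I_obj = np.abs(obj) ** 2                  # no fringes: phase is lost
I_holo = np.abs(reference + obj) ** 2     # fringes: phase is recorded

def local_maxima(a):
    """Count strict interior local maxima of a sampled curve."""
    return int(np.sum((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])))

print(local_maxima(I_holo), local_maxima(I_obj))  # many fringes vs. one smooth peak
```

Shining the reference wave back through the recorded fringe pattern is what reconstructs the object wave; the fringe count here is just the simplest numerical witness that the phase survived the recording.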

Figure 28: (Left) Two photographs of a single hologram taken from different viewpoints, via Wikipedia. (Right) Rainbow hologram showing the change in colour in the vertical direction via Wikipedia.

This process would be more analogous to the growth of a quasicrystal (chapter IX) than to a bunch of individual molecules each trying to find its way. In the process of growth, mistakes happen along the way, such as misfolded proteins. Because quantum mechanics is probabilistic, some mistakes are inevitable. They become like the phason-strain in a quasicrystal – the quantum network corrects the arrangement through non-local phason-shifts, directed holographically. Rearrangement is not like reallocating balls and sticks as in classical molecular chemistry, but more like phasing out of one configuration of quantum wave functions and into another. Perhaps the quantum computing power of vast superpositions acting through holographic interference effects, not unlike Shor’s algorithm (chapter V), is the key to solving the highly non-linear, probably NP-hard problems of organic growth.
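The Fibonacci quasicrystal invoked here (and in chapter IX) has a simple generative core that can be sketched in a few lines: the substitution rule A → AB, B → A produces a sequence that never repeats yet is rigidly ordered, with letter counts that are consecutive Fibonacci numbers. This toy makes no claim about biology; it only shows the kind of long-range aperiodic order the analogy appeals to.

```python
# Minimal sketch of the aperiodic order behind a Fibonacci quasicrystal
# (illustrative only): the substitution A -> AB, B -> A generates the
# Fibonacci word, whose length and letter counts are Fibonacci numbers
# and whose A:B ratio converges to the golden ratio.
def fibonacci_word(n):
    """Apply the substitution rule n times, starting from 'A'."""
    w = "A"
    for _ in range(n):
        # lowercase 'a' is a temporary marker so the two rules don't collide
        w = w.replace("A", "a").replace("B", "A").replace("a", "AB")
    return w

lengths = [len(fibonacci_word(n)) for n in range(1, 8)]
print(lengths)          # consecutive Fibonacci numbers: 2, 3, 5, 8, 13, 21, 34

w = fibonacci_word(12)
ratio = w.count("A") / w.count("B")
print(round(ratio, 4))  # close to the golden ratio, ~1.618
```

Every finite layer of the word is forced by the rule, which is the sense in which later layers are "implicit" in earlier ones – the property the essay's growth analogy relies on.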

Construction of the eye, a process requiring global spatial information and coordination, could be envisioned holographically by the quantum organism in the same way that quantum mechanics “understood” the Fibonacci sequence. Imagine the holographic image of the Death Star in “Star Wars” acting as a 3-D blueprint guiding its own assembly (as opposed to destroying it!). The hologram of the eye, originating from the quantum network of the organism, is a guiding pattern – a pattern resulting from coherent interfering amplitudes – directing its own construction. It is the same concept as quantum mechanics projecting forward the Fibonacci sequence and then building it in a quasicrystal, just scaled up many-fold in complexity. Growth of the eye could be the result of deliberate control of the organism’s coherent EM field, focused through the holographic lens of DNA and the entangled biomolecules of the organism’s quantum network.

Figure 29: (Left) Diagram of the human eye via Wikipedia. (Right) Close-up photograph of the human eye by Twisted Sifter.

The growth of the organism could quite possibly be related to our own experience of feeling, through intuition, that the solution to a problem is out there. Maybe we haven’t put all the parts together yet, haven’t found a tangible approach, and may not know all the details, but there is a guiding intuition there. We feel it. Perhaps that is the feeling of creativity – the feeling of quantum interference, the feeling of holographic effects. The building of an organism is like layers of a quasicrystal phasing together, capturing abstract complex relationships and dependencies, to make a successful quasicrystal. Each layer is a milestone on the way to that distant clever solution – a fully functional organism! Maybe humans do not have a monopoly on creative intelligence; maybe it is a power central to the Universe! Life moved it beyond quasicrystalline structures, and highly advanced organisms moved it beyond the space of biomolecules, but the raw creative power could be intrinsic. Moreover, all life would be the very special result of immense problem solving, creativity, and quantum computational power! That certainly feels good, doesn’t it?

# XI. Quantum Mechanics and Evolution

“We are the cosmos made conscious and life is the means by which the universe understands itself.” – Brian Cox (~2011) Television show: “Wonders of the Universe – Messengers”

Attempts to describe evolution in quantum mechanical terms run into difficulties because quantum mechanics does not care about ‘fitness’ or ‘survival’ – it only cares about energy states. Some states are higher energy, some are lower; some are more or less stable. As with the solution of the quantum measurement problem (chapter VI), we may not need anything outside our present understanding of quantum mechanics to understand evolution. The key is recognizing that quantum entanglement itself factors into the energy of biological quantum states. Just as quantum entanglement in the electron clouds of DNA allows the electrons to pack more densely in their orbits in a cooperative quantum superposition, thereby achieving a more stable energy configuration, we expect entanglement throughout the organism to lead to lower, more stable energy states. Coherence spans the whole system: DNA oscillating coherently together, coherent with RNA, coherent with protein vibrations, in sync with the EM field – all coherent and entangled together. All that entanglement affects the energy of the system and allows a more stable energy state for the whole organism. Moreover, it incentivizes life to evolve toward organisms of increasing quantum entanglement, because that is a more stable energy state. Increasing entanglement means increasing quantum computational horsepower, which in turn means more ability to find even more stable energy states in the vast space of potential biological organisms. This, rather than natural selection, may be the key reason for the bias in evolution toward more complex creatures. Natural selection may be the side show. Very important, yes, absolutely a part of the evolutionary landscape, yes, but not the main theme. That is much deeper!
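The narrow claim that entanglement can buy a lower energy state is, at least for a toy system, a standard quantum-mechanical fact that can be checked directly. The sketch below uses two spins with an antiferromagnetic Heisenberg coupling (an illustration only, not a model of an organism): no unentangled product state can reach the energy of the entangled singlet ground state.

```python
import numpy as np

# Two-spin toy check (an illustration, not a model of biology): for the
# antiferromagnetic Heisenberg coupling H = sx(x)sx + sy(x)sy + sz(x)sz,
# every unentangled product state has energy >= -1, while the entangled
# singlet ground state reaches -3: entanglement buys a strictly lower energy.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = sum(np.kron(s, s) for s in (sx, sy, sz))
ground_energy = np.linalg.eigvalsh(H).min()      # singlet energy, exactly -3

def product_energy(a, b):
    """Energy expectation of the unentangled state |a> tensor |b>."""
    psi = np.kron(a, b)
    return float(np.real(psi.conj() @ H @ psi))

# Randomly sample many product states; none can beat the singlet.
rng = np.random.default_rng(0)
best_product = 0.0
for _ in range(2000):
    a = rng.normal(size=2) + 1j * rng.normal(size=2)
    b = rng.normal(size=2) + 1j * rng.normal(size=2)
    best_product = min(best_product,
                       product_energy(a / np.linalg.norm(a), b / np.linalg.norm(b)))

print(ground_energy)   # -3.0 (entangled ground state)
print(best_product)    # approaches -1.0 but never goes below it
```

Whether anything like this two-spin energetics scales to warm, wet biomolecules is exactly the open question of the chapter; the toy only confirms the underlying principle is sound quantum mechanics.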

Recall our example of fullerene (a.k.a. buckyballs) fired through a two-slit interferometer. When this experiment is performed in a vacuum, a clear interference pattern emerges. As we allow gas particulates into the vacuum, the interference fringes grow fuzzier and eventually disappear (hat tip to “Quantum physics meets biology” for the example). The gas molecules disrupt the interference pattern. They are like the stresses in the environment – heat stress, oxidative stress, lack of food, whatever. They all muddle the interference pattern. There is no interferometer per se in a living organism, but there are holographic effects throughout the organism, and every entangled part of the organism can feel it (this feeling can be quantified mathematically as the entropy of entanglement, or detected through something called an entanglement witness). The stresses erode the coherence of the organism and induce instability in the energy state. The organism will probabilistically adapt by undergoing a quantum transition to a more stable energy state – clarifying the interference pattern, clarifying the organism’s internal holography – all within the mathematical framework of dynamical quantum mechanics. This could mean an epigenetic change, a simple change to the genetic nucleotide sequence, or a complex rearrangement. The whole of DNA (and the epigenetic feedback system) is entangled together, so these complex transitions are possible, made so by quantum computational power.
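The entropy of entanglement mentioned above is a concrete, computable quantity. Here is a minimal sketch for a two-qubit toy state (a textbook calculation, not a biomolecular model): it is zero for an unentangled product state and one full bit for a maximally entangled Bell pair.

```python
import numpy as np

# Entropy of entanglement for a two-qubit pure state (standard textbook
# quantity, shown here as an illustration only): trace out one qubit and
# take the von Neumann entropy of what remains.
def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of one qubit of a two-qubit pure state."""
    m = psi.reshape(2, 2)          # split amplitudes into subsystem A x B
    rho_a = m @ m.conj().T         # reduced density matrix of qubit A
    p = np.linalg.eigvalsh(rho_a)
    p = p[p > 1e-12]               # drop numerical zeros before taking logs
    return float(-(p * np.log2(p)).sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
product = np.array([1, 0, 0, 0])              # |00>, unentangled

print(entanglement_entropy(bell))     # 1.0 bit: maximally entangled
print(entanglement_entropy(product))  # 0.0: no entanglement
```

Decoherence from an environment (the gas molecules in the fullerene example) shows up as exactly this kind of entanglement between system and surroundings, which is why the fringes wash out.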

In J. McFadden’s book “Quantum Evolution” (2000) he describes one of the preeminent challenges of molecular evolutionary biology: to explain the evolution of adenosine monophosphate (AMP). AMP is a nucleotide in RNA and a cousin of the better-known ATP (adenosine triphosphate) energy molecule. Its creation involves a sequence of thirteen steps using twelve different enzymes, none of which has any use other than making AMP, and each of which is absolutely essential to AMP creation (see here for a detailed description). If a single enzyme is missing, no AMP is made. Furthermore, there is no evidence of simpler systems in any biological species. No process of natural selection could seemingly account for this, since there is no advantage to having any one of the enzymes, much less all twelve. In other words, it would seem that, somehow, evolution had this hugely important AMP molecule in mind and evolved the enzymes to make it. Such an evolutionary leap has no explanation in the classical picture, but we can make sense of it in the same way that quantum mechanics envisioned completion of the Fibonacci quasicrystal. The twelve enzymes represent quasicrystal layers along the way that must be completed as intermediate steps. In holographic terms, organisms, prior to having AMP, saw via far-reaching path integrals a distant holographic plan of the molecule comprised of many frequencies of EM interference: a faint glow corresponding to the stable energy configuration of the AMP molecule, a hologram formed from the intersection of the amplitudes of infinitely many path integrals at many relevant biological frequencies. A hint of a clever idea toward a more stable energy configuration. The enzymes needed for its development were holographic interference peaks along the way. Development of each enzyme occurred not by accident, but with the grand vision of the AMP molecule all along.
This is the same conceptual process that we as human beings execute all the time: having a distant vision of a solution to a problem, like Roger Penrose’s intuition of the Penrose tiles, Feynman’s intuition of the quantum computer, or Schrödinger’s vision of quantum genes. Intuition guides us. We know from learning theory (chapters II & III) that learning is mathematical in nature, whether executed by a machine, by the mind, or by DNA. The difference is the persistent quantum entanglement that is life, that is “Oneness”, and the holographic quantum computational power that goes with it.

Because the entire organism is connected as one vast quantum entangled network, mutation via UV-photon-induced tautomerization (chapter VIII) can be viewed as a quantum transition between the energy states of the unified organism. So, when the organism is faced with an environmental stress, it is in an unstable energy state. Just like a hydrogen atom absorbing an incident photon to excite it to the next energy level, the organism absorbs the UV photon (or photons) and phason-shifts the genetic code and the entire entangled organism. Tautomeric isomerization occurs. This is made possible in part by the marginal stability of proteins (chapter IV) – it takes very little energy to transition from one protein to another. In other words, a change to one or more nucleotides in the DNA sequence instantaneously and simultaneously shifts the nucleotide sequence in other DNA, RNA, and the amino acid sequences of proteins. Evolutionary adaptations of the organism are quantum transitions to more stable energy configurations.

In chapters II and III we talked about the importance of simplicity (MDL) in the genetic code – the importance of Occam’s razor. Simplicity is important for generalization, so that DNA can understand the process of building organisms in the simplest terms. Thereby it can generalize well; that is, when it attempts to adapt an organism to its environment, it has a sense of how to do it. The question then arises: how does this principle of Occam’s razor manifest itself in the context of quantum holograms? A lens, like that of the eye, is a very beautiful object with great symmetry, and must be perfectly convex to focus light properly. If we start making random changes to it, the image will no longer be in focus. The blueprint of the lens must be kept simple to ensure it is constructed and functions properly. Moreover, the muscles around the lens of the eye, which flex and relax to adjust its focal length, must do so in a precisely choreographed way. Random deformations of its shape will render the focused image blurry. The same concept applies to the genetic code. DNA serves as a holographic focal lens for many EM frequencies simultaneously. We cannot just randomly perturb its shape; that could damage it and leave the organism’s guiding hologram out of focus, unstable. Changes must be made very carefully to preserve order. This is a factor in the quantum calculus of mutation: it is not simply a local question of whether a UV photon interacts with a nucleotide and tautomerizes it. Rather, it must be non-local, involving the whole organism and connecting to the stress in the environment, while also keeping the DNA code very organized and simple. If a DNA mutation occurs that does not preserve a high state of order in the blueprint, i.e. does not preserve a short MDL, it could be disastrous for the organism.
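The MDL intuition here – an ordered blueprint admits a short description, while random corruption inflates it – can be illustrated crudely with an off-the-shelf compressor standing in for description length. This is a rough proxy only, not the formal MDL of chapters II and III, and the four-letter "blueprint" string is purely hypothetical.

```python
import random
import zlib

# Crude MDL illustration: zlib compressed size as a stand-in for
# description length (a rough proxy, not formal MDL). A highly ordered
# "blueprint" compresses to a short code; random bytes of the same
# length barely compress at all.
random.seed(0)
ordered = ("ACGT" * 250).encode()                              # 1000 bytes, highly regular
corrupted = bytes(random.randrange(256) for _ in range(1000))  # 1000 random bytes

len_ordered = len(zlib.compress(ordered, 9))
len_corrupted = len(zlib.compress(corrupted, 9))

print(len_ordered)      # small: the regularity is captured by a short description
print(len_corrupted)    # near (or above) 1000: randomness resists compression
```

In MDL terms, the regular string generalizes (a short rule predicts every byte), while the corrupted one can only be described by restating itself – the "out of focus blueprint" of the paragraph above.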

# XII. Experimental Results in Evolutionary Biology

So, how does all this contrast with biological studies of evolution? It turns out Lamarck was correct: there is growing evidence that mutations are indeed adaptive – mutation rates increase when organisms are exposed to stress (heat, oxidative, starvation, etc.), and organisms resist mutation when not stressed. This has now been studied in many forms of yeast, bacteria, and human cancer cells, across many types of stress and under many circumstances. Moreover, there are many kinds of mutations in the genetic code, ranging from small changes affecting a few nucleotides, to deletions and insertions, to gross genetic rearrangements. The paper “Mutation as a Stress Response and the Regulation of Evolvability” (2007) by R. Galhardo, P. Hastings, and S. Rosenberg sums it up:

“Our concept of a stable genome is evolving to one in which genomes are plastic and responsive to environmental changes. Growing evidence shows that a variety of environmental stresses induce genomic instability in bacteria, yeast, and human cancer cells, generating occasional fitter mutants and potentially accelerating adaptive evolution. The emerging molecular mechanisms of stress induced mutagenesis vary but share telling common components that underscore two common themes. The first is the regulation of mutagenesis in time by cellular stress responses, which promote random mutations specifically when cells are poorly adapted to their environments, i.e., when they are stressed. A second theme is the possible restriction of random mutagenesis in genomic space, achieved via coupling of mutation-generating machinery to local events such as DNA-break repair or transcription. Such localization may minimize accumulation of deleterious mutations in the genomes of rare fitter mutants, and promote local concerted evolution. Although mutagenesis induced by stresses other than direct damage to DNA was previously controversial, evidence for the existence of various stress-induced mutagenesis programs is now overwhelming and widespread. Such mechanisms probably fuel evolution of microbial pathogenesis and antibiotic-resistance, and tumor progression and chemotherapy resistance, all of which occur under stress, driven by mutations. The emerging commonalities in stress-induced-mutation mechanisms provide hope for new therapeutic interventions for all of these processes.”

……

“Stress-induced genomic instability has been studied in a variety of strains, organisms, stress conditions and circumstances, in various bacteria, yeast, and human cancer cells. Many kinds of genetic changes have been observed, including small (1 to few nucleotide) changes, deletions and insertions, gross chromosomal rearrangements and copy-number variations, and movement of mobile elements, all induced by stresses. Similarly, diversity is seen in the genetic and protein requirements, and other aspects of the molecular mechanisms of the stress-induced mutagenesis pathways.” – “Mutation as a Stress Response and the Regulation of Evolvability” (2007) by R. Galhardo, P. Hastings, and S. Rosenberg

What does the fossil record say about evolution? The fossil record paints a mixed picture of gradualism and saltation. Its main theme is stasis: fossils exhibit basically no evolutionary change for long periods of time, millions of years in some cases. There are clear instances where the geological record is well preserved and still we see stasis, e.g. the fossil record of Lake Turkana, Kenya. Sometimes there are gaps in the fossil record. Sometimes long periods of stasis are punctuated by abrupt episodes of change – an evolutionary theory known as punctuated equilibria. Other times, the fossil record clearly shows a continuous, gradual rate of evolution (e.g. the fossil record of marine plankton) – a contrasting evolutionary theory known as phyletic gradualism. The paper “Speciation and the Fossil Record” by M. Benton and P. Pearson (2001) provides an excellent summary. Neither theory, punctuated equilibria nor phyletic gradualism, seems to apply in every case.

If we allow ourselves to be open to the idea of quantum mechanics in evolution, it would seem Schrödinger was right. On the fossil record, quantum evolution is compatible with both the punctuated equilibria and the phyletic gradualism theories, since changes are induced by stress with quantum randomness. On the biological evidence for adaptive mutation, it would seem quantum evolution nails it. We have talked about the fundamental physical character of quantum mechanics and evolution. Three aspects emerge as central to the theme: quantum entanglement via a quantum network, generalization (or adaptation) through holographic quantum computing, and complexity management via the MDL principle in DNA. These three themes are all connected as a natural result of the dynamics of quantum mechanics. Sometimes, though, it can be useful to see things through a personal, 1st-person perspective. Perhaps entanglement is like “love”, connecting things to become One; generalization through holographic projection is like “creativity”; and MDL complexity is like “understanding”. Now suppose, if just for a moment, that these three traits – love, creativity, and understanding – which define the essence of the human experience, are not just three high-level traits selected for during “X billion years of evolution” but characterize life and the universe itself from its very beginnings.

“The Force is what gives a Jedi his power. It’s an energy field created by all living things. It surrounds us and penetrates us. It binds the galaxy together.” – Ben Obi-Wan Kenobi, Star Wars

The End