Gerard ’t Hooft’s thoughts on the quantum nature of the universe


Gerard ’t Hooft is one of my heroes in physics. My other heroes are Albert Einstein and Roger Penrose. Gerard ’t Hooft is known for his brilliance and clarity of thought; all his papers are examples of clear thinking. He is also known for his patience and kindness.

You should also know that Gerard ’t Hooft is a Nobel laureate. He shared the 1999 Nobel Prize in Physics with Martinus J.G. Veltman “for elucidating the quantum structure of electroweak interactions in physics.” You can find more information about his Nobel-winning contributions to physics at the Nobel Prize website.

Like Albert Einstein, Gerard ’t Hooft has long been uncomfortable with quantum mechanics, and over the past decade he has been working towards an alternative theory. He summarized this work in a 207-page paper titled “The Cellular Automaton Interpretation of Quantum Mechanics: A View on the Quantum Nature of our Universe, Compulsory or Impossible,” which was published in 2014, though I became aware of it only recently. I will refer to it as the “CAI paper” in the remainder of this post; “CAI” stands for the “Cellular Automaton Interpretation” of quantum mechanics.

The premise of the CAI is that at the tiniest scale the universe is deterministic: everything can, in principle, be explained by the initial conditions and a finite number of classical rules of behavior. He then tries to show that the quantum weirdness observed at larger scales – the scale at which we observe particles and fields – can be explained by information loss.
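To make the premise concrete, here is a minimal sketch of a deterministic, strictly local rule of the kind the CAI appeals to: a one-dimensional elementary cellular automaton. The specific rule number, lattice size, and initial condition are my own illustrative choices, not anything from ’t Hooft’s paper.

```python
# A 1D elementary cellular automaton: each cell's next value depends
# only on itself and its two nearest neighbours, via a fixed lookup
# rule. Given the rule and the initial condition, every later state
# is fixed -- the kind of determinism the CAI premise appeals to.

def step(cells, rule=110):
    """One update of an elementary CA (Wolfram rule numbering),
    with periodic boundary conditions."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the 3-cell neighbourhood as a number 0..7 and look
        # up the corresponding bit of the rule number.
        pattern = 4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]
        out.append((rule >> pattern) & 1)
    return out

state = [0] * 31
state[15] = 1  # a single 'on' cell as the initial condition
history = [state]
for _ in range(5):
    history.append(step(history[-1]))
```

The point of the toy: rerunning it from the same initial state reproduces the same history exactly, so any apparent randomness could only come from ignorance of the initial condition, never from the rule itself.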

We were all taught in school that at the tiniest scales of the universe there is quantum weirdness, and that the uncertainty in the behavior of particles and fields is intrinsic. The words “intrinsic” and “weirdness” imply that we cannot find classical (rational) rules to explain quantum behavior; we give up and say that is just the way it is. Gerard ’t Hooft is saying that it is too early to give up. In his view, the work done on the foundations of quantum mechanics in recent decades has been impressive – it has been clarified that deterministic “hidden variable” theories are possible if one gives up the locality principle – but one can still do better than the nonlocal hidden-variable theories.

Gerard ’t Hooft’s CAI approach is heretical, and in the CAI paper he proudly admits to being a heretic. Last week I wrote a post titled “Crazy-old-guy syndrome among theoretical physicists” in which I quoted John Preskill:

“I suppose most theoretical physicists who (like me) are comfortably past the age of 60 worry about their susceptibility to “crazy-old-guy syndrome.” (Sorry for the sexism, but all the victims of this malady I know are guys.) It can be sad when a formerly great scientist falls far out of the mainstream and seems to be spouting nonsense.”

“Crazy-old-guy syndrome” does not apply to Gerard ’t Hooft, who is one of the most respected physicists in the world. His clarity of thought and his skill in the mathematical exposition of his ideas are unmatched.

Early in my career as a physicist I, like Einstein, thought that quantum mechanics was an incomplete theory. Over the years I reluctantly accepted the quantum weirdness. But instead of saying “that is the way it is,” I came up with a metaphysical rationale for quantum weirdness, which I explained in the following posts:

Gerard ’t Hooft would have none of that metaphysical nonsense. On page 8 of the CAI paper he says:

“So why the present treatise? Almost every day, we receive mail from amateur physicists telling us why established science is all wrong, and what they think a ‘theory of everything’ should look like. Now it may seem that I am treading in their foot steps. Am I suggesting that nearly one hundred years of investigations of quantum mechanics have been wasted? Not at all. I insist that the last century of research lead to magnificent results, and that only one thing missing so-far was a more radical description of what has been found. Not the equations were wrong, not the technology, but only the wording of what is often referred to as ‘quantum logic’ should be replaced. It is my intention to remove every single bit of mysticism from quantum theory.”

Even though I do not agree with the premise of the CAI paper, out of my tremendous respect for Gerard ’t Hooft I wanted to bring it to your attention.

Other quotations from the CAI paper (Gerard ’t Hooft)

“What exactly happened to the superposition principle in the CA Interpretation of quantum mechanics? Critics of our work brought forward that the CAI disallows superposition, while obviously the superposition principle is serving quite well as a solid back bone of quantum mechanics. Numerous experiments confirm that if we have two different states, also any superposition of these states can be realised. Although the reader should have understood by now how to answer this question, let us attempt to clarify the situation once again.

At the most basic level of physical law, we assume only ontological states to occur, and any superposition of these, in principle, does not correspond to an ontological state. At best, a superposition can be used to describe probabilistic distributions of states (we call these ‘physical states’, since in physics, we often do not have the exact information at hand to determine with absolute certainty which ontological state we are looking at). In our description of the Standard Model, or any other known physical system such as atoms and molecules, we do not use ontological states but templates, which can be regarded as superpositions of ontological states. The hydrogen atom is a template, all elementary particles we know about are templates, and this means that the wave function of the universe, which is an ontological state, must be a superposition of our templates. Which superposition? Well, we will encounter many different superpositions when doing repeated experiments. This explains why we were led to believe that all superpositions are always allowed.

But not literally all superpositions can occur. Superpositions are man-made. Our templates are superpositions, but that is because they represent only the very tiny sector of Hilbert space that we understand today. The entire universe is in only one ontological state at the time, and it of course cannot go into superpositions of itself. This fact now becomes manifestly evident when we consider the ‘classical limit’. In the classical limit we again deal with certainties. Classical states are also ontological. When we do a measurement, by comparing the calculated ‘physical state’ with the ontological classical states that we expect in the end, we again recover the probabilities by taking the norm squared of the amplitudes. Classical states also never go into superpositions of classical states. Such superpositions never occur.

It appears that for many scientists this is difficult to accept. During a whole century we have been brainwashed with the notion that superpositions occur everywhere in quantum mechanics. At the same time we were told that if you try to superimpose classical states, you will get probabilistic distributions instead. It is here that our present theory is more accurate: if we knew the wave function of the universe exactly, we would find that it always evolves into one classical state only, without uncertainties and without superpositions. Of course this does not mean that standard quantum mechanics would be wrong.

Our knowledge of the template states, and how these evolve, is very accurate today. It is only because it is not yet known how to relate these template states to the ontological states that we have to perform superpositions all the time when we do quantum mechanical calculations. They do lead to statistical distributions in our final predictions, rather than certainties. This could only change if we would find the ontological states, but since even the vacuum state is expected to be a template, and as such a complicated superposition of uncountably many ontic states, we should expect quantum mechanics to stay with us forever but as a mathematical tool, not as a mystic departure from classical logic.”
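The norm-squared prescription in the passage above can be turned into a tiny numerical toy. This is my own construction; the three-state ontic basis and the amplitudes are invented purely for illustration:

```python
import math

# A hypothetical 'template' written as a superposition over three
# ontological basis states. In the CAI reading, the amplitudes
# encode our ignorance of which ontic state is actually realised,
# not a physical superposition in nature.
amplitudes = [complex(0.6, 0.0), complex(0.0, 0.8), complex(0.0, 0.0)]

# Norm squared of each amplitude gives the probability that the
# corresponding ontic state underlies the template.
probs = [abs(a) ** 2 for a in amplitudes]

assert math.isclose(sum(probs), 1.0)  # the template is normalised
# probs is approximately [0.36, 0.64, 0.0]: a probability
# distribution over ontic states, as the quoted passage describes.
```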


“What we did find, however, seems to be more than sufficient to extract a succinct interpretation of what quantum mechanics really is about. The technical details of the underlying theory do not make much difference here. All one needs to assume is that some ontological theory exists; it will be a theory that describes phenomena at a very tiny distance scale in terms of evolution laws that process bits and bytes of information. These evolution laws may be ‘as local as possible’, requiring only nearest neighbours to interact directly. The information is also strictly discrete, in the sense that every ‘Planckian’ volume of space may harbour only a few bits and bytes. We also suspect that the bits and bytes are processed as a function of local time, in the sense that only a finite amount of information processing can take place in a finite space-time 4-volume. On the other hand, one might suspect that some form of information loss takes place such that information may be regarded to occupy surface elements rather than volume elements, but this we could not elaborate very far.

In any case, this local theory of information being processed, does not require any Hilbert space or superposition principles to be properly formulated. The bits and bytes we discussed are classical bits and bytes; at the most basic level of physics (but only there) qubits do not play any role, in contrast with more standard approaches considered in today’s literature. Hilbert space only enters when we wish to apply powerful mathematical machinery to address the question how these evolution laws generate large scale behaviour, possibly collective behaviour, of the data.

Our theory for the interpretation of what we observe is now clear: humanity discovered that phenomena at the distance and energy scale of the Standard Model (which comprises distances vastly larger, and energies far smaller, than the Planck scale) can be captured by postulating the effectiveness of templates. Templates are elements of Hilbert space that form a basis that can be chosen in numbers of ways (particles, fields, entangled objects), which allow us to compute the collective behaviour of solutions to the evolution equations that do require the use of Hilbert space and linear operations in that space.

The original observables, the beables, can all be expressed as superpositions of our templates. Which superpositions one should use, differs from place to place. This is weird but not inconceivable. Apparently there exists a powerful scheme of symmetry transformations allowing us to use the same templates under many different circumstances. The rule for transforming beables to templates and back is complex and not unambiguous; exactly how the rules are to be formulated, for all objects we know about in the universe, is not known or understood, but must be left for further research.

Most importantly, the original ontological beables do not allow for any superposition, just like we cannot superimpose planets, but the templates, with which we compare the beables, are elements of Hilbert space and require the well-known principles of superposition. The second element in our CAI is that objects we normally call classical, such as planets and people, but also the dials and all other detectable signals coming from measurement devices, can be directly derived from beables, without use of the templates. Of course, if we want to know how our measurement devices work, we use our templates, and this is the origin of the usual ‘measurement problem’. What is often portrayed as mysteries in quantum theory: the measurement problem, the ‘collapse of the wave function’, and Schrodinger’s cat, is completely clarified in the CAI. All wave functions that will ever occur in our world, may seem to be superpositions of our templates, but they are completely peaked, ‘collapsed’, as soon as we use the beable basis. Since classical devices are also peaked in the beable basis, their wave functions are collapsed. No violation of Schrodinger’s equation is required for that, on the contrary, the templates, and indirectly, also the beables, exactly obey the Schrodinger equation.

In short, it is not nature’s degrees of freedom themselves that allow for superposition, it is the templates we normally use that are man-made superpositions of nature’s ontological states. It is due to our intuitive thinking that our templates represent reality in some way, that we hit upon the apparently inevitable paradoxes concerning superposition of the natural states. If, instead, we start from the ontological states that we may one day succeed to characterise, the so-called ‘quantum mysteries’ will disappear.”
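The claim that a wave function can look like a superposition of templates while being completely peaked in the beable basis is, at bottom, a statement about bases. A two-dimensional toy (entirely my own construction, with an arbitrarily chosen rotation angle) makes the point:

```python
import math

theta = math.pi / 6  # arbitrary angle relating the two bases

# Hypothetical 'template' basis vectors written in 'beable'
# coordinates: a rotation of the beable basis {(1,0), (0,1)}.
template_0 = (math.cos(theta), math.sin(theta))
template_1 = (-math.sin(theta), math.cos(theta))

# The state 'beable 0' = (1, 0), expanded in the template basis.
# Its amplitudes are the inner products with the template vectors.
a0 = template_0[0] * 1.0 + template_0[1] * 0.0
a1 = template_1[0] * 1.0 + template_1[1] * 0.0

# In the template basis the state looks like a genuine superposition...
assert abs(a0) > 0 and abs(a1) > 0
# ...yet in the beable basis the very same state is completely peaked:
# all of its weight sits on the single basis vector (1, 0).
assert math.isclose(a0 ** 2 + a1 ** 2, 1.0)  # total weight is preserved
```

Nothing about the state changed between the two descriptions; only the basis did, which is the sense in which the quoted passage calls superpositions man-made.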

Gerard ’t Hooft’s educational website:

Other lectures:

CV:

Web page at the University of Utrecht:

Lecture course: Lie Groups in Physics: This lecture course was originally set up by M. Veltman, and subsequently modified and extended by Bernard de Wit and G. ’t Hooft.



About Suresh Emre

I have worked as a physicist at the Fermi National Accelerator Laboratory and the Superconducting Super Collider Laboratory. I am a volunteer for the Renaissance Universal movement. My main goal is to inspire the reader to engage in Self-discovery and expansion of consciousness.