Quantum Reality (IV): Primitive Ontology

In this final post (for now) on my exploration of quantum mechanics, I will engage in some casual speculation about what our physical reality might be like. (As I wrote in my last post, this is something that most contemporary interpretations do not really try to do in a coherent way.) I will do this within the framework of the primitive ontology approach, which says, roughly, that:

  • A fundamental physical theory must specify the basic entities that make up physical matter, located in space and time.
  • Then, that theory must specify how those basic entities behave, in such a way as to produce the macroscopic world that we observe.

Most of the effort so far, by the researchers following this approach, has been directed at specifying the primitive ontology of matter, but it seems to me that to eventually unite quantum mechanics with general relativity, we will also need to think about the primitive ontology of space and time.

Space and Time

The two basic options we have for an ontology of spacetime are continuous and discrete.

Continuous Spacetime

If spacetime is continuous, then the way that we usually see space and time as infinitely divisible continuums is correct down to the smallest level. Spacetime forms a smooth manifold just as it does in general relativity; the only differences are quantum fluctuations in the geometry at very small scales. (From a presentist perspective, a continuous four-dimensional spacetime takes the form of a continuous three-dimensional space evolving continuously in time.)

In general relativity, the fundamental variable in the theory is the (four-dimensional) spacetime metric, which controls the geometry of space and the behaviour of matter under gravitational forces. A possible model for quantum gravity has been suggested wherein spacetime is given a privileged foliation in spacelike slices, and the fundamental variable is the (three-dimensional) spatial metric. The spatial metric controls the geometry of space, and the way it evolves from slice to slice is related to the gravitational forces.

In this model, the evolution of the spatial metric is given by a guidance equation derived from the universal wavefunction, similar to how particles are guided by the wavefunction in Bohmian mechanics. It is possible (as suggested by the Wheeler-DeWitt equation) that when quantum mechanics is united with general relativity, the universal wavefunction will be static, unchanging in time. This, I think, makes its role in the primitive ontology approach as a representation of the laws of nature more apparent: combined with the guidance equation, it determines the evolution of every possible configuration of the universe.
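Schematically (and glossing over the serious technical subtleties of canonical quantum gravity), the Wheeler-DeWitt equation is a constraint with no time derivative in it at all:

```latex
\hat{\mathcal{H}} \, \Psi[h_{ij}] = 0
```

Here $\Psi$ is the universal wavefunction, written as a functional of the three-dimensional spatial metric $h_{ij}$, and $\hat{\mathcal{H}}$ is the Hamiltonian constraint operator. Compare the ordinary Schrodinger equation, where the left-hand side would be $i\hbar\,\partial\Psi/\partial t$: since no such term appears here, the wavefunction does not change in time.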

Discrete Spacetime

If spacetime is discrete, then at very small scales the continuity of space and time breaks down: space itself is made of discrete units and all change occurs in discrete steps. There would be “atoms” of space with some form of connectivity relation between them; spacetime would be represented by a mathematical graph rather than a smooth manifold. The geometry of space would only emerge from vast collections of such atoms. (A presentist would see this as discrete space evolving discretely in time.)

A discrete theory of spacetime has also been suggested, using a guidance equation derived from the universal wavefunction, as before. Here the guidance equation would likely be stochastic, determining the probabilities for the next configuration of space, given the current configuration. Over large enough regions and enough configuration steps, a spacetime manifold (again, with a privileged foliation) would emerge as an approximation.

I am not sure which of these approaches is more likely to be correct, or which will be easier to study. The mathematics of continuous configurations gets difficult: functional derivatives and integrals in infinite-dimensional spaces. (Though some of the same mathematics is already needed, and being worked on, for the path integral formulation of quantum field theory.) The mathematics of discrete configurations may be simpler, but figuring out how the continuum emerges from a discrete configuration is a hard problem.

Personally, I like the theory of a continuous spacetime, but I think both continuous and discrete ontologies need further investigation. Either way, the dynamics should probably be such that the privileged foliation is not detectable, to agree with the no-signalling theorem.

(You could also have a discrete space with continuous time, so that the discrete structure of space evolves continuously according to a guidance equation derived from the universal wavefunction, for example. Whether such an evolution would be deterministic or stochastic is something that obviously needs investigation.)

Matter

Here is where things get interesting, and we think about what everything is made of. Particles, fields, and (so-called) flashes have all been suggested as primitive ontologies for matter.

Particles

Perhaps the most obvious choice for the fundamental entities that make up matter is particles. Bohmian mechanics uses a particle ontology to reproduce the predictions of non-relativistic quantum theory, where the particles are guided by the wavefunction. Relativistic generalizations of Bohmian mechanics have been suggested with a particle ontology, where the guidance equation for the motion of the particles is supplemented with a stochastic process for particle creation and annihilation events. In these theories, particles are created, move through space, and are annihilated, all guided by the wavefunction.

Among primitive ontology theorists, there is some question as to whether the primitive ontology should include fermions (matter particles such as electrons and quarks), bosons (radiation and force-mediating particles such as photons and gluons), or both; and, if both are included, whether they should appear in the same form. Usually, non-relativistic Bohmian mechanics includes only electrons and atomic nuclei as the primitive ontology; the electromagnetic force between them appears only in the wavefunction.

It seems to me that the discoveries of quantum physics have shown that, whatever the primitive ontology of nature is, matter, radiation, and forces behave in a fundamentally similar way. Both electrons and photons display both wave and particle natures; in standard quantum field theory, both are considered to be excitations of a quantum field (even though the standard theory isn’t entirely clear about what a quantum field is). So a satisfactory primitive ontology should include both bosons and fermions, and in the same way.

One interesting way of doing this, suggested in this paper, has both bosons and fermions as particles, moving in spacetime and being created or annihilated. Their motion and interaction is guided by the wavefunction, alongside a number of other potentials defined on the configuration space. This theory has the capability of handling gravitons, so it is a possible way of unifying quantum mechanics and general relativity.

Fields

A primitive ontology where the fundamental entities are fields is, perhaps, more in line with the spirit of quantum field theory. In such a theory, what everything (or at least, everything physical) is ultimately made of are fields that pervade all of spacetime. Quantum fields have the peculiar property that vibrations propagating through them must possess discrete amounts of energy: this is what allows us to see such excitations as particles.
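For a single vibrational mode of a field with angular frequency $\omega$, the textbook result is that the allowed energies are those of a quantum harmonic oscillator:

```latex
E_n = \left( n + \tfrac{1}{2} \right) \hbar \omega, \qquad n = 0, 1, 2, \ldots
```

The field in that mode can only gain or lose energy in whole quanta of $\hbar\omega$; each quantum of excitation is what we detect as a particle.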

To me, the field ontology is the most attractive, since it explains in one framework the reason that particles of the same kind have exactly identical properties, the reason each kind of particle has a corresponding antiparticle, the mechanisms of particle creation and annihilation, interactions between different particles, and quantum fluctuations of the vacuum. (Here is an excellent series of articles explaining various features of quantum field theory on physicist Matt Strassler’s site.) However, precisely formulating the actual behaviour of these quantum fields is difficult.

Part of that difficulty is in figuring out how fermionic fields should be represented. (Fermions differ from bosons by being forbidden from occupying the same quantum state. Essentially, two fermions cannot be in the same place at once, which is what makes matter occupy space. This makes fermionic fields more complicated to represent than bosonic fields.)

I have no idea how that problem might be resolved. There are a number of suggestions, reviewed here for example (personally, I think P. Holland’s model is promising). But since the standard theory uses certain mathematical objects called spinors in connection to fermions, I feel like some insight might be gained from geometric algebra, a less well-known set of mathematical concepts that represent spinors much more intuitively (see here for detail) than the usual formulation.

Another significant difficulty in formulating the dynamics of quantum fields is the infinities: fields have an infinite number of degrees of freedom, while any system of particles has only a finite number. Interestingly, this could be made easier if spacetime is discrete, since that would reduce the degrees of freedom to a finite number (or, at most, a countably infinite number).
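To make the counting concrete (this is just an illustration, not a specific proposal): N particles are described by finitely many coordinates, a field on continuous space by a value at every point, and a field on a discrete lattice by one value per site:

```latex
\underbrace{(q_1, \ldots, q_{3N})}_{N \text{ particles: } 3N \text{ DOF}}
\qquad
\underbrace{\{ \phi(\mathbf{x}) : \mathbf{x} \in \mathbb{R}^3 \}}_{\text{continuum field: infinitely many DOF}}
\qquad
\underbrace{\{ \phi(\mathbf{x}_i) \}_{i \in \text{lattice}}}_{\text{lattice field: one DOF per site}}
```

In a finite volume $L^3$, a lattice with spacing $a$ has only $(L/a)^3$ sites, so the degrees of freedom become finite; an infinite lattice still has only countably many.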

Flashes

A curious primitive ontology that was first suggested for an objective collapse theory of quantum mechanics has come to be known as the flash ontology. According to a flash theory, matter is nothing more than patterns of flashes – point-like, instantaneous events. (In the context of an objective collapse theory, the flashes represent locations of wavefunction collapse.) Each flash “lights up” a single point in space for a single instant of time, and then it is gone.

From a philosophical perspective, I feel that with the flash ontology, there isn’t enough there for anything to have the causal properties that I believe underwrite the laws of nature. (This paper argues against the flash ontology for the same reason. But perhaps the causal properties are just a part of spacetime itself?) It is an interesting proposal, nonetheless.

In addition to particles, fields, and flashes, other possible primitive ontologies include strings and branes, as found in string theory. (However, I think the prospects of string theory have been overstated, so I find a particle or field ontology to be more likely.) Maybe even the metaphysical forms of this interpretation of quantum mechanics, which I mentioned in my last post, could be thought of as a primitive ontology (though I would consider it closer to idealism than scientific realism).

Dynamics

The dynamics of a primitive ontology theory must specify how the fundamental entities of physics behave. Because of the discoveries of quantum mechanics, we know that the dynamics must have some strange features:

  • The behaviour of the primitive ontology may be affected by what is possible for it to do, not just by what it actually does. (This is what we see in the unexpected effects of quantum superpositions.)
  • The dynamics will include instantaneous, non-local influences.
  • The dynamics will imply fundamental limitations on our ability to measure and manipulate the microscopic degrees of freedom of the primitive ontology (a side effect of which is that we cannot exploit the non-local influences to send signals faster than the speed of light).
  • Not all of the properties we assign to classical systems can be assigned to quantum systems; some properties will be contextual and depend on how our measurement apparatus is configured.

But there seems to still be a fair amount of freedom in how the dynamics of the primitive ontology could be specified.

Determinism or Indeterminism

Despite the appearance of quantum indeterminacy, there is still the possibility that it remains only an appearance, a result of our ignorance of the details of the actual configuration of the primitive ontology. This is the case in non-relativistic Bohmian mechanics: the motions of the particles are completely determined by the wavefunction and their actual positions. (The analysis in Bohmian mechanics of how randomness arises from a deterministic universe is actually quite insightful, I find.)

Indeterminism, or laws of nature with fundamental randomness, may be required by certain primitive ontologies. Any flash ontology, any particle ontology that includes particle creation and annihilation processes, and any ontology with discrete spacetime all seem to require a component of randomness in their dynamics.

Maybe a way can be found for a field ontology to behave deterministically; the way it looks right now, I suspect that will be difficult. More work needs to be done to answer the question of whether nature is fundamentally deterministic or indeterministic.

Dependent or Independent

The easiest way to implement the strange behaviour of quantum mechanics in the primitive ontology seems to be to use a wavefunction, evolving according to the Schrodinger equation, as an auxiliary variable. Then the behaviour of the primitive ontology is derived from the wavefunction. (Though it is possible to use an object called a density matrix, evolving according to the Lindblad equation, in place of the wavefunction; and it may be possible to go without either, and specify the dynamics directly.)
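For reference, the two evolution equations mentioned here have the standard forms (for the wavefunction $\psi$ and the density matrix $\rho$, respectively):

```latex
i\hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi
\qquad \text{and} \qquad
\frac{d\rho}{dt} = -\frac{i}{\hbar} [\hat{H}, \rho]
  + \sum_k \left( \hat{L}_k \rho \hat{L}_k^\dagger
  - \tfrac{1}{2} \{ \hat{L}_k^\dagger \hat{L}_k, \rho \} \right)
```

where $\hat{H}$ is the Hamiltonian and the $\hat{L}_k$ are the Lindblad operators encoding whatever non-unitary effects the theory adds.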

There are two ways of deriving the behaviour of the primitive ontology from the wavefunction. One is to specify the configuration of the primitive ontology independently of the wavefunction, and use a guidance equation derived from the wavefunction to specify its evolution. This is the approach of Bohmian mechanics.

The other way is to calculate the configuration of the primitive ontology directly from the wavefunction, so that the configuration is dependent. This is the approach of objective collapse theories, when combined with a modification to the Schrodinger evolution of the wavefunction which prevents macroscopic superpositions.

Collapse or No Collapse

The evolution of the wavefunction itself can be specified with or without an indeterministic collapse process. This does not make too much of a difference when you have an independent primitive ontology. However, it produces two very different kinds of theories when you have a dependent primitive ontology.

When the configuration of the primitive ontology is calculated directly from the wavefunction, and an objective collapse process is included in the wavefunction’s evolution, the macroscopic world can turn out pretty much how we expect it to be: macroscopic objects more or less have a definite location and physical state.

However, if the wavefunction evolves without collapsing, something very strange can happen: superpositions of the primitive ontology, and not just the wavefunction. You end up with multiple macroscopic distributions of matter superimposed on each other, but (after decoherence occurs) not interacting with each other, causally disconnected. This is how you can have a scientific realist version of the many-worlds interpretation of quantum mechanics.

The best analogy I have seen for what this looks like is a radio that is incorrectly tuned. If your receiver isn’t quite tuned to one channel, you can hear two channels superimposed on each other, and, if you were really good at listening, you could even follow both of them at once. On a realist many-worlds theory, reality is like this, containing many independent histories: worlds overlaying each other, branching into even more diverse worlds, all transparent to each other except in the effects seen in coherent superpositions.

Many-worlds interpretations offer a very counterintuitive picture of reality. They raise serious philosophical difficulties with our intuitive concepts of possibility, probability, personal identity, free will, rationality, and moral agency. On an ontological and explanatory level, I think that many-worlds theories introduce an incredible amount of unnecessary complexity into what exists, and so Occam’s razor prefers theories with just a single world.

Because it is also simpler to have no collapse in the wavefunction (and because there are indications that the wavefunction might not actually evolve in a quantum gravity theory, so collapse would not be possible), the best way to keep a single world seems to be to have an independent primitive ontology. The primitive entities of physics are guided by, but not directly calculated from, the quantum wavefunction.

The Nature of Physical Reality

So, after all of that discussion (which turned out to be quite a bit longer than I had originally planned), here is what I believe. A realist theory of quantum mechanics, where the world we observe is made of physical entities existing objectively and mind-independently, seems entirely possible. There is no need to abandon our common-sense view of the physical world – the nature and dynamics of the fundamental physical entities need to be modified, but their reality can be retained.

Personally, I think a field ontology is the most plausible, either on continuous or discrete spacetime. The particles that our macroscopic world appears to be made of are, if this is correct, excitations in quantum fields that pervade all of space and time. The field behaviour is guided by the wavefunction of the universe, an abstract representation of the laws of physics and the causal properties of the fields. Physical reality comes down to a complex pattern of excitations and disturbances in these fields, like ripples on the surface of a lake.

It also seems to me that the quantum behaviour of the correct primitive ontology, including the ontology of spacetime, will require a privileged foliation. The foliation may be hidden from us, but it grounds the absolute simultaneity required for quantum non-locality. It also means that our common-sense experience of time is correct: there is an objective distinction between past, present, and future. Change and the passage of time are not illusions, but a fundamental part of reality.

If you are an aspiring physicist, I think you should be encouraged: there is much work to be done (so your vocation won’t be going away any time soon), and physical reality can actually make sense. Here are my suggestions for possible research directions:

  • Primitive ontology approach: bring scientific realism back into quantum physics. More specifically, I think research is needed in how the primitive ontology approach can be combined with the path integral formulation of quantum mechanics, since it seems to offer a deep insight into the origin of the principle of least action in classical mechanics.
  • Geometric algebra: I believe this extension of vector algebra has the potential to simplify at least some of the mathematics in quantum theory, and even in classical mechanics. It even provides a geometric interpretation to some of the imaginary numbers that appear in quantum physics.
  • Something different: the wheels have been spinning on things like grand unification, supersymmetry, and string theory for decades, and not much has come of it. Approaching problems in the standard model of physics from a new perspective (such as this proposal for a framed standard model) might be productive.

(I am mostly putting these suggestions down so I can say that I called it when they turn out to be important. That might be overly optimistic, but hey, I can dream.)

And that concludes my discussion of the nature of physical reality. Next, I will begin to explore the highly important question of whether there is anything else.

Quantum Reality (III): Interpretations

I have been exploring quantum mechanics in my last couple of posts, and now I want to get to the important question: what does quantum theory say about the fundamental nature of physical reality? I think one of the main reasons why this central branch of physics is so hard to understand, and why the answer to this question has remained so mysterious, is this: the standard formulation of quantum mechanics fails to clearly specify what the theory is really about.

Many of the great successes of quantum theory have been brought about by physicists trying to get the right mathematical behaviour for their equations, leaving the physical reality behind those equations as a secondary concern, at most. When we look at the standard quantum theory, what we really find is a theory about measurements, not a theory about the physical world that those measurements are intended to shed light on.

The strange features of quantum mechanics have made things difficult as well: things like superposition, entanglement, non-locality, and the impossibility of giving definite values to all measurable properties of a system at once. These features have led many scientists and philosophers of science to believe that the project of scientific realism simply fails when it arrives at quantum mechanics.

In my view, a great deal of the confusion about quantum mechanics has arisen because it is formulated in an implicitly anti-realist way – that is, treating entities like wavefunctions and quantum fields as mere calculation aides for predicting experimental results – and the fact that it is formulated in this way has not been clearly articulated.

In fact, when we try to get a clear answer to the question of what quantum theory is about, it appears to me that the majority of interpretations of quantum mechanics are ultimately anti-realist in character.

Anti-Realist Interpretations

The most common interpretations of quantum mechanics seem to fall into one of the following three categories, none of which are satisfactory for a realist view of science. Since giving up on scientific realism is, in a significant way, giving up on the project of trying to understand the physical world around us, I think we should try to do better.

Instrumentalism

Instrumentalism is one of the main forms of scientific anti-realism, and its basic idea is that the things we talk about in scientific theories (in this case, quantum mechanics) are really just useful fictions, devices for predicting the results of experiments. “Shut up and calculate” is the instrumentalist view: it is just about the experimental outcomes, nothing more.

The standard interpretation of quantum mechanics still appears to be the Copenhagen interpretation. As far as I can tell, when you try to explain the Copenhagen interpretation clearly and consistently, there is nothing that really differentiates it from an instrumentalist approach. (Either that, or it turns into a vaguely defined objective collapse theory.)

Measurements are a direct part of the formalism in the Copenhagen interpretation: it says that the wavefunction of a quantum system evolves according to Schrodinger’s equation until a measurement occurs, at which point the wavefunction collapses to give a definite outcome for the measurement. A measurement has a clear definition in terms of what it does to the wavefunction, but its definition on a more realistic, physical level is completely unspecified.
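In textbook form, the collapse postulate says that a measurement with possible outcome $a$ (with projector $\hat{P}_a$ onto the corresponding eigenspace) yields that outcome with Born-rule probability and projects the wavefunction accordingly:

```latex
p(a) = \lVert \hat{P}_a \psi \rVert^2,
\qquad
\psi \;\longrightarrow\; \frac{\hat{P}_a \psi}{\lVert \hat{P}_a \psi \rVert}
```

Here $\psi$ is the system’s wavefunction just before the measurement; nothing in the formalism says which physical interactions count as a measurement.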

But a measurement is clearly a complex physical process: any theory of physical reality should be able to explain what it is and how it arises from more basic physical processes. The Copenhagen interpretation cannot do that, since it needs measurement as a primitive concept to be able to explain anything else. So it does not explain physical reality; it merely predicts measurement results.

This interpretation often makes the additional claim that the wavefunction just represents the observer’s knowledge of a physical system, rather than representing the physical reality of the system itself. This has spawned a number of related interpretations which treat information as the fundamental entity in physics.

To the extent that information is considered as belonging to the mind of an observer, it seems to me that such approaches are really just instrumentalist interpretations. (And to the extent that information is considered by such approaches as something that exists by itself and from which reality is formed, it seems to me that they are variations of immaterialism or philosophical idealism, and not physical theories at all.) None of these really say anything about how the microscopic physical world works.

(Note also that there is actually a theorem which, given reasonable assumptions, rules out interpretations of quantum mechanics which treat the wavefunction as merely a representation of someone’s knowledge about the system: the PBR theorem.)

Two further interpretations that should, in my mind, be thought of as forms of instrumentalism are the ensemble interpretation and a basic version of the modal interpretation. The ensemble interpretation treats the probabilities arising from quantum mechanics in a classical way, describing the frequencies of results we will get if we repeat the same experiment many times. Modal interpretations treat the predictions of quantum mechanics as being about what is possible, rather than about what actually is.

Both of these approaches sidestep the question of what is actually going on at the most fundamental physical level, and so are not scientific realist views of quantum mechanics. (The consistent histories approach seems similar in this regard, since it is intended to calculate probabilities of possible macroscopic histories, but again does not specify the dynamics of anything beyond the wavefunction.)

Reality Is the Wavefunction

Interpretations that follow the Schrodinger equation to what appears to be its logical conclusion may deny that the physical world is anything like we normally think it is. They would say it is not made of matter and energy in space and time, but instead, everything is the wavefunction.

The most common interpretation in this category is the many-worlds interpretation, wherein the wavefunction describes the entire universe and never undergoes collapse, and every possible outcome of every possible measurement is realized in some part of the wavefunction. This is usually explained as the world branching into alternate histories whenever there are different possible outcomes of any event; but it should be noted that, in this interpretation, the world is not made of anything we would normally consider to be physical. Rather, it is just an abstract entity extracted from an incredibly complex mathematical function.

To understand how radical it is to claim that the wavefunction is all of physical reality, you have to understand that the wavefunction actually has almost no connection to the three-dimensional universe that we observe. The wavefunction is not a wave in three-dimensional space (or in four-dimensional spacetime). It is a function on an abstract space of a huge number of dimensions, or even an infinite number of dimensions.

We see three-dimensional wavefunctions in physics when we consider systems that, from a pre-quantum standpoint, consist of nothing but a single particle. Obviously, our universe has more than a single particle in it, so such examples are not at all realistic. A wavefunction describing a system of many particles is defined on a space with three times as many dimensions as there are particles: one dimension for each coordinate of each particle in three-dimensional space.

But if the wavefunction is all of reality, there seems to be no reason why a wave in a space of some enormous dimension D should appear as a three-dimensional world with D/3 particles, rather than (for example) a five-dimensional world with D/5 particles; or why the dimension of the wavefunction space should be constrained to be a multiple of three rather than five; or why any macroscopic world should emerge from this complex wavefunction at all.
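Concretely, the wavefunction of an N-particle system is a complex function on a 3N-dimensional configuration space:

```latex
\psi : \mathbb{R}^{3N} \to \mathbb{C},
\qquad
\psi(x_1, y_1, z_1, \ldots, x_N, y_N, z_N, t)
```

Even two particles already take us to a six-dimensional space; the D/3 counting above is just $N = D/3$.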

When you think about it, the idea that the wavefunction is the fundamental entity of physical reality is absurd. The wavefunction isn’t even physical. It isn’t in physical space; it isn’t made of physical matter or energy. Which is why I believe that any interpretation of quantum mechanics that says the wavefunction is all there is has more in common with the next category of interpretations than it does with any realist view of physics.

This is a major problem for the many-worlds view as it is usually construed, as well as for (as far as I can understand them) the transactional interpretation and the relational interpretation. This even applies to some forms of objective collapse theory, if they don’t go any further than adding an objective collapse process to the dynamics of the wavefunction.

Reality Is Mental

The measurement problem of quantum mechanics, combined with the fact that we never experience ourselves to be in a superposition of mental states, leads pretty quickly to the consciousness causes collapse interpretation. Similarly, the problems that the many-worlds interpretation has with sorting out just what worlds there are can lead to the many-minds interpretation. These are just two of the theories saying that quantum mechanics shows us that the physical reality is not all there is: the mental world plays a crucial role in the workings of physics.

One of the main problems with such interpretations is that they tend to reduce to philosophical idealism. They end up denying that the physical world exists at all, because they merely add a mental reality to the wavefunction, without postulating anything actually physical. (As I said in the last section, the wavefunction itself does not cut it.)

The other main problem is their implausibility in the light of our experience of the physical world. Idealist theories like these imply that there is no real sense in which physical things exist until a consciousness comes along to observe them: the moon does not exist if no one is looking at it. The explanation of any physical event becomes absurdly dependent on minds that, from a normal standpoint, would just be bystanders.

And I think these theories make it very difficult to explain why we observe the physical reality that we do, in comparison to explanations that retain the physical world as something that exists, and can undergo change, independently of mere observers.

Now, I don’t think idealist theories are absurd merely because they involve a mental reality. Indeed, I think we have very good reasons to believe that physical reality is not all there is, as I will write about in upcoming posts. I just don’t think any of those reasons have much to do with quantum mechanics.

(As a side note, there is one nearly idealist interpretation from Alexander Pruss that I find somewhat metaphysically plausible, and interesting from a philosophical perspective. This interpretation suggests that macroscopic physical reality is made up by Aristotelian forms, whose behaviour is guided by the wavefunction. If it were proven that there could not possibly be a viable physical realist interpretation of quantum mechanics, I honestly think something like the travelling forms interpretation would be worth looking into.)

Interpretations that take information to be the fundamental substance of reality also seem to fall in the idealist camp – at least, if they are coherent at all. I add that qualification because I am not quite sure what information even means if information is all there is. Information seems to be about something, so there would have to be a physical reality for the information to talk about after all.

Realist Interpretations

To really start to understand quantum mechanics, I think we need to get some clarity on what quantum mechanics says about the fundamental nature of physical reality. Here are the answers we have seen so far:

  • Instrumentalist interpretations: we don’t know.
  • Wavefunction interpretations: physical reality is the wavefunction. (… i.e., not physical.)
  • Idealist interpretations: physical reality is mind/information. (… i.e., not physical.)

These categories pretty much exhaust the mainstream interpretations of quantum mechanics.

Given that we have good reason to believe the physical world exists, and that science is capable of discovering truth about it, I think we should look elsewhere for a realist interpretation of quantum mechanics. Fortunately, there are a few physicists and philosophers who agree. Recently, a different way of thinking about quantum mechanics, and fundamental physical theories in general, has been suggested, called the primitive ontology approach.

Primitive Ontology

The basic idea of the primitive ontology approach is that we should specify the fundamental physical entities that our theories are about, the things that make up matter located in space and time. (These entities are the primitive ontology.) Then our theory should specify the behaviour of those entities, and allow us to explain what happens on the macroscopic level in terms of that behaviour.

This seems eminently reasonable to me. And what’s more, while there are some difficulties in doing this for quantum mechanics, it is not impossible.

There are already ways to successfully cast non-relativistic quantum mechanics as a theory with primitive ontology. The main interpretations following this approach are Bohmian mechanics and some variants of the objective collapse theory. (Some modal interpretations may also qualify.) The difficult part is extending these theories to relativistic quantum field theory, but there are possible avenues to this (see here and here) – and there are even suggestions for including quantum gravity, something standard quantum theory has not yet succeeded in doing.

In these theories, what is really happening on the physical level is described by the primitive ontology. For example, in Bohmian mechanics, the fundamental physical entities are particles moving in three-dimensional space. Their motion is governed by a guidance equation, which is determined by the wavefunction, which in turn evolves according to Schrödinger’s equation.
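The guidance equation mentioned above can be written explicitly. In the standard non-relativistic formulation (a textbook form, not one specific to this post), the velocity of particle k depends on the wavefunction evaluated at the actual positions of all the particles:

```latex
\frac{d\mathbf{Q}_k}{dt}
  = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(
      \frac{\nabla_k \psi}{\psi}
    \right)\!\bigl(\mathbf{Q}_1,\ldots,\mathbf{Q}_N,\,t\bigr),
\qquad
i\hbar\,\frac{\partial \psi}{\partial t} = H\psi .
```

Note that particle k’s velocity depends on the simultaneous positions of all the other particles, which is where the non-locality discussed later in this post enters the dynamics.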

For another example, in one version of the objective collapse theory, the primitive ontology is a continuous matter distribution in space, and it can be calculated directly from the wavefunction. The evolution of the wavefunction (and therefore, of the matter distribution) is modified from the Schrödinger equation with a random collapse process, which occurs with a higher frequency the more entangled degrees of freedom there are in the system. This makes the matter distribution behave according to quantum mechanics on the microscopic scale, but according to classical Newtonian mechanics on the macroscopic scale.
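A toy calculation shows why this collapse process is negligible for single particles but effectively instantaneous for macroscopic objects. This sketch assumes the commonly suggested GRW-style parameter of roughly 10⁻¹⁶ collapses per particle per second, and the simplification that an entangled N-particle system collapses at N times that rate:

```python
# Toy GRW-style estimate: each particle undergoes spontaneous collapse
# at rate LAMBDA (per second), so an entangled N-particle system
# collapses at roughly N * LAMBDA, giving a mean waiting time of
# 1 / (N * LAMBDA) before the first collapse.

LAMBDA = 1e-16  # commonly suggested collapse rate per particle (1/s)

def mean_collapse_time(n_particles: float) -> float:
    """Mean time in seconds before the first spontaneous collapse."""
    return 1.0 / (n_particles * LAMBDA)

# A single particle waits about 1e16 seconds (hundreds of millions of
# years), so microscopic superpositions survive essentially untouched.
print(mean_collapse_time(1))

# A macroscopic object (~Avogadro's number of particles) collapses in
# well under a microsecond, recovering classical definiteness.
print(mean_collapse_time(6e23))
```

The numbers here are illustrative only; actual objective collapse models tie the rate to more detailed structure than a bare particle count.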

In theories like these, the wavefunction need not be seen as a physical field that exists in an abstract space of enormous dimension. Instead, the wavefunction is a representation of the laws of physics, or of a property of the physical system to move in a certain way. This gives the wavefunction a clear meaning and a link back to the physical reality we observe.

Most importantly, theories with a primitive ontology completely resolve the measurement problem. The state of the fundamental building blocks of matter is specified at each moment, explaining the behaviour of the physical system being measured, the measurement apparatus, and the measurement results. Measurements are just another physical process, as they should be.

The strangeness of quantum mechanics does show up in these theories. The Kochen-Specker theorem implies that some of the normal properties we ascribe to matter cannot have the same meaning at the quantum level, and Bell’s theorem implies that the dynamics of the primitive ontology will include non-local influences.

As an example, in Bohmian mechanics, the positions and velocities of particles are well defined, but other properties like particle spin are contextual, depending on how the measurement apparatus is arranged, rather than being intrinsic to the particle. Furthermore, the velocity of a particle can depend on the simultaneous position of other particles arbitrarily far away.

But this strangeness is just a part of the reality we have discovered through science. It does nothing to damage the coherence of the picture of physical reality that the primitive ontology approach provides. This approach makes far more sense, to my mind, than any of the other interpretations of quantum mechanics.

Something interesting is that many of the primitive ontology quantum theories require a privileged foliation of spacetime to be made relativistic. In fact, allowing a privileged foliation and adopting a primitive ontology resolves serious conceptual difficulties in formulating a theory of quantum gravity, as discussed in this article (“Quantum Spacetime without Observers: Ontological Clarity and the Conceptual Foundations of Quantum Gravity”).

Many physicists and philosophers of science resist such theories, due to the feeling that a privileged foliation contradicts the spirit, if not the letter, of relativity physics. But this is just an unnecessary philosophical hang-up, rather than a serious scientific objection. Adopting a presentist theory of time, with a privileged foliation of spacetime, could open new avenues of research towards a theory of quantum gravity.


So far, my readings into the different interpretations of quantum mechanics have led me to the conclusion that the primitive ontology approach offers the most philosophically coherent way of understanding the quantum reality that we live in, and provides the only interpretations that are ultimately in accord with scientific realism. However, not much research has been put into developing these theories, so it is hard to say, in the end, what the fundamental nature of physical reality is.

Several different primitive ontologies have been suggested by different theories that are capable of reproducing the predictions of non-relativistic quantum mechanics. (Some of these have modified dynamics, such that they could in principle be distinguished by experiment; others reproduce the predictions of standard quantum theory exactly.) No primitive ontology theory has yet been successful in doing the same for relativistic quantum mechanics. And, of course, not even the standard quantum theory has been able to fully incorporate gravity.

It seems that there are ways forward; they are just difficult. For now, there is a lot of room to speculate. So, because I can, I will engage in some of that speculation in my next post.

Quantum Reality (II): Implications

In my last post, I began to explore the implications of quantum mechanics by introducing the measurement problem. In this post, I want to explore the theory further by discussing some of its strange features.


Indeterminacy

Directly related to the measurement problem and the apparent phenomenon of wavefunction collapse – where the wavefunction of a system in superposition seems to collapse to a definite state upon being measured – is the feature of indeterminacy.

Prior to the discovery of quantum mechanics, it was thought that physics was fundamentally deterministic. According to Newtonian mechanics, if you knew the position and velocity of every particle in the universe with complete accuracy, you could predict the future perfectly. Not so, it appears, in quantum mechanics: measurements of systems in superpositions are indeterministic. All we can predict are probabilities that different measurement results will occur.

Interpretations of quantum mechanics differ on whether this indeterminacy is fundamental, or whether it is due to our lack of knowledge of the true physical state of the universe, or whether it is merely an illusion due to the way we perceive reality. This could go either way, I think. There are reasonable interpretations that are indeterministic, and ones that are completely deterministic. I will discuss these various interpretations in my next post.


Uncertainty

One of the more well-known (though usually not well-understood) features of quantum mechanics is its uncertainty principles, which place limits on the precision with which we can simultaneously know certain pairs of properties. For example, if we know the position of a particle, we cannot at the same time know its momentum. And if we try to get an idea of how fast it is going, we start to lose track of where it is.

The reason for these uncertainty principles is that, according to quantum mechanics, certain pairs of properties are incompatible: a quantum system with a definite position, for example, must be in a superposition of states of momentum. Or, a particle with a definite spin about a given axis must be in a superposition of states of spin about any other axis.
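The standard quantitative statement of this is Heisenberg’s relation for position and momentum, together with its generalization (the Robertson inequality) for an arbitrary pair of observables; the incompatible pairs are exactly those whose operators fail to commute:

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta A\,\Delta B \;\ge\; \frac{1}{2}\,\bigl|\langle[\hat A,\hat B]\rangle\bigr|,
\qquad
[\hat x,\hat p] = i\hbar .
```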

The uncertainty principles put hard constraints on any theories that try to explain quantum mechanics in more classical terms, where all the usual properties of a system (like position, momentum, energy, and so on) always have a definite value. Specifically, the Kochen-Specker theorem rules out the possibility of naïve realism (that all observable properties of a system have definite values) combined with non-contextuality (that those properties are intrinsic to the system and independent of the method of measuring them). Any theory that ascribes definite properties to quantum systems cannot do so for all physical properties, while maintaining the usual mathematical relationships between them.

Similarly, the predictions of quantum mechanics, and the experimental results, also violate something called the Leggett-Garg inequality, which apparently rules out the possibility of naïve macrorealism (that all macroscopic properties of a system have definite values) combined with non-invasive measurability (that measuring a macroscopic property does not affect the system being measured).

I may need to do more reading on the Leggett-Garg inequality, because it seems like the observed violations of it have been found in microscopic, not macroscopic, systems. So the experimental results are perhaps claiming too much, as of yet. But according to this paper, quantum mechanics predicts that the inequality can be violated even in macroscopic systems, if we have sufficiently accurate measuring devices.

Scientists have demonstrated superposition effects for large molecules, and they are close (though not quite there yet) to being able to perform superposition experiments on much larger objects: for example, a mirror on a tiny cantilever beam, hypothetically put into superposition of different vibrational states from the impact of a photon. It will be interesting to see how far we can push quantum effects into the macroscopic realm.

(I should note that the superpositions mentioned in the last paragraph are coherent superpositions, which exhibit interference effects that we can identify. According to quantum mechanics, we should already be observing macroscopic systems in decoherent superpositions all the time. The fact that we do not is just the measurement problem.)

Entanglement and Non-Locality

Quantum entanglement is a phenomenon where two quantum systems, such as two particles, become interdependent. When this happens, the state of one of the particles can only be described in combination with the state of the other particle. In effect, the particles now form one irreducible system, described by one wavefunction.

For example, two entangled photons can be created in a superposition of polarization states, so that, according to the standard understanding of quantum theory, neither has a definite value of polarization. But because of the way that they are entangled, whenever the polarization of one of the photons is measured so that it takes on a definite value, the other photon also acquires a definite polarization value. (It may be the same or opposite, depending on how they are entangled.)
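In the standard formalism, such a pair is described by one entangled wavefunction rather than two separate states. For example, with H and V denoting horizontal and vertical polarization, a pair correlated to give matching outcomes would be written as:

```latex
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(|H\rangle_1|H\rangle_2 \;+\; |V\rangle_1|V\rangle_2\Bigr).
```

Neither photon has a polarization state of its own here; but once photon 1 is measured as H, quantum mechanics assigns photon 2 the definite value H as well.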

This is interesting, because it seems to imply a kind of non-locality in physics. A measurement of one entangled photon instantaneously affects the behaviour of the other one, no matter how far away it is. If this is correct, it is in contradiction with the standard understanding of relativity theory, which says that no causal influence can travel faster than the speed of light.

An obvious question to ask is whether the effects of entanglement can be accounted for in a way that does not require this kind of instantaneous distant influence. For example, perhaps there is some unknown property of the entangled photons that determines what the result of their polarization measurements will be, which they acquired when they became entangled.

It turns out, according to Bell’s theorem, that this is impossible: the predictions of quantum mechanics, which have been extensively confirmed by experiment, cannot be reproduced by any theory with strictly local variables. There will always be correlations between spatially separated events that cannot be explained by influences propagating no faster than the speed of light. Quantum non-locality is here to stay.
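The quantitative core of Bell’s result is usually stated as the CHSH inequality: with two measurement settings a, a′ on one wing of the experiment and b, b′ on the other, and E denoting the correlation between the outcomes, any local theory obeys the first bound below, while quantum mechanics predicts (and experiments confirm) values up to the second:

```latex
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
\qquad
|S| \le 2 \;\;\text{(any local theory)},
\qquad
|S| \le 2\sqrt{2} \;\;\text{(quantum mechanics)}.
```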

Bell’s theorem is often cited as ruling out the possibility of “hidden variable” theories, theories that can explain the behaviour of quantum mechanics in terms of some variables in addition to the wavefunction. This is a misunderstanding of Bell’s theorem, however: what it actually rules out is local theories. There are non-local “hidden variable” theories that can reproduce the predictions of quantum mechanics exactly, by postulating an underlying reality that explains the quantum behaviour. (I will talk more about them when I survey different interpretations of quantum theory, in my next post.)

Bell’s theorem is also often cited as ruling out local realism, so that we either have to reject locality, or reject realism (where precise definitions of realism in this context differ). It seems rather strange to me, but many physicists have opted for rejecting realism, thinking that action-at-a-distance is “spookier” than whatever rejecting realism entails.

However, this too is a misunderstanding of Bell’s theorem, as demonstrated in the article by Goldstein et al., linked above. The only relevant assumption of realism in Bell’s argument is that there are basic elements of physical reality located in space and time. Rejecting that assumption makes any concept of locality meaningless. So locality cannot be preserved by rejecting realism: either locality is false, or realism is false and it makes no sense to talk about locality at all.

On a related note, alongside claims that Bell’s theorem rules out local realism, you can sometimes also find claims that the Leggett inequalities (different from the Leggett-Garg inequality) rule out non-local realism. These claims are simply false; reports of realism’s demise have been greatly exaggerated.

The Leggett inequalities were originally reported to rule out a certain class of non-local hidden variable theories by showing that they cannot make the same predictions as quantum mechanics. But as I have already noted, there are non-local hidden variable theories that do make the exact same predictions as quantum mechanics: they do not satisfy the assumptions that Leggett made in deriving his inequalities, so they are not ruled out. Which means there are still viable “realistic” interpretations of quantum mechanics.

Relativity and Quantum Mechanics

So, Bell’s theorem proves that non-locality is part of our physical world. This puts quantum mechanics in conflict with the standard interpretation of relativity theory, which requires locality. This is a conceptual problem for these two pillars of physics.

The problem persists even in relativistic quantum field theory. The ultimate predictions of that theory are in the form of probability amplitudes for different measurement outcomes, just as in non-relativistic quantum mechanics, and those predictions include the non-local correlations used to prove Bell’s theorem. (Essentially, measurement and wavefunction collapse introduce non-locality to quantum mechanics, in both its relativistic and non-relativistic versions.)

My exploration of the nature of time leads me to believe that there is an easy and natural way out of this problem. Given my philosophical reasons for accepting presentism, I believe there is a privileged foliation of spacetime, providing an absolute simultaneity relation and an objective ordering of events. This foliation can support non-local quantum effects. In fact, quantum non-locality serves as evidence that such a foliation exists.

The predictions of quantum mechanics, despite being non-local, still satisfy the no-signalling theorem: it is impossible to send information from one observer to another at any speed greater than the speed of light. (This is due to limits on our ability to measure and control the microscopic degrees of freedom of a physical system.) Which means that as far as empirical evidence goes, there is no conflict between relativity and quantum physics. The question is whether we can form a coherent picture of physical reality to accommodate and explain that evidence.

That is what I will explore next.