Theories of Everything with Curt Jaimungal - "There is No Quantum Multiverse" | Jacob Barandes
Episode Date: February 18, 2025In this episode, Curt Jaimungal speaks with Jacob Barandes, a theoretical physicist from Harvard, about the complexities of quantum mechanics. They explore wave-particle duality, Jacob's reformulation... of quantum theory through indivisible stochastic processes, and the historical perspectives of figures like Schrödinger and Einstein. As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://tinyurl.com/SpotifyTOE Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join Links Mentioned: • Watch Part 1 of this conversation here: https://www.youtube.com/watch?v=YaS1usLeXQM • Jacob’s talks covering many of his points in this conversation: https://www.youtube.com/@JacobBarandesPhilOfPhysics • Jacob’s first appearance on TOE: https://www.youtube.com/watch?v=7oWip00iXbo • New Prospects for a Causally Local Formulation of Quantum Theory (Jacob’s paper): https://arxiv.org/abs/2402.16935 • The Stochastic-Quantum Correspondence (Jacob’s paper): https://arxiv.org/abs/2302.10778 • Schrodinger’s wave function paper (1926): https://github.com/yousbot/Quantum-Papers/blob/master/1926%20-%20E.%20Schrodinger%2C%20An%20Undulatory%20Theory%20of%20the%20Mechanics%20of%20Atoms%20and%20Molecules.pdf • The Born-Einstein Letters (book): https://www.amazon.com/Born-Einstein-Letters-1916-1955-Friendship-Uncertain/dp/1403944962/ • Probability Relations Between Separated Systems (paper) : https://www.informationphilosopher.com/solutions/scientists/schrodinger/Schrodinger-1936.pdf • John Bell on Bertlemann’s socks (paper): https://cds.cern.ch/record/142461/files/198009299.pdf • John Bell on the Einstein Podolsky Rosen paradox (paper): https://journals.aps.org/ppf/pdf/10.1103/PhysicsPhysiqueFizika.1.195 • Can Quantum-Mechanical Description of Physical Reality Be Considered Complete’? (paper): https://journals.aps.org/pr/pdf/10.1103/PhysRev.47.777 • Causation as Folk Science (paper): https://sites.pitt.edu/~jdnorton/papers/003004.pdf Timestamps: 00:00 Introduction to Quantum Mechanics 06:01 Wave-Particle Duality Explained 08:44 Distinctions Between Waves 10:36 Quantum Field Theory Insights 15:10 Research Directions in Quantum Physics 24:27 Challenges in Quantum Field Theory 31:38 Quantum Mechanics vs. General Relativity 35:47 Fluctuations in Spacetime 45:09 Probabilistic General Relativity 54:00 Bell's Theorem and Non-Locality 1:20:48 The Nature of Causation in Physics 1:23:52 Causation in Modern Science 1:30:26 Reichenbachian Factorization Debates 1:31:44 Bell's Theorem Evolution 1:35:45 Indivisible Stochastic Approach 1:38:17 Understanding Entanglement 1:42:28 Information and Black Holes 1:45:44 Phase Information Loss 1:49:03 Heisenberg and Copenhagen Interpretation 1:52:29 The Nature of Electrons 1:53:09 Exploring Open Research Questions 1:59:09 Probabilities in Statistical Mechanics 2:11:30 Problems with Many Worlds Interpretation 2:27:42 Challenges of Probability in Many Worlds 2:35:14 The Case for a New Interpretation 2:43:11 Building a Collaborative Reputation Support TOE on Patreon: https://patreon.com/curtjaimungal Twitter: https://twitter.com/TOEwithCurt Discord Invite: https://discord.com/invite/kBcnfNVwqs #science #quantummechanics #quantumphysics #physics Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You hear that?
Ugh, paid.
And... done.
That's the sound of bills being paid on time.
But with the BMO Eclipse Rise Visa Card,
paying your bills could sound like this.
Yes!
Earn rewards for paying your bill in full and on time each month.
Rise to rewards with the BMO Eclipse Rise Visa Card.
Terms and conditions apply.
We don't have a single interpretation of quantum mechanics that doesn't have serious problems.
I traveled to the oldest laboratory in the United States
to meet with theoretical physicist Jacob Barandes at Harvard.
He's the co-director of graduate studies there.
We delved into the technical depths
of his innovative reformulation of quantum theory
based on more fundamental mechanisms called
indivisible stochastic processes. My name is Curt Jaimungal, and this was part of my three-day tour of Harvard, Tufts, and MIT, where I recorded five podcasts, one of which you're seeing now, with Jacob Barandes. It was actually over seven hours long, so we're splitting it into two parts; this is part two, and part one is also linked in the description.
The others are with Michael Levin, Anna Chownika, Manolis Kellis, and William Hahn.
Subscribe to get notified.
In this episode we talk about what are the misconceptions
of the wave particle duality and entanglement,
is gravity indeed quantum,
what about non-locality and Bell's theorem,
and what exactly are indivisible stochastic processes?
Kurt, it's good to see you again.
Good to see you, it's been so long.
Wave-particle duality, what is that?
All right, so when Schrodinger introduced
the idea of his wave function in that paper in early 1926,
building out of Hamilton-Jacobi theory,
his undulatory theory of mechanics,
with this wave function that lived in a high-dimensional configuration space.
He had provided a new methodology, a technique for computing things in quantum mechanics.
He used the wave function as an indirect way to calculate energy levels.
What are the energy levels of atoms,
which then corresponded to the frequencies of radiation that came out of atoms?
Einstein had a lot of problems with this, so did Heisenberg.
One of the few things that Einstein and Heisenberg agreed on was they didn't really like Schrodinger's
wave mechanics, metaphysically speaking.
Einstein, I think, said that physics had been fully Schrodingerized at this point.
And part of the reason that Einstein in particular
was concerned was because Schrödinger embraced a kind
of what we would call wave function realism,
that the wave function is a real thing, physically,
metaphysically real thing in a high dimensional
configuration space that somehow projects its meaning
into three dimensions of physical space.
And that really where everything was happening
was in this high dimensional abstract possibility space, this configuration space.
That's where the waves were.
Eventually, Schrodinger recanted that view.
In one of our earlier conversations, I talked about how in 1928 in his fourth lecture, Wave Mechanics, Schrodinger expressed some doubt about wave function realism.
He indicated that, you know, maybe you could think of the wave function as playing out all the possible realities of what could happen to the system, in sort of a very embryonic version of the many-worlds interpretation.
But Schrodinger recanted that view in 1928 in the face of things like Born saying that the wave function should be understood as a tool for computing measurement probabilities.
But in the period from 1926 to 1928, when Schrodinger was still pushing this idea of
the wave function as being sort of physically fundamental, Einstein was very confused.
There's a very famous letter from December 4th, 1926, from Einstein to his colleague
Max Born, the same Born of the Born Rule, in which Einstein famously says
that he doesn't believe that God plays dice. This is famous, I just don't believe that
God plays dice. What people don't often know is that the very next sentence in that letter is a criticism of Schrodinger's wave function reality. He says waves in 3N
dimensional space as if rubber bands. And he even has like dots, like he writes an ellipsis
in the letter. He's like, doesn't even know what to say. What's interesting is that in the canonical translation of the Einstein-Born letters,
the collection of letters of correspondence between Einstein and Max Born, the letters
translated into English, it was translated by Irene Born, and the N is missing.
Einstein just says waves in three-dimensional space, as if by rubber, you know, the N is missing. And without the N, you don't realize that his concern is not waves per se.
His concern is three N dimensional wave functions and configuration space.
That's what he was nervous about.
So you miss something very important, but if you look at the original German, the N is there.
So, you know, Einstein had a lot of misgivings about this idea. But the idea has origins that go back earlier, right?
De Broglie's sort of matter wave idea that particles and electrons had waves associated
with them, in analogy with how light was thought to be a wave classically, and then there was
evidence coming,
starting from Planck and Einstein,
that light had a particle-like character.
This idea that certain phenomena
had both particle-like and wave-like features
became known as wave-particle duality.
And when people do a study of, for example,
the double slit experiment,
and they approach the double slit experiment
in the traditional way, one particle at a time,
with a wave function that we can pretend is moving in three-dimensional space,
but this is really just an artifact of the fact that configuration space for one particle looks three-dimensional.
It looks like you should treat the particles as a wave as it goes through the slits
to get the correct pattern over many repetitions of landing sites.
You know, we don't actually see a wave on the other side.
What we see is dots, many, many landing sites
over many repetitions of the experiment.
The wave is inferred.
But when you measure where the particle is
at the end of the experiment,
or you measure which hole it goes through,
you get a definite result,
and that makes it look more like a particle.
So this idea that sometimes things are particle-like,
and sometimes they're wave-like depending on
what feature of the system we're trying to study.
This became known as wave-particle duality.
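For reference, the standard textbook way this shows up in the double-slit statistics, not spelled out in the conversation, is the Born rule applied to the sum of the two slit amplitudes:

$$
P(x) \;\propto\; |\psi_1(x) + \psi_2(x)|^2 \;=\; |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right],
$$

where the cross term is what produces the interference fringes in the accumulated landing sites, even though each individual run registers a single definite dot.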
This is further complicated by the fact that there are waves of a different kind in physics.
Electromagnetic waves, for example.
Light is a disturbance in the electromagnetic field
that propagates like a wave through three-dimensional space.
And those are waves.
I mean, like I said, I teach Jackson electromagnetism.
We talk about waves moving through three-dimensional space.
It's very easy to confuse the waves of a field,
like the electromagnetic field, with the wave functions
or Schrodinger waves of quantum mechanics,
but they're not the same thing.
And this has bled into the wave particle duality.
When Planck in 1900 and Einstein in 1905
and various people were proposing that light came in quanta,
discrete particle-like quanta called photons,
the wave that they were imagining was the wave
corresponding to photons
was a three-dimensional electromagnetic wave,
a wave of the familiar kind of wave.
The wave functions that Schrodinger introduced in 1926
were not like those waves.
They were not three-dimensional waves in physical space of a field.
They were these abstract, complex, valued functions in a high-dimensional configuration space.
And when you measured them, they collapsed.
Now, if you're in an MRI machine and they've turned on a very strong magnetic field,
you don't have to worry that if you do the wrong measurement, you're going to collapse the magnetic field in the MRI machine. It's not that kind of field. The
waves they're beaming at you are not those kinds of waves. So you have to make a distinction
between field waves, the waves of a field, and Schrodinger waves. And I want to make
super clear that in the indivisible stochastic approach to quantum mechanics that we've been
talking about,
I'm saying Schrodinger waves are not real things.
These abstract things that live in this high dimensional configuration space, those are not physically real.
But classical waves or the waves of a field, which are a different, conceptually different kind of a wave,
those are perfectly valid.
And if you're studying a system that's not made of particles, but a system made of fields
you're gonna see wave like behavior as well, but those are a different kind of wave and
These are the kinds of subtleties that I think get lost when someone just says wave-particle duality. So again, just to summarize: the relationship between a photon, the particle of light, and an electromagnetic wave is not like the relationship between an electron and a Schrodinger wave function for the electron.
Now, what makes this even more confusing
is that electrons do have fields also.
There's a so-called Dirac field that plays a very important role in the standard model.
And this is a field, a field in three dimensions
for the electron, but the Dirac field for the electron is not
the Schrodinger wave for an electron.
So you know, these are super subtle distinctions, but it's important to keep them in mind.
What makes it even more confusing is that particles like electrons, which are called
fermions, these are particles that have an intrinsic half-integer spin. They're the particles that obey a Pauli exclusion principle; you can't put them all in the same energy state. They make chemistry possible by not having all the electrons collapse into the ground state. Electrons are like this, and so are quarks, protons, and neutrons. Although they have fields associated with them, the fields associated with them are not classical fields like the electromagnetic field. The fields are
much more bizarre and weird and I'm not gonna have time to talk very much about them except to say that one of
the limitations of Bohmian mechanics is that it has a great deal of difficulty dealing
with the kinds of fields associated with fermions.
And that's one reason why Bohmian mechanics has difficulty, the Bohm pilot wave theory.
I'm getting way ahead of myself, but I just wanted to just clarify what's going on in
wave-particle duality.
So in the indivisible stochastic approach, there are no Schrodinger waves as part of the fundamental physics.
Of course, when you go to the Hilbert space picture, you can mathematically write down wave functions and use them,
write down Schrodinger waves, but they're not physically there.
You don't need them to explain the interference patterns.
The indivisible stochastic dynamics itself generically predicts that, over many repetitions of the experiment, you'll have dots that look like they're following some kind of wave equation, but there is no wave actually involved in those experiments. But I'm not saying that
field waves, the waves in fields are not there. That's a different kind of wave.
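As a rough illustration of what "indivisible" means here, the following is a small toy sketch, my own construction in the spirit of the stochastic-quantum correspondence paper rather than code taken from it: build transition probabilities from a unitary, confirm they form a legitimate stochastic matrix, and check that they generally fail to factor through intermediate times the way a Markov chain would.

```python
# Toy sketch (assumed construction): transition probabilities
# Gamma_ij(t) = |U_ij(t)|^2 built from a unitary U(t) = exp(-i H t).
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # simple two-level Hamiltonian (hbar = 1)

def gamma(t):
    """Transition matrix Gamma_ij(t) = |U_ij(t)|^2; columns sum to 1."""
    U = expm(-1j * H * t)
    return np.abs(U) ** 2

t = 0.4
G_t, G_2t = gamma(t), gamma(2 * t)

print(G_t.sum(axis=0))               # [1. 1.]: valid probabilities at each time
print(np.allclose(G_2t, G_t @ G_t))  # False: the 0 -> 2t transition does not
                                     # factor into two stochastic steps, i.e.
                                     # the process is indivisible (non-Markovian)
```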
So speaking of these waves, you mentioned quantum field theory indirectly with Dirac.
Does your approach illuminate any aspect of quantum field theory or the standard model?
We've been talking about quantum mechanics, sure, especially in part one and part two.
What about QFT?
Yeah.
So, one of the nice things about Bohm's pilot wave theory is that it works really beautifully for systems of fixed numbers of finitely many non-relativistic particles.
That's a lot of qualifications.
It doesn't work so easily for fields.
You end up either having to do very complicated things or maybe even introducing stochasticity of some kind.
It gets kind of messy and there's a lot of difficulty handling fermionic fields in particular, the fields associated with particles like
electrons. One of the advantages of this approach is although, okay, so one of the
so let me just say something very quickly about Bohmian mechanics now this is
different because this is also related. In Bohmian mechanics for again systems
of fixed numbers of finitely many non-relativistic particles,
we have deterministic equations. There's a pilot wave that guides the particles around.
The wave function, the pilot wave obeys the Schrodinger equation.
Then another equation called the guiding equation is how the wave function, the pilot wave, guides the particles around.
And everything is deterministic. There's no fundamental probabilities.
There are some initial uncertainties
in the initial configuration of the system,
and these evolve to become the Born Rule probabilities later,
but the dynamics is fundamentally deterministic
and is not generating the probabilities
in a fundamental law-like way.
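For readers who want the equation being referred to, the standard non-relativistic form of the guiding equation (a textbook statement, not quoted from the conversation) for particle $k$ at position $\mathbf{Q}_k$ is

$$
\frac{d\mathbf{Q}_k}{dt} \;=\; \frac{\hbar}{m_k}\,\mathrm{Im}\!\left[\frac{\nabla_k \psi}{\psi}\right]\Bigg|_{(Q_1,\dots,Q_N,\,t)},
$$

with $\psi$ obeying the Schrodinger equation; each particle's velocity depends on the wave function evaluated at the entire instantaneous configuration, which is part of why a preferred slicing of space-time becomes an issue relativistically.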
This picture is in some ways very elegant,
provided you're okay with a pilot wave living
in a high dimensional configuration space.
Although I should say that Goldstein, Dürr, and Zanghì
have already proposed the idea that the Bohmian pilot wave
is law-like and not a physical thing.
So there are other ways to read this theory.
The problem is it helps itself to a lot of very special features of
models that consist of fixed numbers of finitely many non-relativistic particles, features that
are unavailable when you go to more general systems like fields. So you end up having
to write down a very different looking model, including in some cases models that you need
to now deal with stochasticity and indeterministic dynamics, and they just don't really work very well when you try to go beyond.
One of the other things that Bohm mechanics requires is a preferred foliation of space-time.
So last time we spoke, we talked about how in special relativity, there's no preferred
way to take space and time and divide it up into moments of time, like different ways
to do it.
The guiding equation, the equation that takes the pilot wave
and explains how the pilot wave,
obeying the Schrodinger equation,
how the pilot wave guides the particles around,
they call the guiding equation,
depends on there being a preferred foliation of space-time,
a slicing of space into moments of time.
That's also not really great.
It works fine in the non-relativistic case,
but we wanna do relativistic physics,
like we often do when we wanna do quantum field theory,
which is the kind of models we use when we want to deal with special relativity and quantum mechanics together,
as in the standard model.
Preferred foliation is really difficult to deal with.
Not impossible, but it'd be nice if we didn't need it.
In the indivisible stochastic approach, there's no guiding equation.
There's no pilot wave.
It's not that you solve the Schrodinger equation, get a pilot wave, and then take the pilot wave and plug it into a guiding equation, which depends on a preferred foliation, and then the guiding... none of that happens.
There's just the indivisible stochastic dynamics, which can be represented in Hilbert space language.
But the dynamics is just directly happening.
There's no middleman.
There's no pilot wave and guiding equation in the middle.
This means the theory is not gonna be deterministic.
I think one question in the comments is,
is this fundamentally deterministic or not?
It's indeterministic.
It's not a deterministic theory.
But because there's no guiding equation,
there's no preferred foliation.
And because we're not relying
on all these special features of the particle case,
it's perfectly easy to now generalize this
to more general kinds of systems.
Have you done it?
Have I done it?
Good question.
So there's this thing called time.
And time is bounded and limited.
Is it?
It is, amazingly.
In your framework?
At least in my life.
OK.
And when we get to open questions, like research directions, which maybe people watching this
may be interested in because I mean the best part of a new formulation or picture or model
or whatever is are there things people can work on?
There are things people can work on.
This is one of the things people can work on.
So it is, the term here is "straightforward in principle," to generalize this to quantum fields, because none of the obstructions are there
like they were before.
One of the problems with Bohmian mechanics is
your wave function has to live in a space,
configuration space.
And fermionic particles don't have a familiar kind
of configuration space.
This is what makes it so hard to do Bohmian mechanics.
But there's no pilot wave here.
So you just don't even have that obstruction.
So many of the things that would have obstructed us
from just applying this to any kind of system
are just, they're just not there anymore.
So if you wanna deal with a field theory,
you just replace particle positions
with localized field intensities.
These become your degrees of freedom.
And then you just apply the stochastic laws to them
and it works the usual way.
The problem with quantum field theory
is that quantum fields in general have infinitely many degrees of freedom, infinitely
many moving parts. At every sort of point in space in the most sort of, you know, I
mean, this is a whole renormalization story of field theory, but like at a simplest sort
of like bird's eye view, you have a degree of freedom at every point in space, you have
infinitely many of them. And this makes them very mathematically difficult to deal with.
Even in the traditional Hilbert space or path integral formulation, quantum field theories
are really mathematically tricky.
And there are very few, if any, I think there are none, rigorously defined quantum field
theories that are also empirically adequate.
Like none of the quantum field theories that make up the standard model have been rigorously
defined.
This means that anytime you mention quantum field theory, you're going to run into mathematical difficulties that are just because quantum field theory is
mathematically very complicated. So I think there's a
research direction for an enterprising student to not only formulate quantum field theory in this language, but also to see: does it make any of the mathematical difficulties easier?
Do any of them become harder?
Like what exactly does it look like when you do this super carefully?
And that's, I would say, an open research question.
But many of the obstructions that are in the way in, for example,
Bohmian mechanics are no longer in the way here.
As you know, on Theories of Everything, we delve into some of the most reality-spiraling
concepts from theoretical physics and consciousness to AI and emerging technologies.
To stay informed, in an ever-evolving landscape, I see The Economist as a wellspring of insightful
analysis and in-depth reporting on the various topics we
explore here and beyond.
The Economist's commitment to rigorous journalism means you get a clear picture of the world's
most significant developments.
Whether it's in scientific innovation or the shifting tectonic plates of global politics,
The Economist provides comprehensive coverage that goes beyond the headlines.
What sets The Economist apart is their ability to make complex issues accessible and engaging,
much like we strive to do in this podcast.
If you're passionate about expanding your knowledge and gaining a deeper understanding
of the forces that shape our world, then I highly recommend subscribing to The Economist.
It's an investment into intellectual growth.
One that you won't regret. As a listener of Toe, you get a special 20% off discount. Now you can
enjoy The Economist and all it has to offer for less. Head over to their
website www.economist.com slash Toe, T-O-E, to get started. Thanks for tuning in and
now back to our explorations of the mysteries of the universe.
What is it about quantum field theory that makes it not rigorously defined other than the path integral?
Because there are other approaches to quantizing than the path integral.
How much time do we have?
So, what makes it hard?
Not hard. Not rigorously defined.
Not rigorously defined.
So, well, I mean, we have rigorously defined quantum field theories,
but they tend to be quantum field theories defined in very low numbers of space-time dimensions where you can like rigorously define all the integrations
and take all the limits.
We have quantum field theories defined
by what are called the Wightman axioms,
but these axioms are very strong
and preclude the kinds of quantum field theories
that seem most apt to describe sort of nature.
There's so many different angles I could take for this.
I'll just pick one.
So here's one way to see what can go wrong.
If you take quantum electrodynamics, which
is the quantum field theory that best describes electrons,
and if you want, you can add some of the heavier
cousins of electrons like muons, and interacting with photons,
with the electromagnetic field.
I should say, by the way, that most of what we do when we do quantum field theory is not
look at particles dancing around.
What we do is we introduce in the asymptotically distant past, a so-called in-state, a quantum
state vector that consists of some menu, some assortment of particles that are supposed to come into the experiment.
And then we write down some menu of outgoing particles.
You might go, how do we know particles gonna come out?
Well, we don't, we're gonna be computing a probability
that this goes to that.
So we start with some incoming particles,
we start with some proposed outgoing particles.
And then using either the path integral
or other calculational methods,
we compute the so-called
complex-valued scattering amplitude.
It's the complex number you get when you put these
things together.
It's the complex number that when you mod squared
is supposed to be connected to the probability you'll
get that particular outcome.
In practice, what we do is compute what are called
scattering cross-sections, which is like what fraction
come out one way, what fraction come out another way.
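Schematically, in the usual notation (assumed here, not written out in the conversation), the object being computed is

$$
\mathcal{A}(\text{in} \to \text{out}) \;=\; \langle \text{out} \,|\, \hat{S} \,|\, \text{in} \rangle,
\qquad
P(\text{in} \to \text{out}) \;\propto\; |\mathcal{A}|^2,
$$

with $\hat{S}$ the scattering operator; cross-sections and decay rates are then built by summing these mod-squared amplitudes over the relevant final states and kinematics.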
Notice these are all phrased in a way
that is perfectly consistent with the textbook formulation
of quantum mechanics.
We're not asking what's going on in between.
We're not dealing with macroscopic systems.
We're doing exactly what the textbook axioms ask us to do,
which is you prepare,
you compute probabilities of measurement outcomes,
all the averages and numbers you're doing
are like experiments repeated large numbers of times.
So you're not gonna run in, for the most part, to any of the fundamental inconsistencies or ambiguities in textbook quantum theory.
So it's very easy to do quantum field theory and think there's no problem. Everything is great. We're doing quantum field theory. What's the problem?
It's because most of what you're doing just doesn't run into any of those ambiguities you run into with the axioms. Now, this theory is very useful and we can compute a lot of stuff.
We can't compute everything.
There's some ingredients that you have to take from experiment, right?
The so-called physical couplings, you have to go out and measure and you plug them
into the model, because if you just sort of naively try to compute everything from
first principles, what you discover is that certain quantities you might want to compute depend very sensitively
on sort of the upper boundary of what you've put on the theory.
So when you do, when you study a theory like this, you recognize you can't access arbitrarily
high energy physics.
Our experiments don't pump in more than a certain amount of energy.
So we shouldn't extrapolate the theory to arbitrarily high energies.
We're going to put a cutoff on the theory.
We're only going to discuss what's happening in the theory
up to some energy level, some energy cutoff.
It's just that some of the things you might want to calculate
from first principles depends sensitively on the cutoff
and those are things your theory cannot provide you with.
So we have to take some things from empirical data and put them in.
We plug them in, they become some of the parameters in our theory.
The standard model has about 20 or so of these parameters
You have to take from experiment and plug them in and once you have them you can now make
Huge numbers of non-trivial highly accurate predictions about what happens
But you still have this upper boundary and if you try to calculate things at arbitrarily high energies
Eventually your calculations stop working.
So one of the dirty secrets of physics is that much of the calculating we do is highly approximate.
A lot of it, when we do it by hand, is using a tool called perturbation theory,
which I cover in some of my courses.
Perturbation theory is a systematic recipe for predicting, for calculating things.
And this recipe just doesn't work very well once you start trying to push your predictions beyond a certain energy level.
There's a trick you can do to change
what the theory looks like
as you study the different energy levels.
It's called renormalization.
And what you find is that some of the parameters
in the theory, they stop having values
that make it possible to do consistent perturbation theory.
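A standard one-loop illustration of this kind of behavior, a textbook example rather than something worked through in the conversation, is the running of the QED coupling with energy scale $\mu$ for a single charged fermion:

$$
\alpha(\mu) \;=\; \frac{\alpha(\mu_0)}{1 \;-\; \dfrac{2\,\alpha(\mu_0)}{3\pi}\,\ln\dfrac{\mu}{\mu_0}},
$$

which grows with $\mu$ and formally diverges at a finite scale (the Landau pole), one concrete sense in which the perturbative machinery stops giving consistent answers at arbitrarily high energies.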
Now if you wanted to rigorously define a quantum field theory, what you want to do is take
some kind of a limit where you can study the theory at arbitrarily high energies.
This corresponds roughly speaking to being able to assign degrees of freedom to arbitrarily
small points in space.
And you see there's immediately an obstruction here.
For most of our theories, there is a cutoff.
There's a boundary we can't go beyond.
The theory simply doesn't let us go to arbitrarily high energy scales.
And so we're not going to be able to write down a so-called ultraviolet complete, like
perfectly fine-grained version of this theory.
We can only use a theory up to some cutoff.
The standard model, for example, is not expected to hold to arbitrarily high energy scales.
We think that the theory is reliable only up to something like the 10 tera-electron-volt scale.
And it's one of the reasons why we're building experiments or trying to build experiments or to do experiments
to probe the physics that's going on above those scales where the standard model may no longer be making the correct predictions. All of this takes us pretty far, no pun intended,
afield of what we were talking about before,
but these are the kinds of things,
like maybe quantum field theories in the real world,
real life out there in the wild,
quantum field theories are never perfectly defined.
Maybe all we have is a cascade
of so-called effective field theories
that are all well-defined within
some bounds.
And there is no ultimate theory that is like perfect and fine-grained and, you know, perfectly
fine-grained and ultraviolet complete.
Maybe there's just like a tower of these theories.
This makes questions of ontology and what's physically out there, I think, very murky.
Because if we think there's never going to be some fundamental theory at the bottom of all of this,
what really is out there in nature?
That's, I think, an open question.
Whether or not you wanna rephrase that question in the language of this sort of indivisible stochastic approach, I don't know.
Or it could be that these theories tap out to some ultimate quantum field theory,
or some very different kind of theory, like string theory or something like that.
I mean, there are many proposals for maybe where this terminates. But I don't know. What I will say is this, though. There is a view that nature is fundamentally described by Hilbert spaces and quantum mechanics, the Hilbert space formulation of quantum mechanics. And if that's fundamental, then we already know what nature fundamentally is. Nature fundamentally is some wave function.
That's it. Some wave function in some Hilbert space.
We don't exactly know the features of the wave function.
We don't know whether it's best described in terms of fields or something else.
But we already know the fundamental ontology of nature.
It's a wave function. So we're good.
I think that's too ambitious. In the indivisible stochastic approach, there's no wave function. So the wave function is not what the ontology is. The ontology is whatever your choice of configurations are. And if you're
modeling particles, you use particle configurations. If
you're modeling fields, you use field configurations. If there's
some ultimate system that grounds everything else, some
system at the bottom, some system that if we understood it and understood its laws,
we would have the unified theory of all the physics.
There would presumably be some configurations for that,
and we would use those instead.
But we don't know that theory yet.
And so I think it's premature to think that we know the right fundamental degrees of freedom.
So if someone asks me, what do I think is fundamentally out there?
I don't know.
But then I'm just where we were 100 years ago or 150 years ago.
We don't know the ultimate theory of nature yet.
And I think it's premature to at this point guess what the ultimate ontology is going to be
until we have that theory, should we ever have it.
Well, I'm interested in ultimate theories.
Theory of everything is the name of your podcast.
Sorry, I don't have one for you.
Well, I'm interested in your thoughts into how to merge quantum field theory with gravity.
So I know we have a slew of audience questions and we're going to get to them,
but they're gonna have to wait because I have these questions first.
These are like close to what I wanted to also talk about. So go ahead. Yeah.
Great. Great.
Okay, so two questions here.
People say that, okay, look, we have this Heisenberg uncertainty and that applies even
in QFTs and so thus, space-time is shaky.
Okay, but space-time in QFT is defined.
You can perfectly pick out an x, t and the values of creating particles in the fields themselves
vibrate or are uncertain, but the space-time itself is there and given. However, some people
say that if you were to zoom in and you follow QFT, because of Heisenberg's uncertainty,
you thus get to uncertainties of space-time itself. Is that argument valid? Now, I imagine that one of
the ways that they get to this argument is by saying you have an energy-time uncertainty
in general relativity, space-time itself has energy, and so thus space-time itself must
have some uncertainty to it. But then you could also say, well, in QFT, you don't know
if the energy that it's talking about is the same of...
Well, okay.
If you have a statement that applies to all natural numbers you can't just say that any...
So x squared is always going to be a natural number if you're pulling from natural numbers.
But not every number squared is a natural number.
So it depends on the scope of what you're quantifying over.
So is it the case that in QFT we can apply the energy-time uncertainty to GR?
I don't know because so that's one question. We should address that question first before we ask any more questions. Yeah, so
It's important to step back here and just make sure we know what we're all talking about
So a quantum fields just because maybe not everyone knows what they are
So in a typical Hilbert space formulation of a quantum system, we have observables. Observables are these self-adjoint operators.
And in a quantum field, we have operators associated with all points in space.
And if we work in a formulation in which we move the time evolution out of the state vectors
and into the observables, we have what's called the Heisenberg picture, and then our field operators depend on space and time.
It's a fancy way of saying everywhere in space time,
we've got these sort of local operators
that are associated with points in space and time.
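In symbols, under the usual conventions (my gloss, not a quotation), the Heisenberg-picture field operator is

$$
\hat{\phi}(\mathbf{x}, t) \;=\; e^{\,i\hat{H}t/\hbar}\,\hat{\phi}(\mathbf{x})\,e^{-i\hat{H}t/\hbar},
$$

so the time dependence sits in the operators rather than in the state vector.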
Quantum field theories like QED,
we talked about quantum electrodynamics,
they presuppose this classical background space time.
There's no gravity.
Space-time is usually treated as flat. We call flat, ordinary special-relativity space-time Minkowski space-time.
You can do quantum field theory in a static curved space-time.
Still not treating gravity as dynamical, but that gets very complicated.
Let's just start with that class: quantum field theories like QED on special-relativistic, flat, non-dynamical space-time.
In that case, you're right.
X, Y, Z, the coordinates of where you are, and T, do not fluctuate.
They are fixed features of the background architecture of space time.
They're the stage on which the action is happening.
Your question about the uncertainty principle and about fluctuations of fields is an interesting question.
In the Dirac-von Neumann formulation of quantum mechanics,
nothing is fluctuating between measurements,
because nothing is happening between measurements.
The only things that are happening are measurements
in the Dirac-von Neumann formulation.
So to say, oh, when you're not measuring it,
the fields are just like dancing,
the Dirac-von Neumann axioms don't say that.
They say nothing about it.
They don't say that particles are zooming around.
The Dirac von Neumann axioms don't let you say, Oh, the reason this happened was
an electron emitted a photon. All of that is for color.
I said this in one of our earlier conversations that physicists often talk this way.
They're like, Oh, this happened because an electron emitted that and did this.
And, oh, the field was fluctuating.
If you're only working on the Dirac-von Neumann axioms, all of that is just
fluff, fluff. None of it is really legitimated by the axioms. Now, if you're frustrated by that, you're like, well, but
surely something is happening. I want to be able to say something is happening. Well, then you're on my side, which is
that we need to do something to the Dirac-von Neumann axioms, you're making my case for me. Okay? So the uncertainty principle, the traditional,
we talked about this a little earlier,
an observable, you know, corresponds to a certain basis
and when the state vector of your system
is aligned with one axis of that basis,
you're definitely getting that result when you measure it.
If the state vector is not aligned with that basis vector,
it's got components along multiple basis vectors,
then you're gonna have probabilistic measurement results given by the Born rule.
And you can be aligned along the axis of one observable and have a definite result, but
not along one axis of another observable, and you don't have a definite result.
And if you change the state vector so that it's aligned along the axis of one, it's not
aligned along the axis of the other, and this is the uncertainty principle, that some observables
will have sharp values that when you measure them, you always get a definite result and
others won't.
And if you try to make one observable sharp,
others will become unsharp.
This is the uncertainty principle.
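For reference, the textbook statements being paraphrased here, in standard notation, are the Born rule for measurement probabilities and the Robertson form of the uncertainty relation:

$$
P(a_i) \;=\; |\langle a_i | \psi \rangle|^2,
\qquad
\Delta A \,\Delta B \;\geq\; \tfrac{1}{2}\,\bigl|\langle \psi | [\hat{A},\hat{B}] | \psi \rangle\bigr|,
$$

where $\Delta A$ and $\Delta B$ are spreads in measurement outcomes, not fluctuations of anything going on between measurements.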
But notice again, this is all phrased at the level of measurements.
We're not saying that between measurements,
anything is fluctuating.
So honestly, there's like no way to really talk about what,
like to say that the field is fluctuating
on top of the space time,
or to say anything more about the Heisenberg uncertainty principle, other than this is the pattern of measurement results
we get when we do measurements on the system.
Now, if you wanna do something like Bohmian mechanics
or the indivisible stochastic approach
or the many worlds approach or something like that,
now we can actually begin to talk about
what's happening between measurements
because these are all theories that describe
things happening that are not just
the narrow class of measurements.
In some of the theories,
like in the indivisible stochastic approach,
there's stochastic behavior.
The fields really are fluctuating.
In the Bohmian approach, it kind of depends
on whether you're trying to get fields
into a deterministic kind of Bohmian approach
or whether you're going to allow the fields
to be stochastic in some sense.
There are many different formulations of Bohmian mechanics for fields, and I can't do justice to all of them. In some of them the fields would be fluctuating, in some of them they wouldn't be.
In some of them you deny that there are fields and try to do everything with particles somehow.
Many worlds is a little more subtle because in many worlds,
there's not one reality in which things are fluctuating, it's just more subtle.
And we'll talk about the many worlds approach a little bit later.
So I wanted to just get that out of the way before we then talk about the more subtle
question about is space-time fluctuating.
So when you go from field theory, like quantum field theory like QED, where again, the thing
you're primarily computing is scattering amplitudes.
You set up a preparation, you get measurement results, you're computing cross-sections,
decay rates, those sorts of things.
Now you wanna talk about gravity.
So in general relativity, gravity is a manifestation
of the change in curvature of space-time.
Space-time doesn't stay flat, it curves.
People often ask, where's a curving?
Is it curving in some other dimension?
There's a way to define curvature called intrinsic curvature
that does not make reference to other dimensions.
You can define it solely in terms of the four dimensions of space plus time, so you don't
need an extra dimension for curvature.
But there is this notion of intrinsic curvature, and if gravity is quantum mechanical, does
that mean that the curvature or the shape of space time or the geometry is also fluctuating
in some sense.
Now there's this discussion about, well, if you zoom in, does it look, I mean, I don't even, I don't exactly know what zooming in means.
Do you mean if you're doing measurements or something like that?
I mean, if we do very, like precise measurements on a quantum field on a background space time,
we may get a large variance of results.
But those are measurement outcomes.
It's not the field doing anything between measurements, because again, without augmenting the Dirac-von Neumann axioms, we can't talk about what the fields are doing. Is space-time fluctuating? Well, according to the Dirac-von Neumann
axioms, we can't say that. We could only say something like if you do some kind of measurement
of space, it's fluctuating. But I don't quite know how to measure space
the way we would like measure the intensity of a field.
It's like kind of subtle because the relationship
between the gravitational field
and the curvature of space-time
and the behavior of test particles,
of particles moving around on space-time,
it's like really subtle.
And even the notion of energy is super subtle
in general relativity.
Like there isn't an invariant non-trivial definition
of local energy density
in the gravitational field itself in general relativity.
So, you know, it's actually really hard to pin down
even what we mean by all of this.
And we're not gonna be guided by experiment
because we would expect to see
distinctly quantum mechanical features of the gravitational
field unless there's some miracle.
We wouldn't expect to see that until you're talking about Planck scale physics.
Planck scale physics is the physics associated with distance scales that are like 10 to the
minus 43 meters.
Right?
I mean, they're as far from an atom as like an atom is from like the observable universe or something like that
I mean, it's so far away from
Maybe I don't have the exact orders of magnitude worked out, but the Planck scale is really, really small. I guess that's 10 to the minus 43 seconds; that's the Planck time. The Planck length scale is 10 to the negative 35 meters. So these are super duper tiny, tiny distance scales.
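For reference, the standard definitions of the scales being gestured at here are

$$
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m},
\qquad
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\ \text{s},
\qquad
E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}\ \text{GeV}.
$$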
And we can't do experiments there.
So we have no experimental data to guide us.
So this is exactly a situation in which we need
the kind of careful rigorous scrutiny that one gets from,
yes, understanding the physics as well as we can,
but also having a strong background in philosophy.
Because it's very easy to make statements
that are super speculative,
that build speculations on top of speculations,
to make what I call speculative metaphysical hypotheses, SMHs,
and the acronym is not an accident,
just to tower them on top of each other,
and then not to know whether what you're saying
is something that's genuine and reliable.
So I don't have any idea whether we should be thinking about space-time is truly fluctuating. The
indivisible stochastic approach, like all approaches to quantum mechanics, faces fundamental first order
conceptual difficulties in dealing with space-time that fluctuates like a curving dynamical space time. Let me explain why.
In order to talk about stochastic probabilities
and division events and all this stuff,
you need some notion of what you mean by time,
by slices of the universe at constant time.
You need the ability to talk about which directions in space time are the directions that are space-like directions,
and which directions are the time-like directions.
When you want to specify your configuration of your system, you're doing it at a time over some region of space.
And so you really need to know which slices of space time are the space slices.
And that's all well and good when you're doing Newtonian
or non relativistic space time, even in quantum mechanics,
not necessarily Newtonian,
or even special relativistic space time.
In special relativistic space time,
you're given which directions are time
and which directions are space and they're just fixed.
But when you consider dynamical space time,
so space times in which the so-called metric tensor,
which is the kind of fieldish thing
associated with space time and general relativity,
the metric tensor is the thing that tells you
which directions are time and which are space.
If that is itself fluctuating, you don't know a priori
which directions are the space directions
and which directions are the time directions and which directions the time directions.
So you can't even obviously phrase a probabilistic theory. And this is very curious. For one
thing it means it's very difficult to understand whether space-time fluctuates, even in an
indivisible stochastic theory, because it's hard to even specify, like, where do I put
my probabilities? What is my initial, you know,
these probabilities are conditional.
They connect one configuration to another
at one time to another time.
But if I don't know which directions are time directions,
how do I do this?
If the space time itself is itself fluctuating,
that's interesting.
But what's also interesting is it highlights a gap
in the scientific study of quantum gravity.
So here's a very interesting thing.
We can take classical Newtonian physics, we can numerically simulate it on a computer,
and we can also model many Newtonian systems probabilistically as Markov processes.
Often the Markov approximation works perfectly well and can be used all the time
to model Newtonian systems, to model other kinds of systems.
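For readers who want the definition being invoked: the Markov property, and the divisibility (Chapman-Kolmogorov) condition it implies for the transition matrices, are, in standard notation,

$$
P(x_{t_3} \mid x_{t_2}, x_{t_1}, \dots) \;=\; P(x_{t_3} \mid x_{t_2}),
\qquad
\Gamma(t_3 \leftarrow t_1) \;=\; \Gamma(t_3 \leftarrow t_2)\,\Gamma(t_2 \leftarrow t_1),
$$

and it is exactly this factorization through intermediate times that an indivisible stochastic process need not satisfy.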
And there are stochastic methods, stochastic formulations
of other physical theories beyond Newtonian mechanics.
There isn't one for general relativity.
So Einstein's theory of general relativity
is not a probabilistic theory.
Einstein's theory of general relativity
is a deterministic type theory.
It's more subtle.
There's some questions over whether it's always
formulated in a Markovian way.
So there's even some evidence from general relativity,
even just ordinary general relativity
that maybe the Markovian picture is not quite the right picture.
And there's some amazing people like Emily Adlam,
who's at Chapman University, philosopher of physics,
and Eddie Chen at UCSD,
and Shelley Goldstein at Rutgers,
who are trying to think about laws of physics
in a different, more sort of global space-time sense
that would fit better with theories like general relativity.
And there may be some connections
to indivisible non-Markovian type laws.
But in any event, general relativity is phrased
in sort of a deterministic, non-probabilistic way.
And people are trying to work on quantum gravity now,
but you might've asked, shouldn't we have worked on an intermediate step first?
What about just a probabilistic version of general relativity?
Like a formulation of general relativity that is stochastic, that is, we take
Einstein's field equations, the equations that describe the deterministic
shape of space-time, and replace them with a probabilistic version.
Not a quantum version, with a probabilistic version,
not a quantum version, just a probabilistic version,
like as a stepping stone.
You might have thought that would be the natural thing to do
before trying to go to a fully quantum version of the theory.
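The deterministic equations being referred to, in their standard form (not written out in the conversation), are the Einstein field equations

$$
G_{\mu\nu} + \Lambda g_{\mu\nu} \;=\; \frac{8\pi G}{c^4}\, T_{\mu\nu},
$$

so the stepping-stone idea would be some law assigning probabilities over metrics $g_{\mu\nu}$ rather than determining a single metric from initial data.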
And as far as I know, there's been very little work done in that area.
I could be missing something.
I don't have, I haven't seen everything that's been written
and maybe people will see this and they'll chime in the comments and say, wait a second,
there's a theory where this is happening. And there's current work. I mean, I know that
Oppenheim is working on a stochastic version of general relativity, but this is recent, right?
Like this is not 50 years ago. So I think this is a huge target for research.
And it kind of makes sense this would happen. I mean, general relativity, you know, is finished, at the level of the Einstein field equations being fully formulated, in November of 1915. You know, Einstein is giving these super high-stakes lectures at the Prussian Academy of Sciences,
and he's scrambling to finish the theory in between the lectures and he manages to do
it.
And, you know, and then Schwarzschild comes along and writes down the Schwarzschild solution
shortly in the beginning of 1916.
But there's this whole story that Schwarzschild was doing it in the trenches of World War I.
He was not in the trenches.
Yeah.
There's this really great paper, I think by Dennis Lehmkuhl, who's a historian of science,
who's great.
He was like, Schwarzschild was actually stationed at this very nice house.
And he was in the war,
but he wasn't at the front when he was doing it.
But anyway, so people,
all this theory was developed in like 1915, 1916.
Stochastic process theory was not developed at the time.
Right?
I mean, even like Kolmogorov's
axiomatization
of probability theory, that comes in 1933, right?
So that's like 17 years, 18 years after general relativity.
And that's not even stochastic, I mean, random variables
don't start becoming prominently used
until like the forties and fifties maybe.
And I think like a well-developed theory
of stochastic processes, if I'm not mistaken,
and again, my history on the theory of stochastic processes
may be somewhat mistaken so people can correct us,
but I believe it wasn't until later,
like the fifties and sixties.
I mean, Markov introduced, Markov matrices already
in like 1906, but like fully building out
like an actual comprehensive theory of stochastic processes, that comes decades later.
And people had already been working on quantum gravity
for decades by this point.
I mean, you know, people began trying to do quantum gravity
like especially by the 1920s.
I mean, Pauli is already trying to quantize
general relativity by the late 1920s.
And people are already like giving up
and pulling their hair out and saying, you can't
do it, right?
Already like decades before there's a theory of stochastic processes.
So it's actually not so surprising historically that no one said maybe before we do quantum
gravity, we should do probabilistic general relativity and see if we can do that.
And there have been a lot of proposals to do this.
You know, maybe what you do is you want an ensemble of space time, of block universes
or maybe, but it's like not clear that any of these
are really the right way to do it.
I have some suspicions and this is now me
doing something I don't wanna do,
which is just like speculation, but you know what?
Let's just speculate.
Surmise away.
I think a fully probabilistic version of general relativity,
and I don't mean taking general relativity
and adding some small noise terms, like small corrections.
I mean, like a fully, fully probabilistic generalization
of general relativity.
I think that would either teach us a lot
about quantum gravity or even potentially be quantum gravity.
Because remember the indivisible stochastic approach
doesn't start with Hilbert spaces,
it's just probability, just very non-Markovian probability.
There's a sense in which general relativity in its most general formulation is like not
exactly, I mean, depending on the nature of the space time, if you've got certain kinds
of space time and certain properties, you can formulate it as a kind of initial value
problem.
But like, there's something about general relativity that's a little different from
the laws of other theories. And I have a suspicion that if you could fully probabilitize the theory,
you'd basically be doing indivisible stochastic mechanics, but for a gravitational field,
and that would already be the theory of quantum gravity.
Now, that is super conjectural. I want to be super clear.
I have not worked on this in any depth. It would be very interesting to study this problem more.
But this is the kind of question you can begin to ask because if you think that you have to start with Hilbert spaces
You'd go well
It must be the case that quantum gravity is going to be some Hilbert space thing or some generalization of Hilbert spaces
But because we didn't have to start with Hilbert spaces
We can now ask much more basic questions like what's just probabilistic general relativity
Indivisible probabilistic general relativity. And is that already all we need?
That's not to say it's easy,
because again, when you have a dynamical space time,
it's very hard to talk about
where you even put the conditional probabilities,
but at least it centers the question
on something a little more basic.
And I think this comports with a couple of other principles
I think that one gets from thinking philosophically
about doing physics.
One is it's usually better to isolate problems
as much as you can and deal with them
in the simplest circumstances.
I would much rather try to deal with probabilities
and general relativity first
before I try to do all of quantum gravity, right?
Like let's study problems
in their simplest initial incarnation.
Let's not teach people quantum mechanics
by starting with quantum field theory.
Let's start with the simplest kind of systems and add complexity step by step
rather than doing it all at once. That's one thing.
And the other thing is the idea that when approaching problems or conceptual confusions
or trying to make progress on a very thorny set of theoretical questions
involving one of our best physical theories, sometimes you don't want to just build stuff on the end.
Sometimes what you have to do is you have to go down
into the deep programming of the model and do some debugging.
So for people who've done computer programming,
you know that sometimes when a program isn't working,
it's not because the end of your code is wrong.
Sometimes it's because like way at the beginning
of your code, you made some mistakes.
And to debug it, you have to go all the way back
to the beginning and really start with the definitions,
like how you've defined certain variables or how you define certain functions,
and like make sure all those definitions are really good before you proceed.
And that's kind of what I'm doing here. Rather than trying to glom gravity onto Hilbert space quantum mechanics,
I'm saying maybe we need to go and ask some very foundational questions first.
Debug this program all the way down to the roots, at the axioms, make sure the axioms really make sense.
And I can give a very concrete example
of where this breaks down.
So we talked about the uncertainty principle.
Another thing that you compute in quantum mechanics
are expectation values.
Now in a previous conversation we talked about,
an expectation value is an average.
It's, you know, you have some observable thing you want to measure, and you know the quantum state of the system, and you can compute its average.
And there's this way of thinking about those averages
that they're averages of just stuff happening,
a phenomena happening, but they're not.
They're defined by the Dirac von Neumann axioms
as statistical averages of numerical measurement
outcomes weighted by their corresponding measurement outcome probabilities, and that's it.
If you're not measuring stuff, there's no average there.
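To make this concrete (a standard textbook statement, added here for reference rather than a quote from the conversation): for an observable $\hat{A}$ with eigenvalues $a_i$, the Dirac-von Neumann expectation value in a state $|\psi\rangle$ is defined as a weighted average over measurement outcomes,
$$\langle \hat{A} \rangle_\psi \;=\; \langle \psi | \hat{A} | \psi \rangle \;=\; \sum_i a_i \, P(a_i), \qquad P(a_i) = |\langle a_i | \psi \rangle|^2,$$
so both sides only refer to outcomes and their Born-rule probabilities; nothing in the definition mentions unmeasured stuff happening.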
But there are a lot of physicists who think that when you put brackets around something,
which is the notation for an expectation value, we no longer have to think about measurements
anymore. We can just think about it as stuff happening. So people will say something like,
well, you know, quantum mechanics predicts measurements and if you measure something,
you'll get one of the eigenvalues, you get it with the Born rule. How do we get the classical limit? Oh, what we do is we take expectation values.
We average everything and then we show that those averages evolve in time the way that classical observables
evolve in time and this is how the classical limit happens.
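The standard result usually invoked in this kind of classical-limit argument is Ehrenfest's theorem (again, a textbook statement inserted for reference, not Barandes's wording): for a particle in a potential $V$,
$$\frac{d}{dt}\langle \hat{x}\rangle \;=\; \frac{\langle \hat{p}\rangle}{m}, \qquad \frac{d}{dt}\langle \hat{p}\rangle \;=\; -\,\big\langle V'(\hat{x})\big\rangle,$$
which looks like Newton's second law for the bracketed quantities; the point being made here is that those brackets are still, by definition, measurement averages.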
But this is clearly wrong because at least if you don't think everything is a measurement, I mean, if you want to argue that every phenomenon happening is a kind of measurement,
then you can do this, but then you have the onus of trying to show that.
If you're not willing to say that everything happening is a measurement, you have got a problem.
Because things are happening all over the place.
Objects are sitting on Mars, not falling down, and primordial gases are mixing.
And you can't just put brackets around them
and the quantum mechanical things
and just say these are things happening
because those brackets only refer to measurement averages.
And if there's no measurements happening,
then those things aren't happening.
The conflation of a quantum mechanical
measurement expectation value with just,
on average, this is what's going on,
the conflation of those two things, measurement averages,
and on average, stuff is just happening in a certain way,
is pervasive in the literature.
So if you take, for example, I'm sorry to mention this
because I really like this book,
it's Shankar's book, Principles of Quantum Mechanics.
It's a wonderful book,
it's a big pink book on quantum mechanics,
and chapter six is called the classical limit, and the entire chapter is based on this fallacy.
You just put brackets around things and then you can treat them like classical variables
where they're just happening and no one's measuring them.
But it's just wrong.
Now, at least according to the Dirac-von Neumann axioms.
Now, if you're willing to augment or change the Dirac-von Neumann axioms
and turn quantum theory from a theory of only measurements into a theory of phenomena happening generally,
like in an indivisible stochastic approach or Bohmian mechanics or the Everettian many-worlds interpretation, then you are legitimated in doing this.
But you need something to take measurement averages and turn them into just averages of things happening.
This is just a category problem again: the Dirac-von Neumann axioms only refer to this very narrow category of measurement outcomes,
and not the larger category of things that we want to be able to be happening.
So how does this then affect quantum gravity?
In quantum gravity, we often take quantum mechanical things,
put brackets around them,
and then plug them into the Einstein field equation
and treat them like they're classical things.
So one of the starting assumptions of semi-classical quantum gravity,
which is where we try to sort of mix a little quantum, is we take the distribution of matter, broadly construed.
Broadly construed, matter is like massive particles, massive objects, but also electromagnetic fields are considered a form of matter.
Really anything that's not the gravitational field that can source gravitational fields
or respond to gravitational fields we call matter.
And what we do is we take the quantum mechanical observables, these self-adjoint operators,
we put brackets around them, call them averages, pretend that they're classical averages, and
then put them into the Einstein field equation.
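The equation being described is usually written as the semiclassical Einstein field equation (a standard form, added here for concreteness):
$$G_{\mu\nu} \;=\; \frac{8\pi G}{c^4}\,\big\langle \psi \big|\, \hat{T}_{\mu\nu} \,\big|\psi\big\rangle,$$
with the classical Einstein tensor on the left sourced by the quantum expectation value of the stress-energy operator on the right, which is exactly where the measurement-average question arises.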
And a lot of what we do is like that in quantum gravity, but it doesn't make any sense, like
from the beginning.
Another thing people often do is they will take functional integrals, this is the Feynman
path integral approach where you take all the possible trajectories, right, this picture,
and they will stick a bunch of functions into one of these integrals.
Sometimes to make things more well-defined, they will take time and give it an imaginary part
and even rotate the time axis in the complex plane to imaginary time
to make the integrals more well-defined.
And they'll compute these things called correlation functions.
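For reference, these objects typically look like (a standard schematic, not specific to any one quantum-gravity proposal):
$$\langle \phi(x_1)\cdots\phi(x_n)\rangle \;=\; \frac{1}{Z}\int \mathcal{D}\phi \;\, \phi(x_1)\cdots\phi(x_n)\, e^{-S_E[\phi]}, \qquad Z = \int \mathcal{D}\phi \, e^{-S_E[\phi]},$$
where the Euclidean action $S_E$ comes from rotating $t \to -i\tau$; the question being pressed is what these "averages" refer to physically when there are no measurements in the picture.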
And sometimes I'll have a conversation with someone who does this, and I'll say,
what is this correlation function?
They'll be like, oh, it's a correlation function.
It's an average.
And I'm like, but your universe has no observers in it.
And you're describing a situation in quantum gravity
in which there's like no planets or people
or measurements happening.
So what is this an average of?
Are you saying that these quantities are just doing things
and we're averaging them?
That's not legitimated by the Dirac-von Neumann axioms.
So what is the physical meaning of these quantities
that you're writing down?
And sometimes we have a very sophisticated conversation
about this and we actually make some progress on it,
but a lot of times people are like,
I actually don't even know what I'm doing, right?
So this is what I mean when I say that like,
applying rigorous scrutiny to the things we're computing,
like beyond the mathematics, like what do they mean
is actually kind of important
because otherwise you might find yourself
writing things down and you don't even know
what exactly it is that you're writing down.
I think what this gets across is that the difference
between quantum mechanics and general relativity
is actually much deeper than I think people appreciate. I mean, we all know that there are differences.
Quantum mechanics is supposed to be this sort of
fluctuating probabilistic theory
and general relativity is supposed to be based
on smooth space times and things like this
and how do we reconcile them?
But I think that the difference between them is even deeper.
General relativity is a theory of things happening.
General relativity is a theory in which
you have the Einstein field equation, you impose
appropriate boundary conditions, you introduce whatever distribution of matter and energy
and sources you want in your space time, and then you find a space time with the right
geometry that satisfies all the constraints and obeys the Einstein field equation.
And this is the space time in which things happen. Projectiles follow what are called geodesics.
If they're only subject to gravitational forces,
geodesics can cross, they can meet,
they can intersect five times.
People can, you know, you can compute various
invariant quantities.
Not everything in general relativity is relative.
Some things are invariant.
They're like things are happening in this universe.
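The "things happening" here include, for example, test particles following the geodesic equation (standard general relativity, included just for concreteness):
$$\frac{d^2 x^\mu}{d\tau^2} \;+\; \Gamma^{\mu}_{\;\alpha\beta}\,\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} \;=\; 0,$$
a definite statement about what a freely falling body does in a given spacetime geometry.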
In quantum mechanics, at least the textbook Dirac-von Neumann picture, all you've got are measurement outcomes. At the beginning and end of your measurements, none of the stuff in between, no picture. So if you try to take general relativity and stick it into quantum mechanics, at least the traditional Hilbert space Dirac-von Neumann picture, you're losing all of the phenomena happening. You're losing all of the meat, all of the heart of general relativity.
There's like a much deeper problem here.
And I think one attitude is, well, but it works so well for QED, quantum electrodynamics.
But remember, quantum electrodynamics is only a theory of like, you set things up and you
take measurements
at the end, in the asymptotic past you set up your initial state, the asymptotic future,
we take the times to be infinitely in the past, infinitely in the future, these are
obviously just approximations, and we're just computing measurement results, cross-sections,
scattering, you know.
But in general, in quantum gravity, we're trying to describe what space-time is doing.
We're trying to understand like what's happening to space-time.
And those are questions that just are beyond the kinds of things
that we usually do when we're doing QED.
We're demanding more of quantum gravity.
We're demanding more of a picture, more of a description
than the textbook quantum mechanics has been designed to provide.
And so I think that if you want to do quantum gravity
and really tell a story, tell a picture,
paint a rigorous picture of what's happening in space time,
you're just not going to be able to do it with textbook
Dirac von Neumann-Hilbert space quantum mechanics.
You're going to need a theory of something
in order to describe a space time
where something is actually happening.
I hope that makes sense.
So I think there are reasons why a conceptual shift
in how we think about quantum mechanics may be necessary
before we are able to address certain deep problems in quantum gravity.
Great answer.
Okay, so let's get to some of the questions about Bell.
Yes.
So people had questions about Bell's inequalities and how they're represented in your framework.
Good.
Yeah.
So, ultimately, Bell's theorem is about entangled systems.
So I have to say a little bit about entanglement.
We've got to talk about entanglement first.
What is entanglement according to usual textbook quantum mechanics? This is what entanglement is about.
Entanglement is what happens when you take superposition of states
and extend it to composite systems, systems where you've got two systems.
Not one system anymore that can be the superposition of two states, but two systems.
So suppose that I have system A and it's the state one,
and I've got system B and it's in state one prime,
and that's all I have.
Well, then we would say, okay,
the composite system is in the state one and one prime,
system A is in state one, system B is in state one prime,
that's all there is to say.
I could also imagine that system A is in state two
and system B is in state two prime,
and the composite system is in the state two, two prime.
Perfectly fine.
I could also imagine that system A alone
is in a superposition of one and one prime.
Let's say one over root two times one
plus one over root two times one prime,
because in quantum mechanics, when we superpose,
we put a number in front and that number when you square it is
supposed to be related to a measurement probability. The 1 over root 2's have the
property that when you square them they become halves, and when you add them you get
1. That's probabilities adding up. You can imagine the system A is in the state 1
over root 2 1 plus 1 over root 2 1 prime. You can imagine system B is in the state
one over root two, two plus one over root two, two prime.
And you could imagine that those are the states,
the two systems, now the composite system is also in a,
so the composite system is in the state.
Well, it's hard to say.
Let me call the first state psi, the Greek letter psi.
Psi is the state one over root two, one plus one over root two, one prime.
And let's, I'm sorry, I did my numbering wrong.
It's one plus two and two, and sorry,
because A can be in the state one or two,
and system B can be in state one prime or two prime.
I got it wrong, my apologies.
So psi will say the Greek letter psi, trident symbol psi will be one over root two,
state one plus one over root two, state two.
And psi prime, which corresponds to system B, is,
psi prime is the state that represents one over root two,
one prime plus one over root two, two prime.
And I can say that the composite system
is in the state psi,
psi prime, right?
That is to say, if I multiply everything out, I'll get four terms.
There'll be a term that's one half one one prime plus one
half one two prime plus one half two one prime plus one half
two two prime, okay?
We would say this is not an entangled state,
because it's factorizable. I can factorize it into psi
next to psi prime, psi for system A, psi prime for system B. I would say these are not entangled, okay?
And you can show that when they're not entangled, they also have statistical independence.
If you do measurements on them
and compute measurement probabilities,
you'll find that they are statistically uncorrelated.
But now let me propose a different quantum state.
This quantum state is gonna be
one over root two, one, one prime,
plus one over root two, two, two prime, and that's it.
Just those two terms.
Notice this is a superposition, but over both systems.
And now I've got one one prime in one term
and two two prime in the other term.
And I don't have all those other,
I don't have the one two prime term.
I don't have the two one, they're not there.
I only have one one prime plus two two prime, that's it.
I cannot factorize that into two different states.
There's no psi for the first system and psi prime for the second that would let me describe
them both as having their own states.
We would now say those are entangled.
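Written out in symbols (this just restates the example being described), the non-entangled case is
$$|\Psi_{\text{prod}}\rangle = \Big(\tfrac{1}{\sqrt{2}}|1\rangle + \tfrac{1}{\sqrt{2}}|2\rangle\Big)\otimes\Big(\tfrac{1}{\sqrt{2}}|1'\rangle + \tfrac{1}{\sqrt{2}}|2'\rangle\Big) = \tfrac{1}{2}\big(|1\,1'\rangle + |1\,2'\rangle + |2\,1'\rangle + |2\,2'\rangle\big),$$
while the entangled state keeps only two of those four terms,
$$|\Psi_{\text{ent}}\rangle = \tfrac{1}{\sqrt{2}}\big(|1\,1'\rangle + |2\,2'\rangle\big),$$
and no choice of $|\psi\rangle$ and $|\psi'\rangle$ makes it equal to $|\psi\rangle\otimes|\psi'\rangle$.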
Just a quick question here.
So people who are driving and they're listening to this or people maybe they have a pen and
paper and they're thinking, okay, well, I'm going to try to multiply some states to get
that and then they don't.
So then they wonder, okay, but just because I tried some, I didn't get to it.
Is there a way that I can look at this and then prove that there exists no factorizable component?
Yeah, there is.
And the way to think about this is that it's just like FOILing when you do arithmetic.
So if someone gives you, for example, I've got x plus y over here and I've got w plus z over here, and I multiply x plus y as a quantity,
times w plus z as a quantity, I get four terms.
I get x w plus x z plus y w plus y z, I get four terms.
If I see those four terms, I know I can refactorize them
and write them as a thing, x plus y times other thing,
z plus w.
But if I only give you X, Y plus,
sorry, not X, Y, X, W plus Z, Y,
I only give you those two things.
You can't factorize them.
They don't factorize into a thing times a thing.
This is like an arithmetic example of entanglement,
basically.
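One standard way to check this in practice, which is not spelled out in the conversation and is offered here only as an illustrative aside, is the Schmidt-rank test: arrange the amplitudes of the composite state as a matrix and count its nonzero singular values; a product state has exactly one, and anything with more provably has no factorization. A minimal sketch in Python, assuming NumPy is available:

```python
import numpy as np

# Amplitudes c[i, j] of a two-system state  sum_ij c[i, j] |i>|j'>.

# Product example: (|1> + |2>)/sqrt(2) tensor (|1'> + |2'>)/sqrt(2)
# -> all four coefficients equal 1/2.
product_state = np.full((2, 2), 0.5)

# Entangled example: (|1 1'> + |2 2'>)/sqrt(2) -> only the two "diagonal" terms.
entangled_state = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)

def schmidt_rank(c, tol=1e-12):
    """Count nonzero singular values of the coefficient matrix.
    Rank 1 means the state factorizes; rank > 1 means it provably cannot."""
    return int(np.sum(np.linalg.svd(c, compute_uv=False) > tol))

print(schmidt_rank(product_state))    # 1 -> factorizable, not entangled
print(schmidt_rank(entangled_state))  # 2 -> no factorization exists, entangled
```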
Now, entanglement is usually phrased as something that has no classical correspondence.
There is nothing like entanglement classically.
In fact, in a 1935 paper,
Schrodinger wrote that entanglement was not a feature, but the feature, of quantum mechanics
that enforced its distinction from the classical case.
You can also link to that paper, I'll send you a link to it.
Now, you might go, well, there are certainly some things that are like entanglement.
For example, John Bell has this paper, Bertelman's socks.
He talks about this guy, Bertelman, who's got socks
and the socks are always different.
If you know what color one sock is,
you know the other sock is not the same color.
Like there are systems in which, for example,
if I have someone preparing coins
and they always prepare the coins
so that when one is heads, the others are always tails.
Always.
And you discover one is heads
and you know the other is tails, they're correlated.
Even if the coins are very far apart when you look at them,
if you prepare the coins and send them far apart,
and you look at one coin, it's heads,
you know the other one even is very far away as tails.
This is called correlation.
And if you do this over many coins,
and the coins are flipped,
you don't always know what you'll get, heads or tails,
but you know the results will be correlated,
statistically correlated.
So statistical correlation certainly happened classically,
but entanglement is stronger than that.
And that's one of the things that
Einstein, Podolsky, and Rosen, and Bell,
they were trying to get at this feature of entanglement
that is somehow stronger.
You get correlations that are stronger
than you would think could be possible on normal,
how we usually reason about classical probability theory
for systems that are widely separated from each other.
To explain the Bell inequality, I have to start with where it came from.
So Bell's theorem in 1964 is in a paper called
On the Einstein-Podolsky-Rosen Paradox.
He's referring to a 1935 paper by Einstein, Podolsky, Rosen.
So I have to talk about that paper and what they did and then what Bell was supposed to do.
You should link to a copy of that paper. People should read it. I don't know how many physicists have actually sat down and read that paper really carefully,
but it's and even Einstein wasn't super happy with it.
He was a little upset about how it finally came out, but it is a very important paper to read.
You mean the EPR paper?
This is the famous EPR paper.
The Nobel paper.
Right, EPR paper.
Yep.
It's a very subtle argument, but it basically boils down to this.
If I've got two quantum systems and they're entangled, I prepare them.
I prepare them in some state that's entangled
and to get them entangled,
something has to be local between them.
Either they have to be together initially,
or you have to send something from one to the other,
but some kind of, at some point local thing should happen
in order to get them entangled with each other.
And then you send one of the systems very, very far away.
This is a weird thing about entanglement.
When I measure the first system,
usually people do these thought experiments,
they imagine Alice and Bob.
Alice has the first system and Bob is very far away
with the second system.
Alice does a measurement on her system
and she could measure a variety
of different observable features.
She could measure some observable feature
and when she does it, she will know if you have the right kind of entanglement, she'll know exactly what Bob will get when he does his measurement.
She'll measure observable A, she'll get some answer and then she'll know, I got this answer because of the entanglement, I know what Bob will get.
Bob will definitely get this other answer.
But Alice didn't have to measure that thing.
She could instead have measured a different observable.
She could have measured observable A', a different observable that is not compatible with A
in the same way that position is not compatible with momentum, which is what they originally used in the EPR paper.
The original EPR paper was written in terms of position and momentum, but these are incompatible observables.
They obey an uncertainty principle. If you know one, you don't know the other with certainty.
Sure. And so she measures A prime.
She can make Bob's system collapse,
have a definite answer for a different observable.
Okay.
So she can steer Bob's system.
This is called quantum steering.
The word quantum steering was introduced by Schrodinger shortly after the EPR paper.
Because it feels like Alice's choice of measurement,
she measures A or A prime, is like steering Bob's system.
Now the steering does not send signals.
Again, there's this theorem called the no signaling
and no communication theorem that shows that Alice cannot
deliberately send controllable messages this way.
The steering is something more subtle
and can't be used to send signals or communication.
This is rigorously established because of this theorem.
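The way that theorem is usually stated (a standard textbook argument, added here as a reference point): whatever measurement Alice performs, described by operators $M_a$ with $\sum_a M_a^\dagger M_a = \mathbb{1}$, Bob's local statistics are governed by his reduced density matrix, and
$$\mathrm{Tr}_A\!\Big[\sum_a (M_a\otimes \mathbb{1})\,\rho_{AB}\,(M_a^\dagger\otimes \mathbb{1})\Big] \;=\; \mathrm{Tr}_A\big[\rho_{AB}\big] \;=\; \rho_B,$$
so nothing about Alice's choice of measurement changes anything Bob can observe on his own.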
Nonetheless, there is some sense
in which she is somehow steering Bob's system.
She'll measure observable A,
she doesn't control what answer she gets.
A is uncertain, she could get this, she could get that.
Depending on what she gets,
Bob will get a certain corresponding thing.
But because she can't control what she gets,
she can't control what Bob gets.
She just knows that once she's done her measurement
and gets a certain result, she knows that Bob,
if he decided to measure the same thing,
she'd know exactly what he would get.
If Alice instead measures A prime,
she'll collapse Bob's system to a different basis
and whatever result she gets, she'll know Bob,
if he measured that corresponding observable,
she'd know exactly what he would get.
Now there's two possibilities as far as EPR, Einstein, Podolsky, and Rosen, are concerned.
Either Alice's decision is really changing Bob's system, and Bob's system, when they
do the experiments, could be a light year away.
And that would seem to be something superluminal, unacceptable happening, faster than light
happening.
But if not, Bob's system must already have known what answer it would get if he measured the first observable,
and what answer he'd get if he measured the second observable.
Because Alice could measure either of hers, and depending on which she measures,
she can make Bob's system have a definite value of one observable, or, if she
measures the other one, have a definite value of another. And if Alice is not really
changing Bob's system, Bob's system must have known all along
what it was going to have. They call their paper Can, and they
leave out the "the": Can Quantum-Mechanical Description of Physical
Reality Be Considered Complete? They're saying that unless you allow
something faster than light to be happening,
Bob's system must already know the answers it should yield for all of his measurements because Alice
cannot possibly, by her choice of measurement, be doing the steering. So EPR basically establishes
that there is a logical fork, either you allow faster than light influences of some kind,
causal influences of some kind,
or there are hidden extra parameters
and the wave function,
the standard approach to quantum theory, is incomplete.
There's more to the story than just the wave function.
That's where Bell starts.
In 1964, he says, well, so here's what they said.
They said that either you have non-local or some kind of causal influence happening that's, you know, going from Alice to Bob.
Or there's more to the story than just the wave function.
There are some hidden variables.
Bob's system already knew what answers it would yield.
What Bell wanted to do was show
that that fork was actually not really there.
That there wasn't an escape from the non-local causation.
That if you tried to escape the non-local causation,
the way that EPR argued you should,
assume there's more to the story,
hidden variables, extra things,
that that actually doesn't let you escape.
And what Bell did in his 1964 paper
was prove his theorem, Bell's theorem,
an inequality satisfied, in his view, by
any theory of hidden variables that is purported to be a local theory,
and then write down a simple example
of a quantum mechanical system that violates it
that you can go and do an experiment
and check that it violates it.
So in other words, Bell is trying to close
a possible way out of the non-local causation.
EPR says there's either non-local causation
or hidden variables,
and Bell is saying, well, even with hidden variables,
you still get non-local causation.
Therefore, quantum theory is simply a non-local theory,
and that's the end of the story.
That's what he did.
This theorem has gone through a giant game of telephone.
People have... So first of all, I should say that the paper was, like, published in an, I mean,
Bell was a particle physicist doing this foundational
work on the side and he would caution people against doing foundational work
because it was considered very bad for your career, which is really shameful.
I mean, physics is supposed to be an intellectual enterprise. And closing
down avenues of intellectual investigation, of exploration is just anti-intellectual.
That's a shame. But his paper somehow eventually became more widely known. And it's like through
a game of telephone. And eventually people began thinking that what he did was prove
there couldn't be hidden variables. As people would say, oh, you have a hidden variables theory?
That's ruled out.
Bell said there can't be hidden variables.
In fact, the Nobel Prize was given for experimental tests of the violation of the Bell inequality,
right?
There's this Nobel Prize that was given to Clauser and Zeilinger and Aspect, I think it was Aspect also.
And it says, if you look at the press release
for the Nobel Prize, it says that Bell proved
there couldn't be hidden variables.
And this Nobel Prize is given
because they proved hidden variables are impossible.
That's not what Bell showed at all.
In fact, not only did Bell not show that,
but he said in the paper, that's not what he was showing. In fact, he begins the paper by talking about Bohmian mechanics. He says Bohmian mechanics is, at least for
systems of fixed numbers of finitely many non-relativistic particles, a perfectly empirically adequate theory of quantum mechanics.
It is grossly non-local. Those are the words he used for it.
Could there be a hidden variables theory that is better behaved than Bohmian mechanics
when it comes to locality?
And what he was showing was that there wasn't.
But he wasn't, his argument wasn't that, okay,
well, as long as you get rid of hidden variables,
you can keep locality.
He thought EPR had shown
that if you don't have hidden variables,
then you definitely have non-locality.
So it wasn't like he was saying, well, it's hidden variables or locality.
He was saying EPR said it was non-locality or hidden variables
and in fact hidden variables still non-locality.
Non-locality is just all you get.
That's what he thought he was doing.
And this paper has been widely misinterpreted.
Bell himself in later writings complained about how people kept misinterpreting his paper,
either not reading it carefully or getting it second hand,
or, I guess, like in the opening of the textbook we were talking about, that said,
oh, Bell proved, Bell showed that the orthodox approach is the only approach, right?
I mean, that's not what Bell said.
So, okay. But then where do we go from here?
Bell claimed that he'd shown that quantum mechanics
was just non-local full stop.
But the EPR paper, the original EPR paper
and Bell's 1964 paper, these are arguments.
They're mathematical arguments, especially Bell's paper,
which is a theorem.
And you have to be very careful when you talk about theorems
in a physical context.
So we were talking earlier about inductive, deductive,
all these different arguments.
In pure mathematics, a theorem begins with premises.
The premises should be rooted ultimately if you have to
in whatever the axioms are of the field you're working in.
Maybe they go back to the axioms of set theory, who knows?
And then you go through a sequence of logically valid mathematical arguments
that culminate in some conclusion.
That's a math proof.
And once you've proved it, as long as you have premises that are good, correct premises
and your logic was valid, you have a sound proof.
You have a sound deductive argument and you're done.
And if anyone wants to claim there's something wrong,
they're going to have to either challenge your premises
or challenge your reasoning.
And if they're both good, then you're just good.
So Euclid proves the infinitude of the prime numbers.
That's a great example.
You begin with certain premises
about how the natural numbers work.
And then you have this logical argument
that leads to the conclusion
that there cannot be a biggest prime number.
As long as you're willing to take on the axioms,
the standard axioms that we use for arithmetic. But physical theorems, theorems like Bell's
theorem, theorems that are about physics, the Kochen-Specker theorem, the PBR theorem,
that's the Pusey-Barrett-Rudolph theorem. There are all these other theorems that are so-called physical theorems.
And these can suffer from another problem.
They can succeed as mathematical theorems.
They begin with mathematically formulated ingredients that you use in the premises,
and then you proceed through rigorous logical deductive reasoning, and you arrive at a conclusion.
That's the theorem you claim
to prove, and that can all be fine. But your theorem is just floating out in math world,
unless it connects to something in the physical world. And that connection is where there can be
a problem. So your mathematical ingredients aren't just supposed to be pure math anymore,
they're supposed to have physical referents.
And, I'm sorry, the singular is referent.
Referent is singular, referents is plural.
They're supposed to have things out in the world
that they are representing.
And the things they're representing
need to be sufficiently rigorously defined.
And the connection between those referents
and the mathematical representations,
the connections have to be sufficiently rigorous.
And if either of those two things breaks down,
we have a connection problem.
I call it the connection problem.
So let's take Bell's 1964 theorem as a good example of this, okay?
Well, let's even go back to EPR.
Let's go back to the EPR paper.
Sure.
EPR paper's a good example.
So the EPR paper has premises.
There are premises to the EPR paper.
One premise is that wave functions collapse
when we do measurements on them.
Another premise is, of course, the Dirac-von Neumann axioms,
which include collapse.
Another premise is that we have a notion of causal influence that can be cashed out in
terms of interventions by agents.
I needed an Alice and a Bob to talk about this.
Alice is an agent who does an intervention on her system.
We call it a measurement in this case.
Bob is also an agent who does an intervention.
The interventionist theory of causation
is one particular way to talk about causal influences.
According to the interventionist conception of causation,
to say that a thing A causally influences another thing B
is just to say that if an agent comes along
and intervenes in some way on A,
there will also be a change in B.
That's what it means to say that A causally influences B.
But if there are no agents and there are no interventions,
then what do we do with this theory of causal influence?
And you might go, well, there are observers,
but if you want a theory of quantum mechanics
or theory of physics in which observers and measurements and measuring devices are not part of the fundamental axioms, you're going to have a lot of trouble talking about causation in that kind of a theory.
If you try to do EPR and subtract out the agents and subtract out the intervention and subtract out the wave function collapse, it's actually really hard to talk about
what's happening in the thing.
So the extent that you take all these things on,
sure, you have this rigorous statement,
sort of rigorous statement about what's going on.
It could fail because the reasoning is bad,
but could also fail because there aren't agents out there
and there aren't interventions out there.
And you might go, well, again, what do you,
I mean, there are people, Alice and Bob, but
the thing is people are made of atoms, we think.
Phrase it to me at the level of the atoms. Are atoms intervening? Are atoms agents?
And then you actually run into a kind of a deep problem. Like if
you really are asking me to phrase this, not with people, not with measuring devices,
but the level of the atoms, the individual atoms that are not making decisions and freely choosing to
do things and doing interventions and acting as agents. I don't even know what this theory
of causation is supposed to mean. And if you don't have a theory of causation, you don't
have a theory of causal influence and you don't have a theory of non-local causal influence
and the whole argument just breaks down.
And this is a thorny problem because causation is just like a nightmare subject in metaphysics.
People have been trying to understand causation
for a very, very long time.
Causation is one of these things where we feel like
we kind of intuitively understand causation.
In fact, Kant even argued that cause and effect
was like built into our brain architecture, we needed to think of the world in this way.
But it's really hard to pin down what you mean rigorously by cause and effect,
especially if you're trying to start from physics.
So there's a view of physics around the turn of the beginning of the 1800s,
the sort of Laplacian view of physics.
All there is is just the state of the universe,
all the particles of the universe,
all the particles in the universe
with their positions and velocities, that's it.
At one snapshot in time,
and then there is just a giant differential equation,
the laws of physics as a giant Markovian
differential equation that take this state
of the whole universe, all the particle positions
and velocities and tell you the next infinitesimally
in time state and the next one and also the previous ones.
And that's all there is.
That's all there is to physics.
That's all there is to the development of the evolution of physical systems.
From this point of view, there's no sense in which that rock over there
is causally responsible for the motion of that rock over there.
Exactly.
Right?
Because it's like, you don't need that.
You have the overall state and it's just sort of
propagating forward and backward,
this giant differential equation.
There is no role to be played by having these extra
ingredients, these idle wheels,
these notions of causal influences.
Now when we teach Newtonian mechanics, we often talk about,
oh, why did that rock begin to accelerate?
It's because this other rock exerted a force on it, right?
This other rock exerted a force on this rock and therefore caused it to move.
But if you step back and look at the entire universe,
there's just some giant state evolving forward
by some differential equation.
It doesn't look like there's any place
for causation in this picture,
at least at a fundamental microphysical level.
Bertrand Russell said in the beginning of the 20th century,
causation is a relic.
I think he said it was like the British monarchy.
It's something that continues to persist
under the erroneous assumption that it does no harm.
He thought you didn't need causation anymore in physics,
at least at the micro physical level,
we should just get rid of it.
Of course, if you get rid of causation,
then there's no non-local causation.
And then what is Bell's theorem even about?
What is EPR even about?
If there's no causation,
there's no superluminal causation
and then the problem is just solved.
I think if one takes the point of view,
as some philosophers do, John Norton, for example,
has a paper which we should also link to,
it's called Causation as Folk Science,
that there is no fundamental causation in nature,
that science in the early days
was about looking for cause and effect,
but we've really become more sophisticated in that we're not trying to phrase things in
cause and effect anymore.
Cause and effect is language we can introduce later just to simplify how we describe things,
but we shouldn't be looking for physical theories fundamentally phrased in terms of cause and
effect anymore.
That's a relic of an old time.
I think if you want to take that point of view, that's a self-consistent view, but then
you're not going to be able to then appeal to Bell's theorem and say that there's
non-local causation happening in hidden-variable theories.
If you want to talk about non-local causation, you need a theory of causation.
You need to actually bite the bullet and say, we're going to talk about causal influences.
And if you rely on interventionist causation, you run into the problem that interventionist
causation just doesn't seem like a kind of fundamental microphysical definition of causation that we should be talking about.
We're talking about microphysical theories like quantum mechanics.
A lot of the no-go theorems that are related to Bell's original 1964 theorem, that theorem
itself, the EPR paper, the GHZ argument, a lot of them help themselves to interventionist
causation.
They ultimately involve agents manipulating things,
doing interventions. In a fundamental microphysical theory,
which is just atoms doing the things that they're doing
with no agents, no fundamental role of agents
or interventions or measurements,
it's not clear what these theorems are even saying.
Bell wrote another version of his theorem,
a generalization in 1975.
It's a remarkable paper and I think not widely read enough by physicists.
A lot of people I think tend to focus the 1964 paper.
The 1975 paper is much more sophisticated.
I'll put the link on screen.
Yeah, you should put the link.
It's a great paper.
It's beautifully written.
And in this paper, he tries to deal with this problem.
I mean, he doesn't use the words interventionism,
but he tries to get away from the reliance on measurements and on collapse.
He retreats to a much more primordial notion.
He just says, look, even textbook quantum theory is committed to some ontology,
things physically existing, measurement results, textbook quantum theory says
there are measurement results,
that's a thing it's committed to.
Those are the beables, the things that are really out there
according to textbook quantum theory.
There are actual facts of the matter
about how the measurements happen
and they're really out there in the world.
He calls them beables, not observables,
but beables, the way things can be.
Maybe that's all the beables you have.
Maybe there are more beables in your theory,
but textbook quantum theory only has those.
And to be clear, a beable is what?
An ontological entity?
Yes. A beable is what you think is real,
what you think is actually physically out there.
And according to textbook quantum theory,
you are committed at least to measuring things.
Now, this of course raises some questions.
If measuring devices are out there and they're really real,
what are they made out of?
In textbook quantum theory, there's just nothing.
And you can't say, well, they're emergent
because emergence requires a substrate.
Water, fluid water emerges from water molecules.
You need to have the things
out of which the emergence is constructed.
And in textbook quantum theory,
you can't just say measuring devices are emergent
without saying what are the things that it emerges from.
In a theory like Bohmian mechanics or many worlds or indivisible stochastic formulation,
you have those ingredients out of which the emergence is supposed to take place.
But in textbook quantum theory, you don't.
Okay, but that's putting all that aside.
You're at least committed to the measuring devices, measurement results as the beables
of the theory.
And so Bell just, he rephrases the premises of his theory differently.
He doesn't rely on interventionism.
He doesn't even propose any theory of causation.
He just says, look, I don't have a good theory of causation.
I'm not going to give you a full comprehensive theory of causation.
But I think any good theory of causation should have a certain feature. It should have a feature which today we would call
Reichenbachian common cause factorization.
This is just the statement that if A is a thing
correlated with another thing B, they statistically rise and fall together in some statistical way.
And A and B do not causally influence each other directly
maybe because they're so far apart when they happen
that they can't communicate with light.
Then there must be some other variable C
that is causally influencing both of them.
You know, so for example,
if people have one condition,
some medical condition and some other medical condition.
And it's clear that neither medical condition
causes the other, but they seem to be correlated.
You would think, oh, there must be something they ate
or something they did that was responsible for both of them.
There's something like: Nicolas Cage movies are released
when people tend to die from drowning in swimming pools.
Sure.
I don't know if that's an interesting suggestion.
Well, it turns out that it's because Nicolas Cage releases movies in the summertime.
Ah, good, yes. It's a common cause in the summertime. Good, yes, exactly.
It'd be like saying, well, barometers show low pressure, and that's correlated with hurricanes.
But it doesn't seem that the barometers are causing the hurricanes.
Right.
Or the hurricanes, which haven't happened yet, are causing the barometers.
But there's a low pressure system that happens first, and this leads to both of them.
Okay. So this is the so-called common cause principle. And what Bell asserts is that any good theory of local causation should have the property that local beables, whatever they are, they can be measurement results, they can be beables in some other sense. He's being very general about this, but local beables associated with distant places,
if they're correlated, there must exist other beables
in the past, in the causal past,
in the so-called overlap of their light cones.
That's the fancy way of saying it.
And there must be a rich enough set of those beables,
a rich enough set of them,
that if you specify them all and know them all,
then they explain the correlation
in a very rigorous mathematical sense.
They lead the joint probability distribution
for the two things, the two beables, A and B,
to factorize cleanly when you condition
on the local beables in the past,
the common cause local beables.
This is called Reichenbachian factorization.
I don't know that Bell knew about Reichenbach's work.
Reichenbach had formulated this idea in the 50s.
And it's certainly something you could imagine a good theory of causation should have.
Bell needed this factorization in order to derive his inequality
with these weaker, more general assumptions.
And this 1975 theorem was general enough
that it could encompass probabilistic theories,
theories with stochastic hidden variables
where the hidden variables didn't uniquely determine
the measurement outcomes,
but only determine them probabilistically.
So this is a more general theorem,
but he's changed his premises.
And now he's taking on this premise
that in order for a theory to count as locally causal,
his principle of local causality is, again,
it's locally causal if whenever we have statistically correlated
local variables A and B that are far enough apart when they occur
that they can't be causally influencing each other,
there must be a rich enough set of causal variables in the past
that when you condition on them,
the correct joint probability distribution factorizes
in this neat way.
And this is necessary to get the theorem.
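To have the formulas on the table (standard statements of these conditions, paraphrased here rather than quoted from Bell): the 1975-style local-causality assumption is that, conditioning on a rich enough specification $\lambda$ of beables in the overlapping past light cones,
$$P(A, B \mid a, b, \lambda) \;=\; P(A \mid a, \lambda)\, P(B \mid b, \lambda),$$
where $a, b$ are the local measurement settings and $A, B$ the outcomes. From this kind of factorization one derives Bell-type inequalities, for instance the CHSH form
$$\big|E(a,b) + E(a,b') + E(a',b) - E(a',b')\big| \;\le\; 2,$$
which quantum mechanics violates, reaching $2\sqrt{2}$ for suitably entangled states and settings.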
Now I'm not the first to suggest that Reichenbachian factorization is too strong a requirement,
too strong a condition to impose on a theory of causation.
Others, you know, Bill Unruh, for example,
in a 2002 paper, which I can also link,
has this long explanation.
He says, well, yeah, I mean, the things got entangled.
There was some interaction that entangled them,
but in quantum mechanics, interactions
are not variables or beables.
They're not the kinds of things that you can condition on.
There was a common cause, the interaction in the past,
but it's not the right kind of common cause
to get a factorization, so there's no problem here.
And various philosophers of science have made this argument also.
There's a bunch of papers by Jeremy Butterfield, who's a philosopher of physics at University
of Cambridge, who also has cast doubt on Reichenbachian factorization.
Why would we even think Reichenbachian factorization is good?
Well, it kind of works for everyday macro world joint probabilities, but that's, that's not a strong argument
that it should also hold for micro physical probabilities. And
there are already good reasons to be suspicious of whether, in fact, it
should hold. But this just sets up a target. If you deny
that Reichenbachian factorization is a good requirement of any
good theory of local causation, then Bell's theorem has no teeth.
It simply doesn't work anymore.
Now, in a lecture Bell gave in the early 90s
called La Nouvelle Cuisine,
which is in his collected work,
Speakable and Unspeakable,
his collection of all his papers,
but not the first edition,
the second edition of Speakable and Unspeakable.
He has this lecture, it's called La Nouvelle Cuisine,
which we can also link to.
He tells the 1975 theorem story over again and he modifies the premises a little bit.
I've been having an email correspondence with a philosopher of physics, Ioanna Luke, about
this.
She's working on a paper where she's looking at all the different formulations of Bell's
Theorem.
And in 1991, he slightly changes the premises a little bit
so he's not relying on exactly the same kind
of Reichenbachian factorization,
but he still needs all these sort of like assumptions
about what a good theory of causation could be.
He's not proposing a theory of causation.
There are many theories of causation historically.
There are regularity theories: to say that A
causally influences B is to say that when A happens,
B happens later.
Or counterfactual theories that A causes B
just in the case that B would not have happened
if A had not happened.
And there's conservation law causation
and there's probability raising,
there's all these theories of causation.
Bell doesn't propose a theory of causation.
He just says, I think a good theory of causation should have this feature.
And if you assume this feature, you get this inequality.
The inequality is violated by quantum mechanics.
Therefore, whatever quantum mechanics is, it doesn't have this feature.
Therefore it cannot have a good theory of local causation.
But he didn't propose a theory of local causation.
So this is a very long way of saying, in the indivisible stochastic approach, we replace
the differential
equations. We no longer have the Schrodinger equation as a fundamental equation or Newton's
laws or Maxwell equations or any of that. We don't have those things anymore. Instead,
we have these conditional probabilities, this sparse set of what I call directed conditional
probabilities. I'll explain the directedness in a moment. But these directed conditional
probabilities are exactly the kinds of ingredients that show up in the literature on causal modeling.
When you do probabilistic causal modeling, you've got random variables, these are the things that can change,
and they've got links between them that describe causal relationships, and those causal relationships take the form of conditional probabilities.
The probability of B having certain values, given that these other variables have their values. And we would say, therefore, that those variables
causally influence B.
This is exactly the language in which the laws are formulated in the indivisible stochastic formulation of quantum mechanics.
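Schematically, and glossing over details that are spelled out in his papers, the laws take the form of conditional probabilities referred back to a division time $t_0$:
$$\Gamma_{ji}(t) \;=\; P\big(\text{configuration } j \text{ at time } t \,\big|\, \text{configuration } i \text{ at time } t_0\big), \qquad \sum_j \Gamma_{ji}(t) = 1,$$
with no assumption that these compose through intermediate times, so that in general $\Gamma(t) \neq \Gamma(t \leftarrow t')\,\Gamma(t' \leftarrow t_0)$; that failure to divide is what "indivisible" means here.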
So you might think, well, they're phrased in a way that provides a very
hospitable domain for talking about causal relationships.
Maybe we should read those conditional probabilistic
relationships through a causal lens.
And now you have the opportunity that maybe you could build
a theory of microphysical causation out of these ingredients.
They're no longer based on a Laplacian paradigm
of differential equations.
Now they're based on exactly the kinds of conditional relationships that we might think
have a causal gloss to them.
So in one of my later papers, this is the paper New Prospects for a Causally Local Formulation
of Quantum Theory, I run with this.
I say, okay, well, let's take these and use these to talk about causal influences between
things.
And now let's say that what it means for a theory to be causally local is that when you
have two systems that are at space-like separation, they're far enough apart that they can't influence
each other, then there's a clean factorization of the conditional probabilities between them.
And I'm phrasing this very vaguely
because it's a little technical to write it down,
but you can read the paper.
But this is basically proposing an actual,
it's taking a stand, it's proposing,
like proactively proposing a
microphysical theory of causation.
And then asking on that theory of microphysical causation,
do we get non-local causal influences in the EPR
experiments in particular?
And the answer is we don't.
So you can read this, this is in the paper.
I also have some talks online.
People can go and they can watch the talks where I go through all the technical details.
I very precisely define what I mean by causal influences based on these
conditional probabilities, and then I carefully define what I mean for two
things to be causally independent of each other.
And I define what it means rigorously
to say that two things are not exerting
a non-local causal influence on each other.
And then I carefully go through the EPR experiments
and I show that in the EPR experiments,
there is a causal influence that goes from the instantiation of the entangled pair to the two particles, which makes sense because the instantiation is in their past
light cone.
But there is no causal influence that goes from whatever Alice does to whatever Bob does.
So I'll put a link to all of your talks on screen and in the description as well.
And maybe at some point when you have another talk planned, I'd like you to give it on
toe so that people can see some of the math behind what you're saying.
That would be very cool.
I hope that was all somewhat clear and understandable.
Well, many people have questions about Bell, so I'm glad that you were able to give this
explanation.
Yeah.
So that's a brief, like, summation of how to think about Bell's theorem.
But it's a general, like, it's a kind of care one has to take whenever approaching any theorem
about physics, any physical theorem.
It's not enough to check that the theorem is mathematically sound as a mathematical
argument.
You have to ask, do the things it refers to out in
the world, the referents, are they rigorously defined?
In the case of Bell, he needs local causation.
Are those terms sufficiently well-defined?
And I would argue they're actually not.
And then you have to worry about the connection between those referents and the mathematical
ingredients.
Is that sufficiently established?
That's where there's a weakness.
If Bell's
definition of causation is not sufficiently rigorously established, then the theorem just
doesn't have any teeth. And if you can provide a theory of microphysical causation and a
theory of what it means on that theory of microphysical causation for things to not
be able to causally influence each other non-locally, then that's all you have.
If people still don't like it and still think,
well, it still seems there's too much correlation,
well, maybe that doesn't feel great,
maybe it's unintuitive, but it's not a source of brokenness.
Okay.
Now, this all leads to the question,
what do we even mean by, what is entanglement in this picture?
So this indivisible stochastic picture,
like what is going on in entanglement?
If there's no state vector,
if there's no superposition actually happening,
what do we mean by entanglement?
There's actually a very nice picture
of what's going on with entanglement now.
Suppose I start with two systems.
Think of two particles, let's say, or two qubits,
two simple systems.
And suppose that these systems initially
are independent of each other, they have their own configurations, they are not interacting with each other in any way.
Well then, according to the indivisible stochastic approach, by definition what it means to be independent and not interacting is that they have their own indivisible stochastic laws.
Now let's suppose that there's a certain time. We'll call this time t prime. At this time t prime, they interact in some way.
And because interactions happen locally, whether you're doing quantum mechanics or not, they
have to be nearby each other or sharing some intermediary in order to communicate, but
in some way they begin to interact.
What does that interaction mean?
Well, even in Newtonian physics, when two systems are interacting, they no longer have their
own separate potentials anymore.
There's one potential function for both of them that doesn't factorize.
In the indivisible stochastic approach, the interaction is represented by the fact that
now there's an overall stochastic dynamics for the two systems and that overall stochastic
dynamics does not factorize while they're interacting.
Now what you might imagine happens is once you separate the systems and take them to
far distance separations in space, that they'll have their own separate stochastic dynamics
now.
And that's what would happen in the Newtonian case.
But it doesn't happen here.
And it doesn't happen here because the overall stochastic map is indivisible.
It goes all the way back to when they first, like before they interacted.
It cumulatively encodes all the statistical effects from before they interacted through all future times.
And if there was a moment when it stopped factorizing,
it's not gonna start factorizing again.
So the two systems will not have their own separate laws.
There'll be one overall indivisible stochastic dynamics
that's not factorizable for the two systems.
But there's a common cause.
The common cause was their interaction.
But the common cause is not the kind of common cause that would be plugged into Reichenbach's
principle of common causes.
Now if you have an agent, if you want, Alice or Bob, or an environment, or even just one
of those little qubits we talked about, the detector bit that we did when we were talking
about the double-slit experiment, that interacts with one of the systems and reads off its configuration at some later
time T prime prime, T double prime later when they're far apart, it will produce a division
event.
That division event will let us restart the overall stochastic dynamics, but the systems
are now separated.
And so when the indivisible stochastic dynamics restarts,
starts cleanly, they're no longer interacting,
it's going to begin factorized and it will remain factorized.
And this is the breaking of entanglement.
So the two systems are not initially interacting,
they have their own separate indivisible stochastic dynamics,
we would say they're not entangled.
They begin interacting for some amount of time.
During the interaction, and after, until the next division event, they no longer have their own separate indivisible
stochastic dynamics that factorizes. Then, when there's a division event later on,
once they're far separated, and we can restart that stochastic evolution, we can
stop, look at what configurations they're in, and then write down new laws for them.
They're separated now, so they'll have their own independent laws again, and
that's the breaking of entanglement.
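As a toy illustration of this factorization language (my own schematic, not Barandes's actual construction), one can write the joint law for two two-state systems as a 4x4 transition matrix and test whether it is a Kronecker product of two single-system laws:

```python
import numpy as np

# Toy illustration: two two-state systems, joint configurations ordered
# (1,1'), (1,2'), (2,1'), (2,2'). Columns are "from", rows are "to".

def is_kron_product(gamma, tol=1e-10):
    """Check whether a 4x4 joint transition matrix factorizes as kron(A, B).
    Reshuffling the entries turns factorizability into a rank-1 condition."""
    g = gamma.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    s = np.linalg.svd(g, compute_uv=False)
    return int(np.sum(s > tol)) == 1

# Non-interacting systems: the joint law is a Kronecker product of two laws.
A = np.array([[0.9, 0.2], [0.1, 0.8]])
B = np.array([[0.7, 0.4], [0.3, 0.6]])
print(is_kron_product(np.kron(A, B)))   # True  -> "their own separate laws"

# After an interaction, the cumulative joint law need not factorize.
coupled = np.array([[0.5, 0.0, 0.0, 0.5],
                    [0.0, 0.5, 0.5, 0.0],
                    [0.0, 0.5, 0.5, 0.0],
                    [0.5, 0.0, 0.0, 0.5]])
print(is_kron_product(coupled))         # False -> no separate laws, the entangled analogue
```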
Notice this is a picture of entanglement phrased entirely in terms of ordinary probability
theory with no Hilbert spaces.
This is the claim that's going on.
This is what's happening under entanglement.
And this picture of what's happening with entanglement comports with this microphysical
theory of causation I was describing before, a theory that does not permit whatever agent or environment or measuring system is acting on one system to have
a causal influence at spacelike separation on the other.
So that's what I'm, what we're saying is happening with entanglement.
It's a picture of entanglement at the level of ordinary probability theory.
Whether you call it classical probability theory is subtle.
It depends on whether you think that indivisibility is a classical property or not.
But it's certainly ordinary probability theory and it doesn't require Hilbert spaces and
so forth.
So that's one way to think about how entanglement is ultimately happening at the sort of deeper
level of the indivisible stochastic process.
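Here is a minimal toy sketch of that factorization idea in code, assuming a two-bit configuration space. The specific matrices and the factorization test are illustrative inventions, not taken from the papers; they just show what "having separate laws" versus "one non-factorizing joint law" looks like at the level of transition matrices.

```python
import numpy as np

# Toy sketch: two bits A and B, joint configurations (a, b) in {00, 01, 10, 11}.
# A transition matrix T[j, i] = probability of ending in joint configuration j
# given starting configuration i (columns sum to 1).

def factorizes(T, tol=1e-9):
    """Check whether a 4x4 joint transition matrix is a Kronecker product of
    two 2x2 single-bit transition matrices (i.e., separate laws for A and B)."""
    T4 = T.reshape(2, 2, 2, 2)          # indices: (a', b', a, b)
    TA = T4.sum(axis=(1, 3)) / 2        # candidate marginal dynamics of A
    TB = T4.sum(axis=(0, 2)) / 2        # candidate marginal dynamics of B
    return np.allclose(np.kron(TA, TB), T, atol=tol)

# Non-interacting: each bit flips independently with probability 0.1.
flip = np.array([[0.9, 0.1],
                 [0.1, 0.9]])
T_free = np.kron(flip, flip)
print(factorizes(T_free))   # True -> separate laws for A and B

# Interacting: B is driven toward matching A (a correlated, CNOT-like update).
T_int = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        i = 2 * a + b
        T_int[2 * a + a, i] += 0.95       # B copies A most of the time
        T_int[2 * a + (1 - a), i] += 0.05
print(factorizes(T_int))    # False -> one non-factorizing joint law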
So you mentioned that the stochastic dynamics somehow encoded, had the memory of, I know you
don't like this word memory, but somehow encoded what happened before into itself.
So if I was to think of that as information that's being encoded, well information if
you accumulate enough of it you form a black hole in a small enough region.
So does this mean that if entangled particles are entangled for long enough, then they'll
just form a black hole because the dynamics between them encode so much information?
Help me decode this question.
Yeah.
So I think there is a sense in which the overall indivisible stochastic map is encoding sort
of cumulative,
like, statistical connections.
But even that, I mean, I'm really sort of fishing for metaphors here when I say that
because it's not really memory in the traditional sense.
Again, in a traditional non-Markovian process, the way we usually talk about non-Markovian processes,
we have this hierarchy, this tower of higher and higher order conditional probabilities conditioned on more and more
and more facts that are all different.
And they contain a huge amount of information.
A Markov process is what happens when you assume
that all of them are equal to the first order ones.
They're there, but they just, they're all equal
to the first order ones, so they don't contain
any actually interesting information.
In an indivisible stochastic process,
they're not there at all.
Like we don't have all this information stored.
There's just the statement that the probabilistic description of the later configuration
of the system depends on its initial configuration, and that can happen over a long stretch of time.
It can be the configuration of the system at some past time.
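In symbols, and this is my own schematic notation rather than anything quoted from the conversation, the contrast looks roughly like this:

```latex
% Markov: every higher-order conditional collapses to the first-order one,
p(x_{t_n} \mid x_{t_{n-1}}, \dots, x_{t_0}) \;=\; p(x_{t_n} \mid x_{t_{n-1}}).
% Indivisible: only transition probabilities anchored at the initial time
% (or the most recent division event) t_0 are part of the description,
\Gamma_{ji}(t, t_0) \;=\; p(X_t = j \mid X_{t_0} = i),
% and in general there is no decomposition
\Gamma(t, t_0) \;=\; \Gamma(t, t')\,\Gamma(t', t_0)
% into valid stochastic matrices at intermediate times t'.
```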
It's not that information is being encoded in a literal sense.
It's not the kind of information that the Bekenstein bound applies to, the kind that could saturate
the maximum amount of information you could have in some region of space and might
necessitate a black hole forming. So I would just say it's not information, I think,
in the sense of being encoded on physical qubits in space that would back-react on
spacetime and have gravitational effects.
It's just the laws are a little weird and stranger than we might have thought.
I see.
Okay, so tell me about the loss of phase information.
We talked about this off air, but explain this on air.
So one question you might ask is, okay, well, when I do this change of representation between
the stochastic process that has no complex numbers in it, no phases, but indivisible
dynamics and I go to this sort of quantum system where I've got phases and all that
sort of thing, right?
It seems like the phase information is really important.
I mean, we need it in order to make predictions of that interference.
How could it be missing from the indivisible side?
Well, the point is it's not missing.
The phases on the Hilbert space side are just the indivisibility on the...
So they're there. They're just manifesting somewhat differently.
But even then you might say, well, but come on.
I mean, I can indirectly measure those phases.
If I, like, take a unitary time evolution matrix
that I'm using to describe evolution on the Hilbert space side
and I, like, mod square the entries and I lose all the phase information,
how can that possibly still capture the same information?
How can it possibly do it?
And the answer is, in this picture, when you model a measurement process, you have to bring the measuring device in, just like Bohm did when he was writing those later chapters in his 1951 textbook
on the measurement process or in his Bohmian mechanics pair of papers in 1952. You have to bring the measuring device in.
And when you do that and describe the whole thing
as one giant indivisible stochastic process,
you don't need the phases.
You just run the overall indivisible stochastic process
with the measuring device, and it will probabilistically end up
in one of its measurement, reading, outcome configurations
with probabilities that agree with the predictions of the Born Rule.
And then the phases are immaterial.
You don't need them.
If however, I want to excise the measuring device from my formal description of the system,
if I don't want to deal with the whole measuring device, if I just want to remove it and just
look at the subject system and ignore the measuring device,
treat the measuring device
as kind of like a background character,
not someone who's in the foreground of the story.
Then I need the phases to make predictions.
And then I would replace
the detailed physical measurement process
with a von Neumann–Lüders collapse.
I would use the textbook Dirac–von Neumann axioms.
So what I'm saying is, the textbook Dirac–von Neumann axioms aren't going away.
We're just identifying them as describing a certain regime of validity.
When you're doing a standard measurement with a big measuring device on some microscopic system,
you could model the whole thing and include the measuring device and do everything,
and then you don't need all those phase factors.
You can just run the whole thing as some overall giant stochastic process,
and you'll get the right answer.
This is all worked out in detail in the first paper, the Stochastic-Quantum Correspondence paper.
But if we don't want to go to all that trouble, if we want to simplify our description
and ignore the measuring device, treat it as a background character,
and just focus on the system in question, and the system is microscopic,
so we don't run into the ambiguities that we might otherwise
run into, well then we can ignore the measuring device, we can treat the measurement as an
instantaneous collapse process, and then we do need to worry about those phase factors.
So the phase factors are a way of encoding not just the indivisibility, but also the
unseen measuring device.
That's one way to think about what happens to those phase factors.
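A small numerical sketch of that point, with an invented rotation matrix standing in for the unitary time evolution (the specific matrices are mine, not from the papers): mod-squaring the entries of the full unitary gives valid transition probabilities, but those mod-squared matrices don't compose across an intermediate time, which is one way to see what the phases were keeping track of.

```python
import numpy as np

# Mod-squaring a unitary's entries gives a valid transition (stochastic) matrix,
# but those matrices do not compose across intermediate times -- one way to see
# why the dynamics is "indivisible" and why phases matter if you insist on
# splitting the evolution into pieces.

def rot(theta):
    """A real 2x2 unitary (rotation) as a stand-in for time evolution."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

U1, U2 = rot(0.4), rot(0.7)

# Transition matrix for the full evolution 0 -> t'': Born-rule probabilities.
G_full = np.abs(U2 @ U1) ** 2
# Naive attempt to divide it through an intermediate time t':
G_naive = (np.abs(U2) ** 2) @ (np.abs(U1) ** 2)

print(np.allclose(G_full.sum(axis=0), 1.0))  # True: columns are probability distributions
print(np.allclose(G_full, G_naive))          # False: the mod-squared pieces don't compose
```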
This sounds like Copenhagen still.
So how is this not Copenhagen?
Okay.
So I mentioned that Heisenberg wrote a lot of philosophy.
He wrote a book called Physics and Philosophy, which we can also link to.
People can find it.
And there's a chapter called The Copenhagen Interpretation.
He describes what he saw as the Copenhagen Interpretation.
Now, there's not agreement or consensus
on exactly what the Copenhagen Interpretation means.
And different people who are responsible
for what we think of as the Copenhagen Interpretation
had somewhat different views on it.
Let me just describe how Heisenberg basically described it.
He said that, he basically said,
well, Kant told us that our human brains
can only understand the world in certain ways.
We understand the world in terms of three-dimensional geometry and cause and effect.
But there are certain ways in which we just understand things. This is how our brains are supposed to work.
And the quantum world simply doesn't work in those ways.
It doesn't work in ways that our brains can understand.
The classical macroscopic world does, and we have good theories for the classical macroscopic world.
We've got classical mechanics, classical physics. The microscopic world is simply beyond our
comprehension. So we use the mathematics of quantum mechanics, Hilbert spaces, wave functions,
the Schrodinger equation, not because we think the world literally is these things, the wave
function is real, but merely because they just give us a formal instrumentalist, meaning just a tool set for making predictions.
They give us a set of mathematical tools for predicting what will happen back on the macroscopic
classical scale.
A big macroscopic system sets up the experiment, a big macroscopic measuring device measures
it.
What's happening in between, we have no ability to understand.
We use the weird mathematics of quantum mechanics to make the predictions about what will happen.
But really, at the end of the day, everything has to then show up in some classical results.
And that's the picture of the Copenhagen interpretation, at least according to Heisenberg.
And he had some words he said about where the probabilities come from. He's like, well,
there's an uncertainty principle, and for big macroscopic systems, we're all kind of uncertain.
And when microscopic systems interact with macroscopic systems,
that's where the probabilities come in.
He had a somewhat more sophisticated picture about all of this.
And people can go and read his chapter on all of this.
This is not Copenhagen because I'm not practicing
the same kind of quietism about the micro world
that he was practicing.
I'm not saying we don't know what's happening in the micro world.
I'm not saying we basically only have classical physics, and then the micro world
is inscrutable to us, so that we need this other theory to describe the micro world whose only job is to make predictions.
I'm saying the micro world has an ontology.
I'm saying that classical things have physical configurations, measuring devices have physical
configurations.
Measuring devices are emergent from atoms,
and that's okay now because the atoms also have an ontology.
The atoms really exist. They really do have configurations.
And when you're doing the experiment, the particles are really doing things.
They're moving in particular ways.
The laws are these indivisible stochastic laws, which are a little bit unintuitive,
but things are really happening between the measurements.
And now I can hopefully tell, at least in broad outlines, a picture of emergence,
a story about emergence, where we have the particles or whatever the ontology is,
it could be fields, particles, whatever, and then larger macro scale things emerge from them
the way, in spirit at least, that fluid water emerges from water molecules.
The Copenhagen interpretation doesn't do that.
You can't talk about how the classical world is emergent because the Copenhagen interpretation
practices quietism about the micro world.
It doesn't say what is there in the micro world.
It doesn't posit any kind of substrate, any lower level reality, physical reality out
of which the emergence of classical things is supposed to happen.
So these are all ways in which this picture is distinct
from the Copenhagen interpretation.
And of course, the Copenhagen interpretation also has this weird
unspecified boundary between what is quantum
and microscopic and what is classical and macroscopic.
This is the so-called Heisenberg cut.
There's a threshold above which you're classical
and below which you're quantum.
And that's a murky line, and people have debated whether it's really there or whether the idea
is you can move it around.
But in any event, it's not part of the indivisible stochastic approach.
Are electrons single particles?
Are they composite or are they point particles in your picture?
I don't know what they're made out of.
Our best theory, the Standard Model, describes electrons as not composite.
So I don't know if they're made of anything else.
I mean, there's also this interaction between electrons and the Higgs field, which is complicated.
But they're not any more or less composite in this picture than they would be according
to the Standard Model.
Okay.
So something I'm interested in is research.
What open questions does this pose?
Where can people come in to help you with this theory?
Yeah. So what I find exciting about this project is
it doesn't often happen that you stumble on a blank canvas
in an area of what you might've thought was settled fundamental
physics where you can ask questions that really have no answers yet and there are a lot of
directions that people can pursue when it comes to research.
This project opens up a lot of these directions.
One of them is just the mathematics of this new class of processes, these indivisible
stochastic processes, which only showed up in the research literature in like 2021, in this review article
by Simon Milz and Kavan Modi, which we can also link to so people can look at it. It shows
up in this sort of figure in their paper, there's like figure five in this paper. You know, mathematics has all these very simple ideas like functions, matrices, limits, derivatives
that are reasonably simple to define, but yet have profound implications.
It's not super often that you see relatively simple ideas, simple mathematical ideas that
have big applications and ramifications.
Indivisible stochastic processes are a fairly simple idea that I guess people just didn't
really think about.
And so there's just some interesting work to be done on trying to understand the mathematics
of these processes.
That could be interesting work for someone interested in math, applied math, theory of
stochastic processes.
We talked about how you would model real world systems like quantum field theories, like
the standard model.
There's a lot of work to be done in taking this picture and applying it to systems that
show up in solid state physics and high energy physics and the standard model to make sure
it works for one thing and also to see if it reveals any interesting features of these theories that might have been difficult
to see otherwise.
Dynamical symmetries are a really important subject in physics.
Dynamical symmetries show up in a very interesting way in this approach and so there's a lot
of work to be done there.
There are old problems in statistical mechanics. So one of the outstanding problems in the philosophy and foundations of statistical mechanics is
where do the probabilities in statistical mechanics come from?
In classical statistical mechanics, you're imagining you've got particles, like a gas is made of particles,
and the particles are all evolving because it's classical, according to Newtonian mechanics,
the rules of Newtonian mechanics, but Newtonian mechanics is not a probabilistic theory.
There's this lovely argument by the philosopher of physics, David Albert,
that there's nothing whatsoever in the laws of Newtonian physics that would preclude a bunch of rocks
spontaneously falling together to form a bunch of statuettes of the royal family.
You might think that's impossible, but it's not impossible.
I mean, after all, you could start with statues of the royal family
and have them crumble into rocks.
And because Newtonian physics is time-reversal invariant,
the opposite should be possible.
And yet, we would just say that's unlikely somehow.
But Newtonian mechanics doesn't come with probabilities,
so where do those probabilities come from? One argument is the probabilities come from the initial state of the universe.
The universe began in some initial state, but of course, there was one initial state of the universe,
not a probabilistic collection of initial states.
So there's some work to be done in understanding how we go from the beginning of the universe in some sense
to some notion of probability distribution.
And it has to be the right kind of probability distribution.
On the one hand, it should be the kind of
probability distribution that doesn't lead to rocks
forming into statuettes of the royal family,
because we don't see that around us.
We don't see that happening.
We look around and we don't see rocks
spontaneously assembling into statuettes
of the royal family.
And so hopefully we're looking for some kind of explanation for why we don't see that happening.
Oh, what I mean is if you were to wait around for long enough, wouldn't you see it?
Maybe, but only if the set of possibilities is bounded in the right sense.
Right? If the number of possible configurations of the universe is unbounded,
there's no requirement you ever have to revisit or visit every possibility.
If there's only a bounded, a so-called compact space of possibilities,
then there are arguments that eventually you have to get recurrences or you have to visit everything.
But in any event, in the time we've had since our universe has existed, we have not seen that happen.
We've not seen rocks spontaneously form.
What I mean is like, even if we have this space that's not bounded,
some events will occur that will be extremely, extremely unlikely.
Yes.
Of the same order of magnitude of unlikelihood, if not greater, than the royal family statuettes re-forming.
That's right.
But we don't expect them to happen all the time.
Right?
We live in a universe where they happen, but only rarely, not all the time.
It'd be very weird if this were happening
all the time all around us.
How do we explain why it's not happening
all the time around us?
Somehow this is connected with how the universe began.
The universe began in some kind of configuration
that was very typical in some sense.
It was very generic.
It was very boring.
It didn't have the very special arrangements
that would lead to us seeing strange,
unlikely things happening all the time.
But we can't make it too typical
because there's some sense
in which the most typical initial configuration
is just very random and in some loose sense,
very high entropy.
We actually need the initial,
the beginning of the universe to begin
in a low entropy configuration
so that we get a well-defined thermodynamic arrow of time.
David Albert calls this the past hypothesis.
So there's something mysterious going on about the beginning of the universe if you're living
in a deterministic universe, where the laws are deterministic, because how else do we
get probabilities out?
They must come from some statement about the initial conditions, but those initial conditions
in the universe must be such that we began in low entropy and are rising
toward high entropy and yet are typical enough
that we don't see surprising things happening all the time.
In a theory in which the laws themselves are probabilistic,
stochastic, we don't have the same kinds of problems.
If the laws are themselves stochastic,
we're getting probabilities out of the laws. We don't need to get them out of the initial
conditions of the universe. So this gives a whole other way to think about where
the probabilities of statistical mechanics can come from. Now, one might ask,
okay, does that mean that all statistically fluctuating things in
statistical mechanics and in thermodynamics are ultimately quantum
mechanical in origin?
That's not quite the way I would phrase it.
The way I would say it is we need some source of probabilities
in order to get things like statistical mechanics off the
ground.
You need some statement like all else equal all of the
possible configurations or states of a system that are energetically accessible
are in some sense equally probable, right?
The technical term for this assumption
is that we're assuming the micro-canonical ensemble,
but it's basically all else equal
if a system can have lots of states
and they're all available, the system can get to them,
we should treat them all as being equally probable
unless we have some good reason to think otherwise.
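For reference, the standard textbook statement of that "all else equal" assumption, the microcanonical ensemble, is just equal weighting over the accessible microstates (nothing here is specific to the indivisible stochastic approach):

```latex
% Microcanonical ensemble: if \Omega microstates are energetically accessible,
% assign each the same probability,
p_i \;=\; \frac{1}{\Omega}, \qquad i = 1, \dots, \Omega .
```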
How do we get that off the ground?
There were arguments for a while that maybe systems
just rapidly oscillated and changed,
even according to Newtonian mechanics
in a way that was called ergodic.
Ergodic systems are systems that rapidly explore
their possibility or state space very rapidly,
so rapidly that you can sort of pretend
that the system is equally likely to be in any of its states.
Unfortunately, proving that systems are ergodic is very hard.
And there are many systems that are known not to be ergodic.
So the ergodic hypothesis turns out not to hold
for a lot of systems.
There have been some information theoretic arguments
to get this off the ground.
But then you run into some very deep questions like,
if the probabilities are all just in my head,
how can the probabilities actually lead to coffee boiling or something like that, right?
It feels like the probabilities should somehow be out there in nature because they seem to
be doing physical work in some general sense.
So information-theoretic approaches toward trying to derive the equal probability of all the microstates
are very hard. But theories that have probabilities in the laws
provide a different way to get probabilistic behavior
at this sort of necessary level.
Once you've got this probabilistic behavior
and can talk about Boltzmannian statistical mechanical systems,
you can then take these Boltzmannian statistical mechanical systems
with sort of all the states being assigned probabilities in roughly equal amounts,
you can couple them together.
You can take big, big, big systems called reservoirs, which model the environment and
little systems.
And you can, from these interactions, derive notions like thermal equilibrium at some temperature.
And then you can derive what's called the canonical ensemble,
which is the probability distribution we would associate to a system
that is energetically interacting with a larger,
a very large environment called a reservoir.
And these systems will exhibit fluctuations that are thermal fluctuations.
And those thermal fluctuations are distinct from quantum mechanical fluctuations.
So there's like a higher level of fluctuations, thermal fluctuations that you get for these
systems.
It's not that you need the indivisible stochastic approach to explain that higher level of emergence
of thermal fluctuations.
It's that you need these indivisible stochastic approaches or something like it or you need
probabilities from somewhere to get statistical mechanics, Boltzmannian statistical mechanics
off the ground. And once it's off the ground, you can then help yourself
to all the tools that have been developed for statistical mechanics
to understand the emergence of temperature,
the emergence of thermal equilibrium,
and the emergence of thermal fluctuations.
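The standard result being alluded to, again textbook statistical mechanics rather than anything specific to this approach: a small system exchanging energy with a large reservoir at temperature T ends up described by the canonical ensemble,

```latex
p_i \;=\; \frac{e^{-E_i / k_B T}}{Z}, \qquad Z \;=\; \sum_j e^{-E_j / k_B T},
% and the spread of energies under this distribution is what shows up
% as thermal fluctuations.
```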
Earlier when you said that it's not just all in our heads
because the water is boiling and doing something,
are you referring to that some people think randomness is about our ignorance?
Right.
Right.
Yeah.
So one way to think about probability is that probability is objective chance type probability,
that nature is really behaving in kind of a chance-y, unpredictable way, that phenomena
are happening in an unpredictable way.
Another view is that the probabilities are all in our heads, right?
When we assign probabilities to things, we're talking about what are called subjective credences,
credences or degrees of belief. When we assign probabilities to things, we're not saying
the probabilities are really out there in any sense. We're just describing like our
belief and whether something is actually a particular way or not. And there's a relationship
between objective chancy probabilities and subjective credence probabilities. It's most famously formulated
as what David Lewis called his Principal Principle. The first word is Principal,
P-A-L, and the second is Principle, P-L-E, which is just to say that if you happen to know the
objective chance for something and you condition on that, then your credence should be equal to
the objective chance.
There's a connection between objective chance and credence.
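One standard schematic way to write Lewis's Principal Principle (the notation here is mine; Cr is credence, Ch is objective chance):

```latex
Cr\!\left( A \,\middle|\, \mathrm{Ch}(A) = x \right) \;=\; x
% i.e., conditional on knowing that the objective chance of A is x,
% your subjective credence in A should also be x.
```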
But in these sorts of pictures,
we acknowledge that there are different kinds of probability.
There are objective chance probabilities,
there are subjective credence probabilities.
From time to time, people have tried to say
there is only one kind of probability.
Maybe all there is is just subjective credence probability
and there is no fundamental
objective chance probability or vice versa, I guess. Maybe we'll talk a little bit about
that in the context of Everettian quantum mechanics in a little bit because it does
show up in that context. But the question is if all probability is really just subjective
credence probability, then how can subjective credence probability in our heads underlie Boltzmannian statistical mechanics, which
underlies thermodynamics and thermal fluctuations and all this stuff that happens in the world
around us?
I mean, if you just happen to know the exact specific state of a system, and now that specific
state has 100% probability or nearly 100% probability, have we just mentally, now
that my knowledge has changed,
changed all the probabilities,
and does that suddenly make thermodynamics stop working?
That's obviously too quick a statement,
but there is a mystery, a little bit of a mystery here
around like, could it be just that all the probabilities
are in our heads, or is there something random
in some sense happening actually in the physical world?
The reason this is very tricky is because coming up with a self-consistent, unambiguous,
rigorous theory of objective chance turns out to be very hard.
That's one reason why people have retreated to thinking that probability is all credence
because if it's credence, it's okay if it's not perfectly rigorous.
Objective chance probability is very hard to specify.
It runs into all kinds of basic problems.
What does it mean to say that some thing out in the world
objectively has a chance of 72%, or 0.72?
You might say, well, it means that in the long run,
if you repeat it many times, 72% of the time,
it will come out a certain way,
but that's actually not true, right?
If you take a coin, for example,
and you believe the coin is a 50-50 coin
and you flip it 10,000 times, it's not going to be,
if it's a fair coin, it's not gonna be heads 10,000 times, it's not going to be,
if it's a fair coin, it's not gonna be heads 5,000 times.
It'll be heads a little off of 5,000 times.
But if you think about it hard enough, you realize,
but actually there's a chance it could be heads every time.
It's very unlikely it could be heads every time,
but it could be heads every time.
And if you try to say something like,
well, okay, we need to take some kind of limit,
maybe in the limit as the number of flips goes to infinity,
it's like exactly 50%.
But that's not how limits work.
What about if the coin has a propensity to be 50-50?
Well, propensity theories of chance are tricky
because what is a propensity?
A tendency to yield results 50% of the time,
but you see it's like circular.
It's like very hard to pin down what you mean by propensity.
Propensity theories of chance say that
there are just certain objects that have,
they want to do something in a certain way,
but then how does, what does the 50% mean, right?
Are you saying that they want to do it a certain way,
this fraction of time,
but then we run into the same problems we have here.
This theory of probability, that it's about frequency ratios,
whether they're propensities coming from the object or they're from the laws or whatever,
that they're about the frequency with which you get certain results, is called frequentism. And
frequentism is
tough to make rigorous.
You might say, we'll just take the limit as n goes to infinity, take the number of trials to infinity.
But that's not how limits work.
A limit, when you say that a certain sequence of things
has a certain limit, what you're saying is
that if you go beyond a certain term in the limit,
you go beyond the nth term in the limit,
then all the later terms are closer
to the claimed limiting value than whatever,
you give me some error, some epsilon,
I can find a far enough distance along the sequence
that everybody farther along is closer
to the claimed limit than epsilon.
If you make epsilon smaller,
I just go farther down the line.
Make epsilon smaller, I go farther down the line.
As long as I go far enough down the line,
everything later down the line
will be closer to the limit than epsilon.
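For reference, the standard definition being paraphrased: a sequence a_1, a_2, ... has limit L exactly when

```latex
\forall\, \varepsilon > 0 \;\; \exists\, N \;\; \text{such that} \;\; n > N \;\Longrightarrow\; |a_n - L| < \varepsilon .
```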
Probability doesn't work that way.
Frequentist probability doesn't work that way.
There's no number of times you can flip a coin
that will make, for sure, its frequencies closer to 50%.
You could flip a coin a billion times,
and it could land heads every single time.
It's unlikely, but it could happen.
So if I give you some epsilon,
you can't give me any n, any number of flips that will guarantee
that it will fall closer to 50% than that.
You might roll your eyes and say, oh, come on, but it's unlikely to do that.
It's likely to be closer than epsilon, but the word likely is probability.
What you can say is that if I flip the coin enough times, I can make the probability that it is farther than epsilon
away from 50%
smaller than epsilon.
But that's just relating one probability to another; it's a totally circular definition. The law of large numbers is phrased this way.
It's just a circularity relating one kind of probability to another. And the formal way to describe this is that when you do a limit,
you have to have a notion of a measure, right?
A notion of like a metric, I'm sorry, you have to have a metric, you have to have a
notion of like how far away something is from something else.
And for probabilistic systems, the metric itself is a probabilistic metric, that's
what we're using for distance.
And so any attempt to use limits with a probabilistic metric to describe probability is going to run into the circularity objection.
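The weak law of large numbers for a fair coin makes the circularity explicit: with S_n the number of heads in n flips,

```latex
\Pr\!\left( \left| \tfrac{S_n}{n} - \tfrac{1}{2} \right| > \varepsilon \right) \;\longrightarrow\; 0
\quad \text{as } n \to \infty, \text{ for every } \varepsilon > 0 ,
% and note that the convergence statement is itself phrased as a probability --
% one probability is being controlled by another.
```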
Nonetheless, even though we don't have a rigorous theory of frequentist probability, we certainly have
an intuition that when we look at a long sequence of coin flips,
or a long sequence of ones and zeros, we can distinguish a highly random sequence from a non-random sequence.
If we look at 10,000 zeros and ones,
and we discover that about 50% of them are zeros
and 50% of them are ones,
and furthermore, runs of zeros,
runs of three or four or five zeros in a row,
or ones, three or four or five in a row,
occur with certain frequencies.
And this sequence obeys a number of other criteria
for randomness, like, you know,
the various criteria for randomness.
There's the right kinds of lack of correlation over time.
There's all these things you can run on a sequence of 10,000.
We would look at that and we'd go,
that to me looks like a random sequence
that was generated by a 50-50 coin.
It's not rigorous, you can't make it rigorous.
And maybe there will never be a perfectly rigorous
theory of probability at the level of,
like, frequentist probability,
but when you look at a long sequence,
there's at least an approximate notion
that certain sequences seem to have
all the hallmarks of probability.
So maybe we don't need a theory of probability
for statistical mechanics, maybe it's enough to rely
on randomness suitably defined.
There are terms that come up, people talk about Kolmogorov complexity
for characterizing how random a sequence is.
Maybe we can rely on those instead of relying on probability.
Maybe probability is all in our heads, and what's out there in nature
is something like complexity or Kolmogorov complexity or randomness.
Or maybe nature is just inherently chancy.
So there are a lot of ways to think about these kinds of problems.
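Here is a rough sketch, in code, of the kind of diagnostics being described. The particular checks, and the use of a compressor as a crude, computable stand-in for Kolmogorov complexity (which is uncomputable in general), are my own illustrative choices, not taken from the conversation's sources.

```python
import zlib
import numpy as np

# Simple hallmarks-of-randomness checks on a long 0/1 sequence.

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=10_000)

freq_ones = bits.mean()                                   # hallmark 1: about 50% ones
avg_run = np.diff(np.flatnonzero(np.diff(bits))).mean()   # hallmark 2: short runs (about 2 on average)
lag1_corr = np.corrcoef(bits[:-1], bits[1:])[0, 1]        # hallmark 3: little correlation over time

raw = np.packbits(bits).tobytes()
ratio = len(zlib.compress(raw)) / len(raw)                # near 1.0: random-looking bits barely compress

print(freq_ones, avg_run, lag1_corr, ratio)
```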
Now, you just mentioned measure, incidentally,
but there's a problem of a measure in the many worlds interpretation.
We should talk about these other...
So why bother introducing a new interpretation of quantum theory at all?
Don't we already have enough interpretations?
I mean, there are a lot of people who are like,
we don't need any more interpretations.
The world just keeps adding more and more of them.
Why do we need any of them?
Here is the reason I think we need
a new interpretation of quantum theory.
The existing interpretations suffer from one of the following,
or more than one of the following problems.
Vagueness, they're vague about things
they shouldn't be vague about.
Or they're instrumentalist, which means they only talk about what happens in
measurements, but then what are measurements and what are measuring devices?
And are measuring devices made of things?
And you run into all these circularity problems.
You run into measurement problems, basically.
Or they're ambiguous when trying to deal with systems that are macro size.
We've talked about the Wigner's friend thought experiment.
Once you've got systems that are of the same size as, you know,
big classical measuring devices, does the theory render
unique or unambiguous predictions?
Or the theory is empirically inadequate.
Like it works for some systems, like Bohmian mechanics works pretty well
for systems of fixed numbers of finitely many non-relativistic particles,
but doesn't appear to be empirically adequate enough to be able to handle the standard model.
Or finally, the theory relies on too many extra-empirical assumptions, axioms,
and speculative metaphysical hypotheses.
That is, to get the interpretation to work, we have to take on a whole collection of assumptions that
cannot be verified empirically and that seem kind of like desperate measures or seem very
far-fetched or seem difficult to justify, except that they give us the interpretation
we want.
So those are the problems, I think, that all the existing interpretations have.
All of them have one of them.
I mean, Bohmian mechanics suffers from it doesn't appear to be empirically adequate.
The philosopher of physics, David Wallace, who is at University of Pittsburgh, wrote
a paper that I think characterizes this very neatly.
He says, you know, the sky is blue, and our best theory of why the
sky is blue is based on what's called Rayleigh scattering.
Rayleigh scattering is when you, I teach Jackson electromagnetism, we cover Rayleigh scattering.
When electromagnetic radiation impinges on charged particles, the charged particles vibrate and re-radiate radiation
and they do it, they radiate power according to a certain frequency dependence that favors high frequency radiation.
So you get much more scattering of high frequency radiation than of low frequency radiation.
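For reference, the textbook frequency dependence being described is

```latex
P_{\text{scattered}} \;\propto\; \omega^{4} \;\propto\; \frac{1}{\lambda^{4}} ,
% so high-frequency (blue) light is scattered much more strongly than
% low-frequency (red) light, which is the standard Rayleigh-scattering
% account of why the sky is blue.
```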
Bohmian mechanics at this point doesn't seem capable of explaining Rayleigh scattering.
And it's been around.
I mean, de Broglie first introduced pilot wave theories in the late 1920s. Bohm again independently discovered them and then eventually began talking with De Broglie
in the early 50s.
It's been over 70 years now.
And the inability of Bohmian mechanics to account for these sort of familiar features
of our physical world is a sign of empirical inadequacy.
And that's a problem.
Copenhagen, well, instrumentalism, vagueness,
what is it measuring?
I mean, Copenhagen interpretation has lots of problems.
We've talked about all of those.
There are spontaneous dynamical collapse approaches to quantum mechanics.
And there are some of those, I think, that are still viable that haven't been ruled out empirically.
Some of them have now been empirically ruled out.
That means they're not empirically adequate.
They often involve some ad hoc choices you have to make.
They have to introduce sort of ad hoc parameters.
What's the time scale of which collapse is supposed to be taking place?
But some of those are still live possibilities.
People are working on them and people should work on them.
I mean, I'm not saying people should stop working on any of these things.
We should see if Bohmian mechanics can be made empirically adequate.
We should see if dynamical collapse approaches can work.
But so far they don't yet. They don't yet work.
And then there are other things that are even farther away from these things, like QBism.
So QBism, which comes from quantum Bayesianism, is associated with Chris Fuchs, who's at
University of Massachusetts at Boston.
And quantum Bayesianism, you know, begins with the idea that probability really is in
our heads.
There isn't really physical probability out there.
And that quantum mechanics, the formalism of quantum mechanics, is really a methodology
for dealing with uncertainty, for dealing with uncertainty about the world, and it's a particular mathematical framework that you
need to use to do this.
It purports to not be anti-realist.
It purports to be compatible with the idea that there is in fact a fact of the matter
about what's going on behind quantum mechanics, but it hasn't yet been able to formulate what
that picture is supposed to look like.
And I feel so bad because every time Chris gives a talk, at some point, you know, in
the question session, I'll raise my hand and I'll ask Chris this question about, well,
where's the picture?
What's the ontology?
What's going on?
He says they're just not ready to provide that yet.
I feel very bad whenever I ask him that because he's so nice and patient with me when I say
these things.
But, so I think the problem is,
we kind of don't have a place to stand, right?
I think one view is, what's the hurry?
What's the emergency?
Why do we need another interpretation?
Just do the textbook quantum theory,
Dirac von Neumann or Copenhagen or, you know,
Bohmian mechanics or whatever it is you want.
There's no rush, there's no problem, there are too many interpretations already. Actually, I would say there are too few.
We do not have a problem of underdetermination
with too many viable interpretations for one theory.
We have a problem of overdetermination, or at least a potential problem.
We don't have a single interpretation in my view that works, that meets all the requirements I laid out,
that doesn't have these serious
problems.
And without one, we're in danger.
We really, we're like at sea without a raft.
We need something.
And that's why I'm, I think that the time is due for a new interpretive approach.
Now, I've talked about the indivisible stochastic approach.
We've talked about many of its features.
We've talked about open questions.
And there's more open questions, right?
I mean, there are potential applications to quantum simulation and quantum computing
that people should think about.
I mean, after all, if Hilbert space pictures are dual to stochastic pictures, that may
mean that quantum hardware could be very good at simulating certain kinds of stochastic systems efficiently.
Just a moment. It's not exactly dual because you said it's many to many.
That's right. It's many to many. But the idea is that a given Hilbert space picture can describe many different stochastic systems.
That's good. It may mean that with quantum hardware, we can simulate many kinds of stochastic systems
that might have been difficult to simulate otherwise.
So one area of inquiry people can look into is, you know, and I'm certainly thinking about
this, is are there applications of this picture to finding new ways to simulate more general
kinds of stochastic systems, especially stochastic systems outside the Markov approximation,
using quantum hardware in an efficient way.
And then there's more formal stuff.
There's a whole formulation of quantum theory in the language of C star algebras we talked
about in our first talk.
What's the C star algebraic formulation that's appropriate for this kind of a theory?
And do we need something like that to talk about certain kinds of physical systems?
If we're not starting with Hilbert spaces anymore, then we're not beholden to Hilbert spaces.
We're not trying to build on top of them or modify Hilbert spaces.
We're starting at a different place.
We're starting just with ordinary probability theory.
Does this lend itself to generalizations of quantum theory that would have been impossible to get to if we'd started with Hilbert spaces?
Can we...
So when we start with a Hilbert space, the worry is that if you modify the Hilbert space picture in the wrong way, you'll get nonsense.
You'll get probabilities that are negative, or probabilities that sum to more than one, or things that don't make any sense.
But if you begin with a theory phrased from the beginning in the language of old-fashioned probability theory,
you're not at risk in the same way that generalizations are going to lead to
inconsistent or nonsensical results. You're already starting with probability; you don't need to get to probability from something else.
When you start with Hilbert spaces, it could be the path you take to probability could break down.
If you modify Hilbert spaces in the wrong way, the path to get to good probability breaks down.
If you begin with probability, you're already there, and you're just not at the same risk of running into
inconsistencies with how you formulate probability.
And finally, as we've already talked about, there could be some potential avenues for rethinking our approaches to quantum gravity.
At this point, it would be great to talk about what is the many worlds interpretation and
what is the fundamental problem or problems with it.
Okay.
So open questions, other interpretations.
I haven't said very much about Everettian quantum theory.
What about Everettian quantum theory?
What about the many worlds interpretation?
Here is the story of the Everettian approach.
There's a cartoon picture.
In the cartoon picture of Everettian quantum theory,
every time you do a quantum measurement,
the universe splits into branches.
You have a cat, cats, you know, superposition alive and dead.
This is the cartoon version.
You measure the cat and now you split.
There's a universe in which there's a you and an alive cat,
and there's a universe in which there's a different you
and a dead cat, okay?
This is how the cartoon picture's supposed to work.
And it seems kind of intuitive,
and if you want to take wave functions to be fundamental,
it seems like, well, this is the natural thing to do with them
if you sort of want to take them seriously.
But you run into problems almost immediately
with this cartoon picture.
One problem is that it's not always 50-50.
If the wave function is root two-thirds times alive cat plus root one-third times dead cat,
you still get two branches.
So in what sense is one of them now two-thirds likely, and one of them is one-third likely?
If they're two branches, how do we...
What does it mean to say that one of the two branches has a two-thirds
probability and the other one has a one-third probability?
How do we connect the branches with the notion of probability I was talking about before,
randomness?
If you've got a 50-50 random sequence, we expect to see zeros and ones according to
some distribution that looks random.
How do we get that picture of probability out of the branch picture of probability?
This is not obvious.
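Writing out the example in standard textbook notation (nothing here is new): the state in question is

```latex
|\psi\rangle \;=\; \sqrt{\tfrac{2}{3}}\,|\text{alive}\rangle \;+\; \sqrt{\tfrac{1}{3}}\,|\text{dead}\rangle ,
% and the Born rule assigns the squared amplitudes as probabilities:
P(\text{alive}) = \tfrac{2}{3}, \qquad P(\text{dead}) = \tfrac{1}{3},
% even though, on the branching picture, there are just two branches.
```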
Now, one thing you might try to do is
argue that somehow when
you have a root one-third branch
and a root two-thirds branch, we should think of the root two-thirds branch
as really two branches and there's like three branches now.
But it turns out that branch counting arguments
don't work very well.
There's a well-known paper from 1989
in Annals of Physics by
Farhi, Goldstone, and Gutmann called How Probability
Arises in Quantum Mechanics.
And you can link it.
People can look at it.
They try to get this sort of counting picture.
You just consider infinitely many, or large numbers of, experiments, large numbers of repeated
trials of experiments, and somehow argue that certain branches in the long run survive and others don't,
and you can sort of count them in some sense, and this is where probability comes from.
These sorts of arguments fell out of favor because they don't work very well.
So what do you do? Well, you could just add an axiom.
You could just say axiomatically, when there are branches,
the Born rule tells you what probabilities they have.
The problem is how do we relate these probabilities
back to the randomness probabilities we're talking about?
Like, what does it mean to say just by fiat
there's a probability here?
But there's actually a deeper problem.
You see, remember we talked about different bases
you could use?
In the Everettian approach,
there's just a giant universal wave function.
And there are infinitely many bases you could pick.
And if you change what basis you pick, then the branches change.
Right? All the components of the universal state vector are the branches.
And if you change your basis, you change the branches.
Which basis are the probabilities referring to? If there is in fact, you know,
parallel universes with probabilities assigned to them, in which basis do we do this? This
is known as the preferred basis problem. And I would add one more thing, like probability,
when you say something is a certain probability, what you're saying is that there are n possible ways it could happen, only one of which is realized.
In the many worlds approach, they all happen.
So is this even a probability at all? Is it even coherent to talk about this using probabilistic language?
And many worlds interpretation forces us to be skeptical about some things that we just see around us.
I mean, we do experiments, we get a single outcome. The outcomes appear to be happening probabilistically.
And the many worlds interpretation denies that that's true.
Right?
If you're gonna do that,
you better have good evidence for it.
Okay.
So what do you do with all these problems?
One argument is to say, okay, the preferred basis problem is kind of a problem,
but maybe nature dynamically picks out a basis.
Maybe as you let the universe evolve, decoherence works out well in only one basis.
There's a particular basis in which, a particular way to decompose the universal wave function.
So that when you decompose it in that way,
decoherence gives you branches that no longer interfere with each other noticeably.
I think that's Sean Carroll's argument in the Mad Dog Everett lecture, and I think you were there.
Yep. It's also the view that is at the center of David Wallace's 2012 book,
The Emergent Multiverse.
This idea is that we don't presuppose a particular basis in which the branches happen.
The universe just evolves, and decoherence just doesn't work in most bases,
but in a certain basis, we get nice, emergent, decoherent, no longer interfering branches. And
that's what dynamically is the correct branching, and the
branches are not fundamental. The worlds are not fundamental,
they're not fundamentally there, they're just useful,
convenient ways to describe the wave function. But now we have
a problem. If the branches are not fundamental, if they're
emergent, we can't have a probability axiom that assigns them probabilities.
You see, the axioms, the fundamental axioms of your theory,
are supposed to refer to fundamental things.
If the branches are emergent, approximate things, not fundamental things,
the axioms cannot say, oh, if at some point in the future
we develop these emergent approximate branches,
then by axiom they'll be assigned probabilities. If the branches are now not fundamental, but merely emergent, merely just convenient ways
to describe what's going on, then it's very difficult to think about how you would make
an axiom that they should be assigned probabilities.
If we're not going to get the probabilities from the axioms, we now have a fundamental problem.
And this is where so much of the work in Everettian quantum theory has happened, this problem of probabilities. If the
branches are emergent things, not fundamental, and we can't assign them probabilities by fiat through the axioms, how do
probabilities happen? Now, I think the argument I would make here is that they don't. If you were compelled to believe in an
outlandish metaphysical picture like the many worlds
interpretation because you had to, because it was empirically
unavoidable, that would be one thing. Like, we look out into outer space and we
see countless galaxies, many billions of light years away, and that
leads us to believe that there is a big universe out there. We see clocks on airplanes move at slightly different rates,
atomic clocks move at slightly different rates.
atomic clocks move at slightly different rates.
That's hard to believe, but we can do the experiments,
and we see this repeated rigorously many times.
It's not that we should never believe outlandish things,
but as Carl Sagan said,
extraordinary claims require extraordinary evidence.
The Many-Worlds Interpretation says that there is an uncountable, you know, an uncountable profusion of universes that are coming out of every single moment, not even just measurements, but all the time.
That's an outlandish statement. And sure, we could believe it if we were compelled to by either rigorous logical reasoning or by just unavoidable empirical results.
But we're just not.
And when you're formulating the many worlds interpretation, you run into this problem of, well, I have the preferred basis problem, I guess I can deal with that by letting the branches be
emergent through decoherence, but then I can't axiomatically assign probabilities anymore.
At that point, you should just give up.
Because you're no longer compelled through rigorous logic
or empirical data that you have to believe in many worlds.
So why are you still trying to chase it down, right?
That is, this extravagant, outlandish metaphysical picture
is no longer forced upon us logically or by experiment.
So why are we chasing it down, right?
Why are we starting with the assumption
that they should be there and we need to somehow
gerrymander our axioms and principles and assumptions
to get the many worlds picture to come out?
And that's the impression that I get
when I see some of the work going on right now, right?
We're not compelled to take many worlds on
as a serious idea.
We can only get it off the ground by adding lots more stuff.
Why are we doing this?
So let me just describe a couple of the routes
people have taken and then we can quit
because that's basically the end of it.
One route is the route that David Wallace takes
in his book, The Emergent Multiverse.
It is an excellent book.
You should link it on the YouTube channel,
and I recommend everybody interested read it.
David Wallace is a fantastic, brilliant philosopher and also trained in physics.
And the book is a beautiful book.
I recommend it to everybody who's interested in quantum foundations.
In that book, he tries to solve this problem of probability.
How do we get probabilities assigned to these things?
By introducing a large number of additional assumptions. And when I have people read this book, I tell them,
read it and then just make a list
of every extra assumption he has to make.
He assumes that we should have the same
metaphysical relationship to many copies of ourselves
as we would if there were only a unique individual
we were to become.
That means you have to take kind of a stand on old questions like the teleporter problem in metaphysics.
The theorem he uses requires invoking a notion of free will that requires taking a compatibilist stance,
because in the many worlds interpretation there's just a deterministically evolving universal wave function, and yet he has, in his proof of the
Born Rule,
agents, which is already a dangerous idea, agents.
We're bringing back agents making choices about which unitary operations they're going to perform.
This is a crucial part of the proof, and he has a little footnote where he admits, yes,
this does entail certain assumptions about free will, but free will is a big problem,
I won't solve it. But that doesn't make the case.
If you're resting on an unsolved problem,
it doesn't make the case that what you're doing
is gonna work.
He introduces a number of what he calls richness axioms
and rationality axioms.
The rationality axioms are supposed to be
general good practices of what it means
to be a rational observer.
These were developed in a one world kind of picture.
And the assumption is that they also work
in a many worlds picture.
Basically the way that one tries to proceed here
is one says, what does it mean to be rational?
It means that you wanna use the tools of decision theory,
the formal, precise, probabilistic
tools for making good decisions called decision theory.
And people who use the tools of decision theory, who are rational, will end up assigning probabilities
to branches according to the Born Rule.
That's, roughly and in very gross outline, how this argument is supposed to work.
Now John Norton, again, philosopher at University of Pittsburgh,
raised an objection to really any such approach to try to get probability out.
In a deductive argument, the conclusion cannot be any stronger than the premises.
If you're trying to get probability to emerge as a conclusion,
there must have been probability already in your premises.
In this proof of the born rule, one is trying to get probability out, so there must be probability
somewhere in the premises.
If you don't assume probability somewhere in the premises, somewhere you must be doing
something that is not legitimate.
And you can see how this unfolds for this decision theoretic argument, which goes back
to David Deutsch also.
There's an earlier version of it in a 1999 paper by David Deutsch.
This was Quantum Theory of Probability and Decisions.
You can also link to that.
The argument is that if you obey the rules of being a rational observer and use decision
theory, you're going to end up assigning probabilities according to the Born rule.
But you can ask, why is that the definition of rationality?
I mean, in a many-worlds type universe,
there are going to be observers who behave rationally
according to the dictates of decision theory.
Some of those observers are gonna be very successful
over 10 years,
and others are gonna be very unsuccessful
because in the many-worlds interpretation,
everything will happen on some branch.
But there are also observers who do not obey the rules
of decision theory.
There's some very irrational observers who just choose
not to follow any of the rules of decision theory,
and there are gonna be branches in which they're gonna be
unsuccessful over 10 years,
and there are gonna be branches in which they're successful
over 10 years.
All those observers are just there.
Right.
And to say that, well, you should just be rational
and obey decision theory by axiom
does not solve the probability problem.
In a one-world picture, where only one future actually happens,
it seems to be the case that people who are rational
and think very carefully about their decisions
and use something like a decision theoretic approach
in the long run over 10 years tend to make more money
or be healthier, or live better lives, whatever it is that you want.
And that gives us reason to think,
oh, these are good rational principles.
If people who follow these principles tend to do better,
I see people who exercise
and people who make good financial decisions and hedge their investments.
They do better.
I go, oh, well, there are good reasons therefore
to do what they do and take on their principles.
But you can't turn it around and say that we're gonna
start with axiomatically, this is the way to be rational,
and then go backward and show that that then entails this is how probability should work.
And that's kind of the sort of reverse argument that's taking place.
I should say that not all Everettians take this decision theoretic view.
Simon Saunders, for example, tries to do probability in a more Boltzmannian statistical mechanical
way by coarse graining and actually counting in some sense, but it's still in its embryonic
form.
Yeah, so there are a lot of approaches
to many worlds interpretation.
And at present, none of them seem to find a way
to get probability off the ground.
And I don't think that you can.
And to the extent that you can
by just taking on more and more assumptions,
you're doing the thing where you're adding on extra-empirical assumptions that can't be verified in an experiment.
I mean, I don't know how to experimentally test that I should have the right relationship to many copies of myself.
That's an extra-empirical statement.
If you have to take many of those on in order to get the picture off the ground, I don't
know how credible it is.
How much credence should I give to a theoretical picture that relies on a tower of SMHs, of speculative metaphysical hypotheses?
I feel like if you have to do all that work to get the theory off the ground,
then it lowers your credence that we should take on such an outlandish idea that there are all these many worlds.
So that's basically where I end up with the many worlds approach.
And this is one of the reasons why I think there's room for another interpretation that's much more conservative,
that says, well, we do experiments, we see one outcome,
maybe that's because there is just one outcome.
And the experiments look probabilistic, maybe that's because they are in fact probabilistic.
Nature is telling us it's probabilistic, we should listen to nature,
rather than saying, nope, nope, nope, got to be deterministic,
there's a universal wave function evolving deterministically,
it's got to be Markovian.
You know, maybe we should just listen to nature
and build a theory around what nature's telling us.
That's, I think, the conservative, non-outlandish approach
that one should take.
I want to know, how is it that you got so great
at being articulate and smooth with your speech?
That's a very, very kind thing to say.
I really appreciate that.
That's really nice of you to say.
I think we all have different strengths.
I'm bad at many, many, many things.
There are a few things I've gotten good at through practice.
There's some things we're all born kind of a little bit good at. We've got like embryonic things that we're sort of good at, and then we hone those things.
I've taught many classes over many years here.
I've interacted with such amazing students, brilliant, idealistic,
just wonderful students who ask all kinds of great questions.
I just think it's practice.
You just talk a lot with people about very intricate topics,
and over time it gets easier.
That's the best answer I think I can give.
There's an Aesop fable I like to bring up with people.
It's about a stag and its antlers.
So there's the stag who's drinking from a pool
and admiring his beautiful antlers.
He thinks his antlers are so magnificent, so glorious.
He goes on and on,
oh, my antlers are really the envy of the animal kingdom.
Then he looks at his legs and says,
but my legs are bony and ugly.
And if only my legs could be as remarkable as my antlers.
As the stag is pondering this, he suddenly becomes aware that a pack of wolves is chasing
him.
So he gets up and runs from the water.
He's trying to get away from the wolves and he sees a forest.
He's going to run into the forest to hide.
And as he runs into the forest, his antlers start getting tangled in all the vines.
And before he knows it, he can't run anymore, he's stuck.
And as the wolves approach him,
he realizes that the thing that he was praising,
his antlers, was his undoing.
And the thing that he thought was, you know,
his weakest feature, his legs,
they were the things that would have saved him.
If it had just been his legs,
his legs would have saved him.
So the reason I bring this up is,
in addition to saying that I think we're all good
at a few things and maybe have difficulty with a lot
of things, some of the things we think we're bad at,
seen in another way are the things we're good at,
and sometimes vice versa.
So I'm gonna say something that anyone
who has known me growing up will laugh at,
because it's so obvious.
I came into this world profoundly lacking in common sense.
Okay.
Okay.
Anyone who's ever known me growing up would say that's the most obvious statement I've
ever made.
Okay.
Profoundly lacking in common sense.
And as I grew up, you know, you get made fun of, you make a lot of mistakes, you do a lot of silly things
because you lack common sense.
And you see it as kind of a weak feature.
You see it as something you're a little bit embarrassed about.
When you get into philosophy and foundations of science,
philosophy of physics, what you see is a lot of people
whose common sense takes them in directions
they shouldn't go.
You see a lot of people who make arguments
or make speculations and make claims
that just seem very common-sensical to them.
And sometimes those are not really rigorously supported.
People can, their common sense can lead them into error.
Suddenly lacking common sense becomes a huge advantage, because when I read a philosophy paper, or I listen to a seminar, or I'm trying to formulate an argument, I don't have the kind of common sense that makes the answers obvious to me.
So I see every argument, and I have to take it apart and really disassemble it and understand
what all the pieces do, because I don't have an intuition, a common sense for how things
are supposed to work.
And what this means is that to some extent, and obviously, I mean, we all make mistakes.
I make errors too.
But I feel like some of the errors I might have made if I had more common sense, I'm less likely to make.
So a thing that I thought was my weakest feature, the stag's legs, in a different context, turned out to be really useful.
Like being on land and having only flippers for your arms and legs.
And then one day you discover the ocean
and suddenly what you thought was your weakest feature
becomes now your greatest asset.
So that's a general lesson I think
that everyone needs to take to heart.
Many of the things we think are, you know,
maybe our weak features can in a different way
actually be a strength.
So if you're the kind of person who has a lot of trouble paying attention to things
but gets super hyper focused on some things and you think that's a problem,
well, it could be a problem in some contexts, but in other situations
it could be a superpower. And we see this all the time with lots of things that people may feel embarrassed about.
And now you're speaking to researchers and potential researchers, people who are younger students, even people who are older students.
There are some people who are 70 and getting their PhD who watch this.
So what is a method that they can use to help figure out or distinguish between what is
an actual good feature versus an actual bad feature that they thought was good?
The best I can say there is experience.
Put yourself in different contexts.
If I had never become someone who worked in philosophy
and foundations of physics,
I might have gone my whole life thinking
that lacking common sense was really bad.
Maybe it is really bad in some contexts, but I wouldn't have seen that there are in fact flip sides to it.
And another thing is just to talk to lots of people and ask them, you know, are there any aspects of themselves that in some contexts they see as bad and in other contexts they see as very helpful.
And if you talk to enough people,
you'll begin to hear them say things that remind you
about things about yourself.
And you'll go, wow, I have this feature
that I'm not feeling great about,
but this person has found a way to really use it really well.
Maybe I should follow their example and do what they do.
So, yeah, that's probably my best advice there for how to do it.
But let me add one last thing in closing, right?
I just taught this class this fall term; we just finished teaching our classes for the fall term.
I said to the students, look, we've talked about a lot of physics in this class.
Sometimes I teach a physics class, sometimes I teach a philosophy class.
This was a physics class.
If in a year you don't remember some or most or maybe even all of the physics that we've talked about, I won't be disappointed.
But we have to be human to each other.
You know, we have to be human beings to each other. And if you forget to do that, then I'll be super disappointed.
You never know when you're interacting with somebody.
Is this somebody who five years from now
is gonna be the right person at the right moment
to play a really important role in your life,
your career, your wellbeing?
You have to treat everybody
like they could potentially be super important to you.
I mean, obviously, if you're getting 100 emails a day, you can't treat every single one that way, just given the sheer amount.
But to the extent that you can treat everybody with basic respect and treat people like human beings and be human to them, you should always do that.
Because you just don't ever know. I mean, obviously, just on its own merits, people should be treated like human beings anyway.
But it's also just a good strategy, because you never know if someone ultimately down the line is gonna end up being important to you.
When students first start here in our PhD program, one of the things I tell them is that one of the most important assets they have is their reputation.
A lot of people think that the right scientific reputation to have is to be intimidating, for everyone to think you're the smartest person in the room, for everyone to be, you know, in awe of your intellect and almost afraid to talk to you, right?
People think that's the kind of reputation you're supposed to develop.
Not everybody does, but some people think that's what you're aspiring to.
And people can often think of figures in their lives, role models in some cases that have
that kind of reputation.
I would argue that's not the right reputation that you should cultivate, that you should
seek to have.
I talked about treating people like human beings.
Your reputation in science, and this is for any students or researchers who want to go into science, is worth its weight in gold.
The kind of reputation you want to have is someone people want to work with, someone
people want to go and talk to and ask questions to.
You want to be the kind of person who when people come to you and ask you questions and
talk with you, when they leave, they feel smarter than they did before.
Because if people come to you and they leave feeling smarter, feeling happier,
feeling like they can go out and do things more confidently,
they're going to want to come back and work with you again, talk with you again.
Yes, there are very successful people who don't have that kind of reputation, who are very intimidating,
and they're successful.
But they would be even more successful in my view,
if they cultivated the kind of reputation that made people want to collaborate with
them, work with them, and importantly support them. Because everybody in every walk of life
at some point will need someone to come along and help them out with something. And if people
see you as someone who is collaborative and helpful, and someone who builds people up
and someone who treats people like human beings,
then they'll be more likely to support you
when you need help.
And that's the kind of investment in your own career
and your own future that I think everybody needs
to take very seriously and think very seriously about.
Thank you, Jacob.
I appreciate you spending seven hours with us.
Curt, it was a delight.
It was a complete delight.
And Addie and Will, it was really just a delight.
Wonderful.
I've received several messages, emails and comments from professors saying that they
recommend theories of everything to their students and that's fantastic.
If you're a professor or a lecturer and there's a particular standout episode that your students can benefit from,
please do share and as always feel free to contact me.
New update! Started a Substack.
Writings on there are currently about language and ill-defined concepts as well as some other mathematical details.
Much more being written there. This is content that isn't anywhere else. It's not on theories of everything, it's not on Patreon. Also,
full transcripts will be placed there at some point in the future.
Several people ask me, hey Curt, you've spoken to so many people in the fields of theoretical physics, philosophy, and consciousness. What are your thoughts? While I remain impartial in interviews, this Substack is a way to peer into my present deliberations on these topics.
Also, thank you to our partner, The Economist.
Firstly, thank you for watching, thank you for listening. If you haven't subscribed or
clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps
YouTube push this content to more people like yourself, plus it helps out Curt directly,
aka me. I also found out last year that external links count plenty toward the algorithm, which
means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it
shows YouTube, hey, people are talking about this
content outside of YouTube, which in turn greatly aids the distribution on YouTube.
Thirdly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the
audio platforms. All you have to do is type in theories of everything and you'll find
it. Personally, I gain from rewatching lectures and podcasts. I also read in the comments
that hey, TOE listeners also gain from replaying.
So how about instead you re-listen on those platforms like iTunes, Spotify, Google Podcasts,
whichever podcast catcher you use.
And finally, if you'd like to support more conversations like this, more content like
this, then do consider visiting patreon.com slash Curt Jaimungal and donating with whatever
you like.
There's also PayPal, there's also crypto, there's also just joining on YouTube.
Again, keep in mind it's support from the sponsors and you that allow me to work on
toe full time.
You also get early access to ad-free episodes, whether it's audio or video; it's audio in
the case of Patreon, video in the case of YouTube.
For instance, this episode that you're listening to right now was released a few days earlier.
Every dollar helps far more than you think.
Either way, your viewership is generosity enough.
Thank you so much.