Theories of Everything with Curt Jaimungal - A New Era in Quantum Mechanics Is Finally Here... | Jacob Barandes
Episode Date: January 30, 2025

As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe

In this captivating episode of Theories of Everything, Jacob Barandes... and I delve into the intricate world of Indivisible Stochastic Processes and their profound impact on quantum mechanics. We explore how these non-Markovian systems introduce quantum phenomena like superposition and interference without the traditional wave function collapse.

Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Listen on Spotify: https://tinyurl.com/SpotifyTOE

Timestamps:
00:00 - Episode Introduction
02:15 - Overview of Quantum Mechanics
05:30 - Introduction to Indivisible Stochastic Processes
10:45 - Understanding Symmetry Breaking
15:00 - Mirror Symmetry in Physical Systems
20:30 - Spontaneous vs. Explicit Symmetry Breaking
25:00 - Real-World Examples of Symmetry Breaking
30:15 - Connection to Cosmology and FLRW Models
35:45 - Implications for the Flow of Time
40:00 - Introduction to the Measurement Problem in Quantum Mechanics
45:30 - Comparing Indivisible Stochastic Processes to Bohmian Mechanics
50:00 - The Role of Philosophy in Physics
55:20 - Historical Interactions Between Physicists and Philosophers
01:05:30 - Deep Dive into Indivisible Stochastic Processes
01:40:06 - Markovianity in Quantum Mechanics
01:42:12 - Linearity and Unitarity in Quantum Evolution
01:43:23 - Unistochastic Processes and Quantum Channels
01:45:20 - Quantum Channels and Stinespring Dilation
01:46:18 - Hamiltonian Formulation Analogy
01:49:05 - Double-Slit Experiment with Indivisible Processes
01:52:08 - Measurement Devices and Emergibles
02:00:04 - Seminar Culture and Philosophy in Physics
02:02:38 - Coarse-Grained Double-Slit Example
02:05:03 - No Wave Function Collapse in Indivisible Processes
02:12:16 - Philosophical Insights and Importance in Physics
02:18:08 - Critique of David Griffiths' Quantum Mechanics Textbook
02:35:07 - Closing Remarks and Future Topics

Links Mentioned (additional links in comments):
- Jacob’s website: https://www.jacobbarandes.com/
- Jacob’s first appearance on TOE: https://www.youtube.com/watch?v=7oWip00iXbo&ab_channel=CurtJaimungal
- Jacob’s talk on “A New Formulation of Quantum Theory”: https://www.youtube.com/watch?v=sshJyD0aWXg
- The Stochastic-Quantum Correspondence (Jacob’s paper): https://arxiv.org/pdf/2302.10778
- McTaggart’s paper on time: https://philpapers.org/archive/MCTTUO.pdf
- Putnam’s paper on time and geometry: https://www.jstor.org/stable/2024493?origin=JSTOR-pdf
- Neil deGrasse Tyson on TOE: https://www.youtube.com/watch?v=HhWWlJFwTqs
- Einstein-Podolsky-Rosen paper: https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
- Grete Hermann’s paper on quantum mechanics in the philosophy of nature: https://cqi.inf.usi.ch/qic/grete_en.pdf
- John Bell’s paper on the Einstein-Podolsky-Rosen paradox: https://journals.aps.org/ppf/pdf/10.1103/PhysicsPhysiqueFizika.1.195
- Bell’s theorem without inequalities (paper): https://arxiv.org/pdf/quant-ph/0409190
- Quantum mysteries revisited (paper): https://www.physics.smu.edu/scalise/P5382fa15/Mermin1990a.pdf
- Quantum Theory by David Bohm (book): https://www.amazon.com/Quantum-Theory-Dover-Books-Physics/dp/0486659690
- Bohm’s second paper on quantum theory: https://journals.aps.org/pr/abstract/10.1103/PhysRev.85.180
- Dirac’s textbook on quantum mechanics: https://www.amazon.com/Principles-Quantum-Mechanics-International-Monographs/dp/0198520115
- Wigner’s paper on the mind-body question: https://www.scribd.com/doc/240712078/Eugen-Wigner-Remarks-on-the-Mind-body-Question

#science #physics #theoreticalphysics #quantumphysics

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
As a creator, I understand the importance of having the right tools to support your business growth.
Prior to using Shopify, it was far more complicated and convoluted.
There were different platforms, different systems, none of them meshed well together.
However, once we made that switch to Shopify, everything changed.
What I like best about Shopify is how seamless the entire process is from managing products
to tracking sales.
It's so much easier now and it's streamlined our operations considerably.
If you're serious about upgrading your business, get the same checkout we use with Shopify.
Sign up for your $1 per month trial period at Shopify.com slash theories, all lowercase.
Go to Shopify.com slash theories to upgrade your selling
today, that's Shopify.com slash theories.
Calling all sellers, Salesforce is hiring account executives
to join us on the cutting edge of technology.
Here, innovation isn't a buzzword.
It's a way of life.
You'll be solving customer challenges faster with agents,
winning with purpose,
and showing the world what AI was meant to be.
Let's create the agent-first future together.
Head to salesforce.com slash careers to learn more.
There was no wave function.
There was never a superposition.
There's never a need to get anything to collapse.
In this picture, some observable quantities
are reflecting things that are really there.
The people who gave us the biggest revolutions
in modern physics, quantum theory
and relativity, were all strongly
connected to philosophy.
Physicists have grappled with
the seemingly outlandish implications of
quantum theory that particles are purportedly in multiple places simultaneously and there's a mysterious
wave function that collapses upon measurement and a framework that requires so-called imaginary
numbers, etc.
I traveled to the oldest physics laboratory in the United States to meet at Harvard with
theoretical physicist and philosopher Jacob Barandes, who is a co-director of
graduate studies there, where we go into technical depth on his innovative
reformulation of quantum theory in terms of a more fundamental mechanics called Indivisible Stochastic Processes.
My name is Curt Jaimungal and this was part of my three-day tour of Harvard, Tufts, and
MIT, where I recorded five podcasts, one of them being with Jacob Barandes that you're
seeing now, which was actually over seven hours long,
so we're splitting it into two. The others are with Michael Levin, Anna Chownika, and Manolis Kellis.
There's also Professor William Hahn, a computer scientist, and that was filmed live at the MIT Media Lab. Subscribe to get notified.
Jacob's breakthrough theory raises new provocative questions such as what if quantum waves don't exist?
Did physics lose its soul by abandoning philosophy?
Does time flow differently in quantum physics?
And was Einstein right all along?
It's good to be here.
It's really lovely to see you again.
The last time we talked, I enjoyed it tremendously.
Yeah, me too.
So let's talk about physical philosophy versus the philosophy of physics.
People have heard about the philosophy of physics.
What is physical philosophy?
Yeah.
So my research has sort of two sides.
There's the physical philosophy side and there's the philosophical physics side.
Let me start with physical philosophy. So the way I describe physical philosophy, it's the use of results and ideas and discoveries
and theories from physics to address traditional questions
in philosophy, in particular in metaphysics.
So the kinds of questions that we're interested in here are questions about space and time,
the philosophy of time, causation.
There are very interesting connections between physics and causation, some of which we'll
hopefully have a chance to talk about today.
Also the philosophy of probability, which is a very subtle and very complicated area. And also the metaphysics of laws, which is a rich,
very interesting area of inquiry.
And so just thinking about what do our best
current physical theories bring to bear
on these traditional questions?
How do they constrain what we can say
about those sorts of things?
So here's a good example, a concrete example.
Philosophers, metaphysicians
have been thinking about the nature of time for a very, very long time, right? You go
back to Parmenides and Heraclitus and people thinking about, you know, is time something
that flows? Is time something that is just an illusion? And there's this very famous
paper, which maybe you can link to because it's a beautiful paper
and people should read it, by McTaggart, a philosopher,
about over 100 years ago on the nature of time.
And he introduces this terminology,
the A series, the B series, the C series,
these different ways of thinking about time
and how much structure it has.
The A series is this idea that events that take place in time can be classified
as past, present, and future, but that seems to like distinguish a notion of a present,
and is that really a sensible notion?
And there's the B series, which has less structure.
This just says that events can only be classified in terms of earlier or later, so there's like
a pairwise relationship between them only, and there's no distinguished notion of a present.
And the C series has even less structure.
It just says that there's a sense of an ordering of events,
that given any three events,
we can say which takes place between the other two,
but without privileging a direction in time.
And he explores all of these ideas in the essay,
but right around the same time,
Albert Einstein was developing special relativity,
and special relativity has some pretty serious implications
for how we think about the nature of time.
In particular, although many people have heard
of phenomena like time dilation,
the idea that observers moving in different states
of motion will have different senses in which time flows,
like the rate at which time flows for one observer may be different from the rate at which time flows
for another.
And maybe some people have heard of length contraction
or Lorentz contraction that spatial distances
in the direction of the change of reference frame
can be distorted.
Somewhat less well known is the breakdown of simultaneity,
which is this idea that observers
in different states of motion
will assign different events to be in their present.
So for an observer in one frame of motion, some collection of events may be in what an
observer identifies as their present moment.
But to an observer in a different frame of motion, their sense of the present is like
tilted.
It's like tilted a little bit so that things that the first observer would have said were
in the future are in the second observer's present, as are some things that might have been in the first
observer's past. And this feature of special relativity that simultaneity is not an invariantly
defined concept has profound implications for how we think about the philosophy of time.
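(Illustrative sketch, not from the conversation: to see the breakdown of simultaneity numerically, one can run two events through the standard Lorentz transformation. The velocity and coordinates below are arbitrary illustrative values, in units where c = 1.)

```python
import math

def lorentz(t, x, v):
    """Transform an event (t, x) into a frame moving at velocity v (units with c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return gamma * (t - v * x), gamma * (x - v * t)

# Two events that happen at the same time t = 0, but at different places, in the lab frame.
event_a = (0.0, 0.0)
event_b = (0.0, 1.0)

v = 0.6  # the second observer moves at 60% of the speed of light
t_a, _ = lorentz(*event_a, v)
t_b, _ = lorentz(*event_b, v)

print(f"Event A in the moving frame: t' = {t_a:+.3f}")
print(f"Event B in the moving frame: t' = {t_b:+.3f}")
# The two t' values differ (0.000 vs -0.750), so the moving observer does not
# regard the events as simultaneous: the "present" is frame-dependent.
```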
And these were explored by people like rather famously the philosopher
Hilary Putnam in the 60s, who wrote a really beautiful paper, which you should also link
to because it's just very beautifully argued. And he makes the case for what he calls four-dimensionalism.
You might call it eternalism, the idea that because different observers will disagree
on which events are in their present, if we just take the totality of all the things that everybody thinks are in their present,
it's basically all of space time.
If you look at observers at different locations, in different frames of motion,
doesn't that mean that everything in spacetime is just in some sense there already?
You know, the future is not a thing that has yet to happen.
The future is not a thing that is unfolding,
but our experience of the flow of time is merely psychological
and special relativity is actually telling us in some very strong sense that we're living
in an eternalist so-called block universe where everything is just there.
So this is a fantastic example of where the things we're learning from contemporary successful
physical theories, thoroughly empirically established physical theories, bring
to bear some very important constraints
on what you might've thought were
purely metaphysical questions.
So I call this general subject physical philosophy.
On the other hand, I also work in what I would call
philosophical physics and the name is supposed to resemble
like theoretical physics, mathematical physics,
computational physics.
It's a methodology for doing physics, not philosophy.
But rather than say mathematical physics,
where you're doing physics by stating axioms,
and proving theorems, or theoretical physics,
where you're formulating models
and then calculating predictions within the models
and comparing them with experiments,
or computational physics, where you're simulating physical processes in a computer and making predictions that
way.
Philosophical physics tries to make progress on physics using some of the tools that you
might traditionally have associated with philosophy.
So what are those kinds of tools?
Thought experiments, coming up with counter examples, sharpening definitions, sharpening questions, right?
Sometimes you're dealing with a problem in science or physics more specifically,
where the questions are not yet sharply formed enough that they will, you know,
you could subject them to experimental study.
Sometimes you have to do some work beforehand
to sharpen those questions.
That's the kind of thing that philosophers like to do.
Identifying implicit assumptions, hidden assumptions,
and that's something we'll be talking about
because some of my work is closely connected
with this idea of identifying implicit assumptions.
Assumptions that may get in the way
of making scientific progress,
subjecting ideas to a logical analysis, and also to what I call rigorous scrutiny.
Because in philosophy, we're not generally guided by empirical data or observation.
You might think that there's just, well, if you don't have empirical data, what are you doing?
Well, you're not without any tools at all, right? So one of the things that philosophers like to do
is just be really, really careful in how they talk about things, how
they define things, being very careful with every step of their logic,
stating their premises as clearly as possible so that if you find fault with their arguments,
you can identify where the problem is. It's just sharpening everything and subjecting things to rigorous scrutiny.
These are all tools and techniques that come from philosophy, and I call them philosophical
physics.
And this doesn't even include just taking ideas from philosophy, like actual just picking
up ideas from philosophy and seeing if they have some use in physics.
And I think there's a proud tradition of this kind of work.
I've talked to philosophers of science who've told me that the greatest philosopher of physics of the 20th century was Albert Einstein. And
when you look at, especially Einstein's earlier work, what is he doing? He's subjecting definitions
and ideas to rigorous scrutiny. I mean, how much time did he spend trying to pin down
exactly what we mean by an inertial reference frame? You might think inertial reference frames are like the scenery of Newtonian physics.
Like, what could you really gain by spending a lot of time really rigorously scrutinizing what is the precise definition of inertial reference frame
and scrutinizing when you know that you're actually in an inertial reference frame?
But think about how much mileage Einstein got out of thinking about inertial reference frames, which laws of physics should be the same in every inertial reference frame, realizing
that the speed of light is one of these laws of physics that should be the same in every
inertial reference frame.
And then, you know, what he described as the greatest idea he ever had, which
is realizing that, you know, it's impossible to distinguish the local short duration effects of a gravitational field
from being in a reference frame that is in uniform acceleration.
And that realization is what, you know, eventually evolves into the equivalence principle and
leads him into his theory of general relativity.
He has this idea in 1907, and he described it as the happiest idea of his life.
So and of course, I'll get into this more a little bit later,
but Einstein was thoroughly steeped in philosophy.
We'll talk a little bit about the greats of 20th century physics
and how much philosophy they did.
So I think there's a really a strong tradition of philosophical physics in history.
And I'm certainly not the only one who practices, I think, this discipline,
but I think there's really something to say about it.
I think it really does contribute something
to how we can make progress in physics.
And so that's the other side, I think, of what I do.
So what's the standard view of philosophy of physics
or philosophy in general from people
who consider themselves to be scientists, or who actually are practicing scientists?
So for instance, when I was speaking to Neil deGrasse Tyson, he was saying, well, philosophy,
what has it done?
Give me, Curt, an example of something that philosophy has contributed to modern-day
physics in the past 30 years?
Sure, in the past, thought experiments.
That's a lovely question.
I actually know Neil Tyson.
We worked together when I was in high school.
This is some interesting history.
So I grew up in New York City, and I used to go to the Museum of Natural History all the time.
It was one of my favorite places.
I mean, if anyone who's like listening to this has never been to the American Museum of Natural History,
you are missing out.
It is, it's like going to a magical fairyland for, you know, like science and it's amazing.
And I got lucky when I was in high school to intern at the museum and
there was a period of time when I was working in the astronomy department and I worked with Neil Tyson. It was amazing. I mean, he's great.
So, of course, you know, I do very, very politely disagree with that sentiment.
I'll give you a couple of examples.
So here are some things that have come out of philosophical thinking about physics, philosophy.
I mean, the lines are a little blurry here because obviously if you're doing philosophy that's very close to physics, one could accuse you of just doing physics.
But, okay, let me think of an example. This one is from the 80s. Okay, so 1982,
Zurek and Wootters, and then independently an absolutely, definitely philosopher of science, Dennis Dieks,
formulated what we now call the no-cloning theorem.
So the no-cloning theorem in quantum theory is a very simple, very beautiful theorem.
What it says is that if you were given, as traditionally formulated, a quantum state
in the form of a wave function or state vector,
just some object in Hilbert space that describes in a traditional sense
the state of your quantum system, and you don't do any measurements on it.
You've prepared it, but you haven't measured the position,
you're leaving it pristine.
Is there a way to build some kind of apparatus
that can make another system
be in exactly the same quantum state?
That is, can you, you know, if your first system was,
you know, some simple quantum system and it's in some state,
can you bring in a second quantum system of the same kind
and run both systems through some kind of machine
that will make the second system be in the same quantum state
as the first system every time, right, every time.
And the answer is you can't.
If you set the machine up so that it works some of the time,
then all you have to do is set up the first system
in a superposition of those possibilities
and what you'll find is that you cannot get the other system
to be in the right quantum state
that is also the same superposition.
This is called the no cloning theorem.
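(Illustrative sketch, not from the conversation, assuming only numpy: a minimal numerical version of the argument just described. A circuit that copies the two basis states perfectly, here a CNOT gate acting on the system plus a blank qubit, still fails to clone a superposition, because it produces an entangled state rather than two independent copies.)

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# CNOT on (system, blank) copies basis states: |0>|0> -> |0>|0>, |1>|0> -> |1>|1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def attempt_clone(psi):
    """Run the would-be copier on psi tensored with a blank |0> qubit."""
    return CNOT @ np.kron(psi, ket0)

# It works perfectly on the two basis states...
assert np.allclose(attempt_clone(ket0), np.kron(ket0, ket0))
assert np.allclose(attempt_clone(ket1), np.kron(ket1, ket1))

# ...but on a superposition it produces the entangled state (|00> + |11>)/sqrt(2),
# not the product of two copies, (|0>+|1>)/sqrt(2) tensored with itself.
plus = (ket0 + ket1) / np.sqrt(2)
print(np.allclose(attempt_clone(plus), np.kron(plus, plus)))  # False: cloning fails
```

The same failure occurs for any would-be cloner that acts linearly on the input state, which is the content of the theorem.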
And the no-cloning theorem is useful.
I mean, it certainly shows up all over the place in physics.
It shows up in high energy theory.
It shows up in all kinds of places.
I guess that's 1982.
I mean, that's a little bit outside the boundary of the 30 years.
What's happened since the 90s?
And we're not allowed to talk about your work yet.
Ah, yeah, we can't talk about work yet.
I mean, that's an interesting question.
I mean, from the point of view of like...
So it's a bit difficult because, like I said, the boundaries here are a little blurry.
If you think about people like
Martin Lazord or Erik Curiel or JB Manchak,
these are people who work in foundations of general relativity. And, you know, their work
is super mathematical. I mean, if you ask them, do you consider yourself first and foremost a
philosopher? Or I haven't asked Eric or I haven't asked all these people like what do you first think of yourself as?
But they certainly come to philosophy seminars they give philosophy talks
they certainly sound like philosophers and they're doing like
work on general relativity. They're proving things about space-time that
are
interesting facts. So I
you know, and then of course the lines
between quantum foundations and quantum information
are also very blurry.
So I'll tell you, here's an interesting story, okay?
You've all probably heard of qubits, right?
Qbit, right?
So a qubit is supposed to be a quantum bit.
And usually the way that we're,
the term is introduced, people will say
it's a portmanteau of quantum and bit,
right?
You know, bit, I think the term bit goes back to Claude Shannon.
That itself is a portmanteau of "binary digit."
You know, Claude Shannon introduced many of the ideas in information theory and communication
theory in the 40s, back when he was at Bell Labs.
And then the idea is that you have classical bits
and you also have these modern quantum bits or qubits.
But the name qubit is a very, it's a very funny word
because if you look at it, it's spelled Q-U-B-I-T.
You'd be hard pressed to think of many other words
in English that are a Q and a U
and then there's no other vowel,
just a consonant right away.
Why that very funny spelling? And I'm not saying I'm the first to notice this funny spelling.
In fact, there's a very famous physicist, David Mermin, who also is interested in quantum foundations,
among other things, and he doesn't like that spelling because he says it's like the spelling is really very non-standard.
I think he likes to write it as Q hyphen bit and not Q-U-B-I-T.
So where did that spelling come from?
The term was coined by Ben Schumacher at Kenyon College and Bill Wootters,
who I mentioned before, at Williams College.
The story is that Bill Wootters was visiting Kenyon College, and this was in Ohio.
And they were both driving to the airport in Columbus.
And they were talking about how they needed a new scheme for talking about quantum information.
This was in 1992, just outside of the 30-year mark. I'm sorry.
We're just a little over 30 years. And it was Bill, so I got this information from one of Ben's students, Mary Gerhardinger,
she told me this story.
Bill was in the car and he said, wouldn't it be funny if we called these things qubits,
because there's a unit of measure in the Bible called a cubit.
But in the Bible, it's spelled C-U-B-I-T.
It's a unit of measure.
It's like, I don't know, distance from your elbow to the tip of your finger.
And if you read Genesis, you'll hear God telling Noah to build the ark,
and it's supposed to be this many cubits long and this many cubits tall, right?
And they're like, but it starts with a C, because the Bible is classical.
Do you get it? Like, what's more classical than the Bible?
Let's just replace the C with a Q, and it will be a modern version of a cubit.
That's funny.
And they both thought it was hilarious.
As you know, on Theories of Everything, we delve into some of the most reality spiraling concepts
from theoretical physics and consciousness
to AI and emerging technologies, to stay informed in an ever-evolving landscape, I see The Economist
as a wellspring of insightful analysis and in-depth reporting on the various topics we
explore here and beyond.
The Economist's commitment to rigorous journalism means you get a clear picture of the world's
most significant developments.
Whether it's in scientific innovation or the shifting tectonic plates of global politics,
The Economist provides comprehensive coverage that goes beyond the headlines.
What sets The Economist apart is their ability to make complex issues accessible and engaging,
much like we strive to do in this podcast.
If you're passionate about expanding your knowledge and gaining a deeper understanding
of the forces that shape our world, then I highly recommend subscribing to The Economist.
It's an investment into intellectual growth. One that you won't regret. As a listener of
Toe, you get a special 20% off discount. Now you can enjoy The Economist and all it has to offer for less.
Head over to their website, www.economist.com slash toe to get started.
Thanks for tuning in and now back to our explorations of the mysteries of the universe.
And they both thought it was hilarious.
And so the name comes from this conversation that they had.
And Bill Wooders did a lot of work in quantum foundations.
Ben Schumacher worked with John Wheeler, who,
in addition to being a very great theoretical physicist
and a fantastic mentor who had many, many famous students
(Richard Feynman was one of his students,
Jacob Bekenstein, so many very famous students),
was also very philosophically curious. Hugh Everett of the Many Worlds Interpretation,
which hopefully we'll talk about, was also one of his students.
And so he really enjoyed having these really deep philosophical conversations,
and he created a real atmosphere of philosophical inquiry in his research group.
Zurek worked with John Wheeler. So does that count as a philosophical contribution?
I mean, Schumacher and Wootters were not at R1 universities.
They were not at major universities.
They were both at liberal arts colleges,
and they were interested in very foundational questions.
I mean, does that count as philosophy?
It's kind of hard to say, but...
Now, I guess you can go farther. I mean, if you're willing to go farther back
and take ideas that came out of philosophical thinking,
but that are still proving themselves to be super useful,
there's a lot more, right?
I mean, you could...
I mean, if you're just talking about quantum mechanics,
not even general relativity,
you know, the notion of entanglement goes back to philosophical
disputes between, you know, people like Schrodinger and Einstein. The Einstein-Podolsky-Rosen
paper from 1935, maybe the most cited paper Einstein's name is on, is an extended philosophical
argument. They lay out their premises. They lay out a very detailed philosophical argument, which we can talk about,
and, you know, they make a metaphysical claim
about quantum mechanics.
A physical and a metaphysical claim
about whether quantum mechanics could be considered complete,
even if it's practically, perfectly useful.
Their work, you know, the EPR argument, the Einstein-Podolsky-Rosen paper,
inspired a lot of people.
I'll get to who it inspired, but let me just say that before I get to that,
there's a paper by absolutely a philosopher, Grete Hermann.
So Grete Hermann was a philosopher, a student of Emmy Noether,
who, as a mathematician, made major contributions to physics:
symmetries and conservation laws. Noether's theorem is super important,
arguably the most important theorem in physics.
And so her student, Grete Hermann, she's a philosopher, she's a neo-Kantian, and she wrote a large
number of papers on physics and metaphysics and on the developing quantum theory.
And she wrote a paper, we read it in my philosophy of quantum mechanics class, and it's a paper
in which she is studying the nature of causation in quantum mechanics.
And she sets up this thought experiment.
It's a beautiful thought experiment, and the paper was published in 1935, several months
before the EPR paper.
It was widely read apparently.
It's likely that Einstein read this paper.
And the thought experiment that's used in the EPR paper bears a strong resemblance to
the thought experiment that's in Grete Hermann's paper. Okay, so like, you know, clear, interesting threads
going on in the 1930s.
Then of course, John Bell picks up the thread
and publishes a paper in 1964,
his famous no-go theorem paper,
the first appearance of what we now call Bell's theorem.
And that paper is called
On the Einstein-Podolsky-Rosen Paradox.
That's what he calls the paper.
He takes the EPR paper and he extends the argument into a statement about
what we now call hidden variables theories.
And you've got further results that build on Bell's Theorem.
You have the famous CHSH inequality, which comes in 1969.
That's Clauser, Horne, Shimony, and Holt, which is a somewhat more general version of the Bell
inequality.
In the 90s, so we're in the 90s now, you've got Greenberger, Horne, and Zeilinger, the
GHZ paper.
There's a beautiful version of this argument by David Mermin in the American Journal of Physics,
which I can also send you links to.
These are all beautiful papers to read.
And these papers all deal with entangled systems.
And in particular, Bell's paper and the GHZ paper deal with particular kinds of quantum states.
The GHZ paper introduces these GHZ states.
And GHZ states show up all over the place in atomic molecular optical physics today,
and in quantum information.
I feel like every time someone uses a GHZ state, or uses, you know,
Bell violating states or things like that in order to do something in quantum cryptography
or to certify the randomness of the quantum random number generator, they should have to pay royalties
to the field of philosophy of physics and quantum foundations.
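(Illustrative sketch, not from the conversation, assuming only numpy: for readers who want to see the kind of Bell violation being referenced, this computes the CHSH quantity for the two-qubit singlet state with the standard measurement angles. Local hidden-variable models are bounded by 2, while the quantum value reaches 2 times the square root of 2.)

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)

def spin(theta):
    """Spin observable along an angle theta in the x-z plane (eigenvalues +1 and -1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(theta_a, theta_b):
    """Quantum correlation <psi| A tensor B |psi> for the singlet state."""
    return np.real(singlet.conj() @ np.kron(spin(theta_a), spin(theta_b)) @ singlet)

# Standard CHSH settings: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # about 2.828 = 2*sqrt(2), beyond the local hidden-variable bound of 2
```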
The No Signaling Theorem is an important result that comes out of quantum foundations.
This is in the late 70s and early 80s, I think. One version of the argument is by the same people
who gave us the GRW spontaneous collapse theory, right?
And so, you know, and the no signaling theorem
is also a theorem that shows up all over the place.
But I think, you know, and I can make more examples.
I mean, there's quantum teleportation.
You know, in some of David Deutsch's early papers introducing some of the basic ideas
that led to quantum computing, he is very explicit in the papers about how what he's
trying to do in trying to imagine building a quantum computer is confirm that many worlds
is the correct interpretation of quantum mechanics, right? So he's motivated by interpretational philosophical questions to a substantial degree.
But I think probably my favorite example is decoherence.
So whenever you propose a new way to think about quantum mechanics,
people will often say, do we really need this?
Doesn't decoherence solve this problem?
There's a physics solution to this problem.
We don't need philosophy, do we?
But decoherence comes out of philosophy, right?
So, um...
How?
Okay, so there are some rudiments of ideas
from, you know, that we might associate with decoherence
that go all the way back to like Mott in the 1920s.
But the first serious, rigorous formulation of decoherence
that I've ever been able to find shows up in chapter 22. So these are the end chapters of
David Bohm's textbook Quantum Theory. This is
1951, when he publishes this textbook. It's before he introduces his pilot wave theory, and
You know, he's a very philosophically curious person in addition to being a physicist
And in his book on quantum mechanics, he doesn't just want to teach quantum mechanics.
He also wants to deal with some of these lingering questions about how the measurement process
is supposed to work.
So in the later chapters of his book, he goes through the measurement process in as much
detail as he is able to, based on the axioms that were available, the Dirac von Neumann axioms, the axioms
associated with Paul Dirac and John von Neumann from 1930 and 1932, respectively. And, you know, he
tries to formulate a measurement process, he has a system, a quantum system that is to be measured, it's in
some superposition, he brings in like a physical measuring device,
he models the whole thing with a system to be measured
and the measuring device, he lets them interact,
he ends up with this superposition.
And then he argues that some of the probabilistic quantities
one might wanna calculate,
the averages one might wanna calculate,
they exhibit these very strange quantum effects,
these so-called interference effects.
But once the measurement proceeds far enough
and the measuring device interacts sufficiently strongly,
and I don't remember in the book
whether he includes the environment or not,
I'd have to remember exactly what he does there,
but basically enough degrees of freedom
from outside the system get involved,
you get what he calls, the phrasing he uses
is the destruction of interference in the process of measurement.
That's what he calls it: the destruction of interference in the process of measurement, which is decoherence.
That's a more extended version of the term decoherence.
And that's exactly what decoherence is.
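(Illustrative sketch, not from the conversation, and not Bohm's own calculation: a toy model of the "destruction of interference." When the measured system gets correlated with environment states whose mutual overlap shrinks as more degrees of freedom get involved, the off-diagonal, interference-carrying entries of the system's reduced density matrix are suppressed while the diagonal probabilities are untouched. The per-mode overlap of 0.7 is an arbitrary illustrative number; only numpy is assumed.)

```python
import numpy as np

# System starts in an equal superposition of two pointer states |0> and |1>.
c0, c1 = 1 / np.sqrt(2), 1 / np.sqrt(2)

def reduced_density_matrix(env_overlap):
    """
    After a measurement-like interaction the joint state is c0|0>|E0> + c1|1>|E1>.
    Tracing out the environment leaves a system density matrix whose off-diagonal
    (interference) entries are multiplied by the environment overlap <E1|E0>.
    """
    return np.array([[abs(c0) ** 2, c0 * np.conj(c1) * env_overlap],
                     [np.conj(c0) * c1 * np.conj(env_overlap), abs(c1) ** 2]])

for n_modes in [0, 1, 5, 20]:
    # Each environmental degree of freedom contributes a factor of 0.7 to the overlap,
    # so the total overlap shrinks geometrically as more of them get involved.
    rho = reduced_density_matrix(0.7 ** n_modes)
    print(f"{n_modes:2d} environment modes: interference term = {abs(rho[0, 1]):.4f}")
# The diagonal entries (the probabilities) never change; only the interference decays.
```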
So he writes this book. He apparently has conversations with Albert Einstein about it.
This is 1951, 1952. Albert Einstein is still alive at that point.
And he has some conversations with Albert Einstein
and Albert Einstein is dissatisfied with his approach.
Says he should go work on it a little more.
And he goes back and he independently rediscovers
some work that was done by de Broglie in the late 1920s.
De Broglie had developed the first sort of early
pilot wave theories. Bohm independently redevelops these.
Eventually he and De Broglie connect.
Now we call these theories the De Broglie-Bohm theory,
or sometimes just Bohmian mechanics.
And in this theory, you've got not just the wave function evolving
in some high dimensional space where the wave functions live,
but the wave function is also guiding around
these corpuscular particle-like things,
the Bohmian particles, the Bohmian projectiles,
the Bohmian corpuscles.
And, you know, one obvious question you could ask is,
well, if you have both these projectiles
and you have a pilot wave, and Bell and Bohm
in these papers specifically say that the pilot wave is real,
metaphysically real to the same extent
that the particle locations are.
Then when you do something like the double slit experiment
and you send a pilot wave through these two slits
of an experiment, right, and they land on a screen,
why doesn't the screen light up in all the places
where the wave hits it, right?
So the wave diffracts, you get these intensity peaks,
these famous interference peaks in the wave function.
The projectile is guided to only one of the locations.
When you actually run the experiment,
you only see one landing site.
But the question is, well, but what about the other empty waves, empty shells,
the parts of the wave function that land on the screen and don't have a particle in them?
Why don't they hit something and do something?
And Bohm wrote two papers. It was a pair of papers.
In the second paper, he goes through the measurement process.
He goes through this example.
And he basically uses decoherence to explain why we only see one landing site.
So decoherence was developed in this textbook and then put to use almost immediately in
a very applied sense to get his pilot wave theory off the ground and to show
that it was empirically adequate, at least for non-relativistic systems of finitely many particles.
The thread is then picked up again in the 70s by Dieter Zeh, who did some very deep and important work on decoherence.
Apparently, he had some career repercussions for working on such a philosophical foundational area. And then decades later, physicists, practical working physicists working with, you know,
AMO systems, atomic, molecular, and optical systems, physicists trying to implement real-world
quantum computers, trying to implement unitary gates and actually get simple quantum computers
to work.
But even physicists in all kinds of other areas, physicists trying to understand early universe cosmology,
physicists trying to understand systems and condensed matter, people are now worrying about decoherence all the time.
You read papers, every paper talks about what is the decoherence time scale of this or that.
Decoherence is now a major component of what we do in physics today.
It's happening all the time right now.
And again, I would just say that maybe we should pay some royalties to poor David Bohm,
who was run out of the United States.
He spent a lot of his career in Brazil and couldn't get back to the United States,
and to Zeh, who suffered some career repercussions.
And there's this attitude that, oh, thinking about foundational philosophical questions
is a waste of time.
How could it be a waste of time,
given all of the fruit that it has produced?
So, you know, what I would say is,
and this is just a general message
to anyone who's thinking about how best to contribute
to the development of science.
The field of philosophy of physics,
the field of, I would say, like,
the more philosophical side of quantum foundations,
because there are also parts of quantum foundations
that are much closer to, like,
practical quantum information type stuff.
But the sort of more, like, philosophical side of it,
this is an area that has had very few people work on it, right?
There are very few permanent tenure track academic positions
that are devoted to this kind of work.
A lot of the people who are doing this work got their academic positions
to do something else and then transitioned into this work.
So there's very few people responsible for a lot of these results.
There is very little research funding.
And so if you were the sort of person watching this and wondering,
like, where would
funding make the biggest difference in physics?
Should we devote more funding to already very well funded areas of physics? Would an additional million dollars make a huge difference to some of
these areas? I can tell you that
like philosophy of physics, the intersection of physics and philosophy, the more philosophical side of quantum foundations, and also the more philosophical side of the foundations of general relativity and the
foundations of many other important areas of physics.
That's where I think every dollar would go very far.
So if you're in the market for endowing professorships, that is a way to make a huge impact on the
field.
Great. Speaking of the foundations of quantum mechanics,
are they inconsistent as they stand in textbook quantum theory?
So, if you take textbook quantum theory to be...
quantum theory is formulated according to the Dirac-von Neumann axioms.
Again, that's Paul Dirac as he formulated them basically in 1930
in his famous book, Principles of Quantum Mechanics,
and John von Neumann in 1932,
Mathematical Foundations of Quantum Mechanics.
People divide the axioms up in various ways.
I'll give a super quick discussion.
They say that to every quantum system,
we associate a kind of a vector space
called a Hilbert space, a space of vectors
involving complex numbers.
The elements of this vector space are called state vectors.
Loosely speaking, they can be also called wave functions.
There's some terminological nuance around that.
And these are supposed to represent in some sense,
the state of a quantum system.
More generally, you have to use what's called
a density operator, which is a little more complicated,
but we'll put that aside.
And then the next axiom is that as time evolves,
these quantum states, whether state vectors
or wave functions or density matrices,
are supposed to evolve according to what's called
a unitary time evolution.
If it's sufficiently smooth and nice,
it can be written as a differential equation,
an equation that tells you moment to moment,
in a Markovian way, what each next state will be given the
present state.
We call that differential equation when it's for wave functions, we call it the Schrödinger
equation.
If it's density operators, we call it the von Neumann equation.
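(For reference, the standard form of the two equations just named:)

```latex
i\hbar\,\frac{d}{dt}\,\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle
\qquad \text{(Schr\"odinger equation, for state vectors)}

i\hbar\,\frac{d}{dt}\,\hat{\rho}(t) = \bigl[\hat{H},\hat{\rho}(t)\bigr]
\qquad \text{(von Neumann equation, for density operators)}
```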
And then there's all these axioms of measurements.
The observable things that you could measure about a system are represented by operators
or matrices, these self-adjoint things called self-adjoint operators, matrices.
These are the observables of your theory.
The possible results you can get when you measure one of them
is called an eigenvalue.
It's one of the eigenvalues of these operators.
The probability with which you'll get that is given by this formula
that takes the state, the quantum state of the system,
and takes, you know, what we call a projection operator, a piece of this operator,
you put them together, it gives you the probability
you'll get that result when you do the measurements.
And then once you've done the measurements,
this quantum state of the system is projected or collapses
to reflect the result and ensure that you'll get the same result
if you measure it again right away.
Those are the traditional Dirac-von Neumann axioms,
and people slice them up in different ways.
But I like to think of them as those five axioms, right? Hilbert spaces, unitary time evolution, observables as self-adjoint operators,
the Born rule for probabilities of measurement outcomes, and then collapse.
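(Illustrative sketch, not from the conversation: a compact single-qubit walk-through of how the five axioms just listed get used, with the Hamiltonian, evolution time, and measured observable all chosen arbitrarily for the example. Only numpy is assumed.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Axiom 1: states of a qubit live in a 2-dimensional complex Hilbert space.
ket0 = np.array([1, 0], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Axiom 2: unitary time evolution, U = exp(-i H t).  With H proportional to X,
# exp(-i theta X) = cos(theta) I - i sin(theta) X exactly, so no matrix exponential is needed.
theta = np.pi / 3
U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * X
psi = U @ ket0

# Axiom 3: observables are self-adjoint operators; here we measure Z (eigenvalues +1 and -1).
eigvals, eigvecs = np.linalg.eigh(Z)

# Axiom 4 (Born rule): P(outcome i) = |<eigenvector_i | psi>|^2,
# and the expectation value is the probability-weighted average of the eigenvalues.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
probs = probs / probs.sum()  # guard against floating-point rounding
print("probabilities:", np.round(probs, 3))
print("expectation value <Z>:", np.round(np.sum(eigvals * probs), 3))

# Axiom 5 (collapse): after the measurement the state is projected onto the
# eigenvector belonging to whichever outcome actually occurred.
i = rng.choice(len(eigvals), p=probs)
print("measured eigenvalue:", eigvals[i])
print("post-measurement state:", np.round(eigvecs[:, i], 3))
```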
There are problems with these axioms.
Not when dealing with, you know, microscopic systems.
Microscopic systems, things tend
to work out pretty well. And when people say quantum mechanics works just great, they mean
microscopic systems. If you want to do a tabletop experiment, an atomic experiment or laser
experiment, you're working with tabletop systems, microscopic systems, theory works great. Works
great. If you limit yourself to these particular examples,
what's the problem?
If you wanna do a particle physics experiment,
a high energy physics experiment,
you're building a giant particle accelerator,
you're talking about the Large Hadron Collider
or something like that,
you have beams of subatomic particles
that are flying together,
and there's debris that comes out,
you collect all the debris.
These are all microscopic systems.
Quantum mechanics makes beautiful predictions
about these results.
So you're not gonna see any discrepancies
for these kinds of systems,
at least to the extent that the Dirac-von Neumann axioms
give a self-consistent description
for microscopic systems, which they appear to do.
Perfectly empirically adequate description,
the theory works great.
So what's the problem?
Why does anyone have any issue?
We talked before in our last conversation,
and I talked about this thought experiment
that was proposed, I think originally by Hugh Everett
in his long form thesis in 1956, 57,
when he was a graduate student at Princeton,
working for John Wheeler.
Not the one he ultimately published.
He published a shorter version.
But in the longer version, he opens it up with a thought experiment that has since been
usually attributed to Eugene Wigner, who was also at Princeton.
And we talked about how Wigner had this paper called On the Mind-Body Question in the early
1960s.
But in his thought experiment, we imagine trying to do
quantum mechanics with a system that is not small.
It is not small, it is not ultra cold,
it is not, you know, pristine.
The quantum system is something big enough to be
a measuring device itself or even an observer.
So again, this thought experiment is called
the Wigner's Friend Thought Experiment.
Wigner is on the outside of a box.
The box is perfectly sealed.
Wigner is one observer.
Inside the box is Wigner's friend.
Wigner's friend is also an observer, a big system, but perfectly sealed inside the box.
And Wigner's friend does a measurement on some superposed microscopic quantum system inside the box.
And now we have a problem.
Because we can describe the situation in two ways.
We can treat Wigner's friend,
the person in the sealed box,
as a thing that is an observer, that does measurements,
and then we're supposed to use the measurement axioms
and the collapse axiom.
Or because the person's sealed in a box and there is a second observer on the outside
who has not done a measurement on the box or the contents, maybe we should treat the
box and its contents as not subject to the collapse axiom.
And now all of a sudden we have an ambiguity in the middle of the theory.
The Dirac-von Neumann axioms are simply ambiguous in this circumstance.
They don't render a judgment of what you're supposed to do.
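(Illustrative sketch, not from the conversation: a bare-bones toy model of the ambiguity. Modeling the friend as a single "memory" qubit, applying the collapse axiom inside the box gives a statistical mixture, while treating the sealed box unitarily gives an entangled pure state, and an interference-type observable on the box distinguishes the two accounts. Only numpy is assumed.)

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The friend (a second "memory" qubit) measures a system qubit prepared in (|0>+|1>)/sqrt(2).

# Description 1: apply the collapse axiom inside the box.
# The box ends up in a 50/50 statistical mixture of |0>|0> and |1>|1>.
k00, k11 = np.kron(ket0, ket0), np.kron(ket1, ket1)
rho_collapse = 0.5 * np.outer(k00, k00.conj()) + 0.5 * np.outer(k11, k11.conj())

# Description 2: treat the sealed box unitarily, with no collapse.
# The box ends up in the entangled pure state (|00> + |11>)/sqrt(2).
psi = (k00 + k11) / np.sqrt(2)
rho_unitary = np.outer(psi, psi.conj())

# An interference-type observable on the whole box tells the two accounts apart.
X = np.array([[0, 1], [1, 0]], dtype=complex)
XX = np.kron(X, X)
print("with collapse:    <XX> =", np.real(np.trace(rho_collapse @ XX)))  # 0.0
print("without collapse: <XX> =", np.real(np.trace(rho_unitary @ XX)))   # 1.0
# The axioms, as stated, do not say which of the two descriptions to use.
```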
And when we spoke last time,
I laid out a whole menu of possible things you could do,
and each of them leads you down a different road, right,
for how you would resolve this problem.
So this is just an inconsistency.
And I wanna make clear here,
there's a difference between a theory being unintuitive
or exotic or eccentric, and a theory being inconsistent.
So let me give you an analogy, a real world analogy.
You've got a friend and your friend is lovable
and trustworthy and nice and always there for you.
But your friend is eccentric, okay?
What do you do?
What you do is you love your friend
because we all love eccentric people.
Like we're all eccentric to some degree.
I'm eccentric, we're all eccentric.
We like eccentric people.
Eccentric people are interesting.
They're interesting because they're often surprising
and they're creative and you never know
what you're gonna get and it's always very interesting.
Even if it's sometimes like confusing what they do.
But it's okay, right? They're just eccentric.
We like eccentric people.
And we've got eccentric physical theories.
Newtonian physics, you know, despite its reputation of being,
oh, it's Newtonian, it's classical, it's when everything made sense.
There's a lot of stuff about Newtonian physics that doesn't make a lot of intuitive sense, right?
You know, Newtonian physics says that a system, a body in motion
will just stay in motion unless it's acted upon by something.
It doesn't come to rest.
That's very unintuitive.
We intuitively feel like things should come to rest.
Aristotle believed that the natural state
of all objects was at rest.
And in Newtonian physics, it's just not.
You need an explanation for why a thing would come to rest.
But there are a lot of other examples like this
in Newtonian physics.
We have all this intuition about how circular motion should work.
We have this intuition that when you're, you know, moving water around in a bucket
that there's a centrifugal force pulling it in.
That's intuitive, but that's like not how it really works.
Gyroscopes are where all intuition breaks down, right?
That's all Newtonian physics.
Newtonian physics is filled with things that are a little bit unintuitive.
And things only get worse from there.
I mean, special relativity is super unintuitive.
We talked about time dilation,
the idea that the time progresses differently
for observers in different frames of motion,
that what we call all the events that are simultaneous,
like they're all happening now,
our notion of now is relative and different people will disagree on which events are now and
which events are in the future and the past.
That's very unintuitive.
And don't even get me started about general relativity.
So I love general relativity.
I have taught general relativity here.
So we have a graduate level course in general relativity, Physics 210.
It's like one of my favorite courses to teach.
I've been teaching it for over 10 years.
I learn something new about general relativity every time.
I'm always surprised.
General relativity, that class is like an eccentric friend.
Every time I teach it, it's like you learn something
totally new and surprising.
And you're like, really, it really works this way.
It really does.
So, general relativity is another example of a theory
that is just really, really deeply unintuitive.
I mean, we always set aside like a half hour every time I teach the class just to talk about the weirdness of black holes.
And this is not even when you start worrying about quantum effects, this is just treating black holes as regular classical objects.
Black holes are really unintuitive in a lot of ways.
And so I'll just set aside half an hour. I'll just open the floor and I'll just ask students to ask me whatever weird questions they have about black holes. Like, a student will be like, if I stick my arm in a black hole, can I pull it out? Or all these weird things you could ask, like, what does it look like if I send someone to a black hole? Can they ever come back? All these like weird questions you could ask. So, general relativity is super, super unintuitive.
Jackson-level electromagnetism. So one of the courses I teach is Jackson-level electromagnetism.
There are a lot of very unintuitive things that happen in electromagnetism also.
So it's not that we don't have theories that are eccentric, we do.
I should say that all of these theories do have places where they break down.
The breakdown in a physical theory is called, well, a singularity in the theory.
Singularities aren't necessarily geometric point-like things.
They're just places where your equations stop working.
Electromagnetism has this famous divergence
of the self-energy of point particles.
General relativity famously has singularities.
Everyone's heard of the singularities of the Big Bang
and black holes and various other places
where general relativity breaks down.
Newtonian mechanics has weird singular behavior
in certain kinds of systems.
Yeah, yep.
There's a famous paper by Xia in the 1990s
on a five body system that exhibits singular behavior.
And it was originally predicted that this could exist
by Poincare, I think a century earlier.
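(For reference, a concrete instance of the electromagnetic breakdown mentioned a moment ago: the standard textbook point-charge self-energy calculation. The field energy stored outside a radius a around a point charge q is)

```latex
U(a) = \int_{a}^{\infty} \frac{\varepsilon_0}{2}\,\lvert\mathbf{E}\rvert^{2}\,4\pi r^{2}\,dr
     = \int_{a}^{\infty} \frac{q^{2}}{8\pi\varepsilon_0\,r^{2}}\,dr
     = \frac{q^{2}}{8\pi\varepsilon_0\,a}
     \;\to\; \infty \quad\text{as } a \to 0,
```

which is the divergence being referred to.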
So, you know, all these theories have places where they break down,
and what do we do as philosophers,
as scientists, as physicists, whatever, we look at these theories and we go, okay, the theories work in these regimes.
Sometimes they make very unintuitive claims or predictions, and we're okay with that.
We like weird, cool physics, as long as it's self-consistent.
And there are certain places in which the theory breaks down,
and we'll need to either replace the theory with something else if we're lucky.
If we're unlucky, maybe we won't find a better theory that will replace it, whatever.
Quantum mechanics is kind of like that, right?
There are regimes in which you're working with microscopic systems and quantum mechanics
is nice and self-consistent.
We don't run into these ambiguities or inconsistencies.
It's a little unintuitive in some of these situations, sure.
But when you confront something like the Wigner's friend thought experiment and other thought experiments people have proposed over the years, now you're
talking not about things being unintuitive. You're talking about a singularity in the
theory. You're talking about an inconsistency. You're talking about something where the theory
is just broken. No one would say, well, you know, the self-energy of a point particle
in electromagnetism is just unintuitive.
People would say, this is clearly something wrong that we need to fix.
And all I'm saying is the same thing is true of quantum mechanics.
There are regimes in which it works well, and there are other situations that we can
extrapolate it to where things seem to break down.
These situations do entail an extrapolation of the theory from microscopic physics to
macroscopic physics.
I mean, we're assuming you can go from, you know, the level of an angstrom, 10 to the
negative 10 meters, that's a tenth of a billionth of a meter, all the way up to a one meter scale,
the size of a human.
You know, to get the Wigner's friend thought experiment, you have to have an extrapolation of the theory
to that size.
Either you can do that extrapolation and you run into the Wigner's friend problem,
or you can't, but if you can't,
well then there must be some other place
where the theory breaks down.
In any event, something goes wrong.
And we just have to deal with that.
We have to confront that problem and manage it.
But I would go a step farther than that.
It's not just that the theory seems to have places where it's ambiguous
about what it's predicting or where it's inconsistent.
The theory also only makes a very narrow kind of prediction,
according to the Dirac-von Neumann axioms.
So the Dirac-von Neumann axioms, and again, these are the axioms you'll read
if you pick up Griffiths's textbook on quantum mechanics
or Shankar's textbook on quantum mechanics
or Sakurai's, you know, all the standard textbooks,
Townsend's book on quantum mechanics,
Gottfried's book on quantum mechanics, Liboff.
The theory predicts measurement outcomes.
It predicts what you'll see on the readings,
on the dials, on the gauges, on the displays of measuring devices.
And it predicts the probability with which
you will see those readings on measuring devices.
You can compute averages in these theories.
They're called expectation values, but those averages are
definitionally, axiomatically, statistically weighted averages
of numerical measurement results, weighted by
measurement outcome probabilities.
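(Written out, that definition is the standard expectation-value formula, added here for reference:)

```latex
\langle \hat{A} \rangle_{\psi}
  = \sum_i \lambda_i \, P(\lambda_i)
  = \sum_i \lambda_i \, \bigl|\langle \lambda_i \mid \psi \rangle\bigr|^{2}
  = \langle \psi \mid \hat{A} \mid \psi \rangle ,
```

that is, a statistically weighted average of the possible measurement results, the eigenvalues, weighted by their Born-rule probabilities.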
We're talking about a very narrow kind of phenomenon, right?
A very narrow category of phenomena.
The Dirac-von-Neumann formulation of quantum mechanics
predicts measurements, what happens with measurements.
It's a very narrow slice,
a very narrow category of phenomena.
When we spoke last time,
we talked about how there are all other kinds of phenomena
that seem to be happening around us.
In the deep past, you know, primordial gases mixing in the early universe.
Today, birds foraging, people falling in love, all that stuff.
There's like lots of phenomena that seem to be taking place
that all lie, strictly speaking, outside the axiomatic ambit
of the Dirac-von Neumann axioms.
So what are we to do with this?
We either say that Dirac-von Neumann quantum mechanics simply is not giving a complete description of nature,
which is kind of sad, and if it's not, where is the outer boundary? Now, there are some philosophers who defend this point of view.
Nancy Cartwright is a famous philosopher who has argued that maybe we just have different theories in different domains, that the universe is dappled.
They're just like different theories for different things.
But it would certainly be an interesting intellectual
exercise and it would certainly be nice if we could extend
quantum theory to describe more of the world.
If we do that, now we have a job to do.
We either have to explain why all these other phenomena
are in fact measurements, despite the lack of measuring
devices or whatever, we have to somehow show that all of these things are in fact measurements, despite the lack of measuring devices or whatever.
We have to somehow show that all these things are in fact measurements,
and therefore in fact lie within the axiomatic ambit of theory.
Some people propose that. But the onus is on them to show that it really works.
And it's not clear that it does. Or we need to somehow extend the theory,
change the axioms in order to encompass more of the world.
And that's part of what I'm interested in doing.
Now, there are people who will say,
wait a second, decoherence.
What about decoherence, right?
I mean, sure, okay, fine.
The philosophers, or at least physicists
who cared a lot about philosophy,
they gave us decoherence, but we have it now.
Doesn't decoherence solve all these problems?
Well, Bohm certainly didn't think it'd solve these problems.
He introduced decoherence, and it was insufficient.
And ultimately, he introduced his pilot wave corpuscular theory
in order to actually get a result.
The reason decoherence doesn't do the job is decoherence takes a wave function
or state vector, more generally what's called the density matrix,
the density operator.
And it shows that under certain circumstances, when it's evolving interacting with an environment
in the right way or interacting with a measuring device that's coupled to an environment the
right way, the density matrix will change in a certain way.
It'll become approximately what we call a diagonal matrix in a certain representation.
We call this a basis.
And you know, there'll be some entries on the sort of diagonal of the matrix and the other
entries will be approximately zero.
At this point we're supposed to say the measurement's been done and there's an outcome.
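As a rough sketch of what that diagonalization looks like, here is a toy two-level example; the decay rate 1/τ is an assumed stand-in for the details of the environment coupling:

```latex
% Toy qubit density matrix decohering in a pointer basis:
\rho(t) =
\begin{pmatrix}
|\alpha|^2 & \alpha\beta^{*}\, e^{-t/\tau} \\
\alpha^{*}\beta\, e^{-t/\tau} & |\beta|^2
\end{pmatrix}
\;\xrightarrow{\;\; t \gg \tau \;\;}\;
\begin{pmatrix}
|\alpha|^2 & 0 \\
0 & |\beta|^2
\end{pmatrix}
```

The off-diagonal coherences get suppressed, but nothing in this evolution picks out one of the two diagonal outcomes, which is exactly the point made next.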
The problem is there's nothing in the dynamics, not in the Schrodinger equation, not in the
generalizations of the Schrodinger equation, not in the Lindblad equation or the quantum
channels that you use to describe this. There's no known dynamics within normal quantum theory
that singles out one of the outcomes.
You still need to apply the Born rule
to get a probability out of this.
And you still need to apply a projection postulate
to single out one outcome.
There's nothing in the dynamics that will do that for you.
And from time to time, people have proposed
that maybe there's some sufficiently complicated
quantum standard textbook quantum dynamics
that will in fact cause the system
to single out one outcome, but this is impossible
because of the no communication theorem,
the no signaling theorem that I mentioned before.
Collapsing down to individual states
looks superficially like a non-local process,
and you can't get a non-local process happening from what we call a local Hamiltonian.
And this is guaranteed by the no signaling theorem.
So there's definitely something that's just not working here.
And pinning all this down, not just hand waving and saying,
oh, decoherence, don't ask me any more questions,
pinning this down is a worthwhile exercise.
Subjecting this problem to rigorous scrutiny may bear fruit.
It has borne fruit already, and I would argue it will continue to bear fruit going forward.
Hi, everyone. Hope you're enjoying today's episode.
If you're hungry for deeper dives into physics, AI, consciousness, philosophy,
along with my personal reflections, you'll find it all on my Substack.
Subscribers get first access to new episodes, new posts as well, behind-the-scenes insights, and the chance to be a part of a thriving community of like-minded people.
By joining, you'll directly be supporting my work and helping keep these conversations at the cutting edge. So click the link on screen here. Hit subscribe and let's keep pushing the boundaries of knowledge together.
Thank you and enjoy the show. Just so you know, if you're listening, it's C-U-R-T-J-A-I-M-U-N-G-A-L dot org. CurtJaimungal dot org.
Now, going forward, I'd like to talk about your approach, your indivisible stochastic processes.
Prior to that, okay, and just so people know, there are three sources.
One is the previous interview that we've had with Jacob, and that's on screen right now.
It's over two hours long and it's quite in-depth.
Another source is I have a substack post where I cover my interpretation of Jacob's
theory.
And then the third is obviously your papers directly.
All three of these will be linked on screen and in the description.
Prior to that, I want to talk about Wigner's thought experiment, his friend.
How is that not just Schrodinger's cat but replacing the cat with a friend? You're completely right.
I mean, if you think of the cat as an observer,
and the quantum system being observed is the radioactive atomic nucleus
that is momentarily in a superposition of decayed and not decayed,
it is Wigner's Friend's thought experiment.
When, I mean, the origins of the Schrodinger's cat thought experiment go back to discussions
between Einstein and Schrodinger.
Einstein had a somewhat less playful example involving gunpowder that could either go off
or not go off.
And Schrodinger in this 1935 paper replaces this with this famous cat.
But Schrodinger doesn't describe the cat as an observer.
I think the real innovation is just rethinking the experiments.
You could, I think, absolutely think of the cat as Wigner's friend.
If you think that cats count as observers, which I do,
and friends, cats are friends and observers, absolutely,
then it really, it's more a question of perspective, right?
If you don't think of the cat as a candidate observer, then you're not doing the Wigner's
friend thought experiment.
If you think of the cat as just as legitimate an observer as a person, despite not having a
PhD... there are always obligatory jokes that go with the Wigner's friend thought experiment.
You're always supposed to say something like, but don't call the humane society.
I mean, I don't know why people always do that.
Scott Aaronson has some joke about why it is that people always make jokes about this.
But yes, if you think of the cat as an observer,
then it is just the Wigner's Friend Thought Experiment.
I think part of the reason why people like to separate these two
is in some versions of the Wigner's Friend Thought Experiment,
you're supposed to be able to ask a question to Wigner's friend, like slide a little note in and ask questions.
And if you ask the wrong kind of question, you'll cause a collapse.
And if you ask the right kind of a question, you don't get a collapse.
And you can't do that with a cat. Cats don't talk.
At least all the cats I've ever met don't talk.
So there are some situations in which actually we do like it to be a sentient and sapient
observer who can communicate.
But broadly speaking, they are very similar thought experiments.
Okay.
Now, just before we move on, and we're going to get to some of the audience questions that
people have asked both in the comments and then also on Twitter threads and so on about the previous podcast.
Okay.
You mentioned Putnam earlier and then it's this eternalism because you can slice up any
moment of now.
So if there are multiple moments of now, then is it just that every single moment of now
exists?
Okay.
What are the counter arguments to that?
It's tough.
The Putnam argument for four-dimensionalism or eternalism is, it's a tough argument to
refute, but there have been attempts.
And I would recommend that people watching this read some of these papers because, you
know,
they're beautifully written by very smart, amazing people.
So again, just to rehearse this argument, if you think of space for all purposes just
as like a horizontal line and time as a vertical line, you can visualize all of space-time
as like a, like think of graph paper with like the horizontal axis is space,
all three dimensions of space somehow like projected down into one dimension
and then a vertical dimension, right? The idea is that what an observer calls now
might be a horizontal line in this picture.
All the events at all points in space that are happening at the same vertical time coordinate share the same now.
And what special relativity suggests is that if you're in a state of motion,
then that slice is tilted a little bit.
It's not quite horizontal anymore relative to the first slice, and so things that are further in the future
compared with one observer are
in the present of the second observer, and so forth.
One way I like to think about this is the pancake model of space-time.
So imagine a stack of pancakes.
Each pancake is supposed to be all of space, the whole spatial universe at one moment in time.
And think of this in the pre-relativistic idea, like the Newtonian conception that there's just a well-defined notion
of what is all of space at every moment in time,
you've got a stack of pancakes,
and let's say one of the pancakes is the hot pancake.
The hot pancake is the now pancake.
It's the pancake that is what's happening right now.
And somehow the hot pancake is somehow incrementing, right?
And this is the passage of time.
Now you run into some very deep questions
like how quickly are the pancakes moving?
They're moving at one second per second.
Does one second per second make sense?
If you divide a second by a second,
isn't that just unitless?
Like, what are we talking about?
How does, okay, but let's put all that aside for a second.
Just imagine that there is some notion of a moving now,
this sort of metaphysically presentist idea
that the present exists, it's a well-defined thing,
and it is somehow incrementing forward.
What special relativity seems to suggest is that
observers in different states of motion will slice the stack of pancakes slightly diagonally.
And so an observer who is moving at some, you know, it doesn't even have to be super fast,
because if you talk about the whole universe, even slow motion will actually, you know,
ultimately produce a clear discrepancy.
But, you know, the slices are now tilted.
And if you're cutting the stack of pancakes
at a slight tilt, how can there be a hot pancake anymore?
The two observers don't even agree on whether the slices
are horizontal slices or diagonal slices.
How could there be a metaphysically invariant notion
that one of them in one particular slicing
is in fact the hot pancake?
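In formulas, the tilt is just the relativity of simultaneity under a standard Lorentz boost (textbook special relativity, nothing specific to this discussion):

```latex
% Lorentz transformation of the time coordinate under a boost along x with speed v:
t' = \gamma\left(t - \frac{v\,x}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% Two events simultaneous in the first frame (\Delta t = 0) but spatially separated (\Delta x \neq 0)
% are not simultaneous in the moving frame:
\Delta t' = -\,\gamma\,\frac{v\,\Delta x}{c^2} \neq 0
```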
So you can see why this is like a very difficult argument to refute.
But there have been attempts.
So Brad Skow at MIT has a book with a particular perspective on this.
I encourage people to take a look at this approach.
Another approach is from Jenann Ismael, who is at Johns Hopkins University,
a philosopher of physics, philosopher of science.
She has a book called How Physics Makes Us Free.
And she has a very interesting argument about how you might get around this problem.
I'm not going to be able to do justice to the argument here, but I would recommend people read the book,
because she's a fantastic writer, and I think people will find it very interesting to read this book.
There's a paper from not long ago by David Builes at Princeton.
He's a metaphysician who also does some work in philosophy of science.
And he has an argument that, you know, putting aside the special relativity argument,
he's like, you know, maybe we can't, maybe it's a difficult argument to deal with,
but at least rather than try to refute that negative argument, an argument against presentism, maybe there are good arguments
for presentism. We should focus on those. And his argument in that paper is that many
of our fundamental laws of physics appear to be Markovian. And Markovian laws are by
definition laws that take a notion of a present state and tell you the later states without
really worrying about anything in the past. And he's like, you know, why should the laws be Markovian if we're living in an eternalist
universe, if we live in an eternalist four dimensional universe where the whole block
is just like there?
Why would our laws only care about the present state?
There'd be no explanation for this.
But if only the present really exists and the past doesn't exist, that would give an
explanation for why our laws only care about the present.
Unfortunately, as I'm sure you and people who are watching
will surmise, I have some issues with this argument
because the whole point of the formulation of quantum
mechanics we're gonna talk about is that actually the laws
may not fundamentally be Markovian at all.
And if they're not Markovian, then all of a sudden
that argument doesn't maybe work so well.
Yeah.
It's my understanding, we're going to get to it, that your evolution laws depend on
the past, and it's not as if they depend on the future.
Right.
So in the eternalist state, when they're saying that why would we privilege the present, and
then you're saying, well, in yours, you're not, but you're also not privileging the future.
You actually are privileging the past.
So you still don't take an eternalist's block.
So this is a subtlety, right?
The fundamental laws of physics that we know about today, to a very good approximation,
have a feature called time-reversal invariance, or time symmetry.
In Newtonian mechanics, famously, if you run a system through
some particular process and then
someone plays a reversed version of it, you watch a video of
it being rewound at normal
speed and you watch the whole thing happen in reverse.
Although what you see may look weird or unlikely, there doesn't appear
to be any violation
of Newtonian physics in it.
So famous example, a teacup falls off of the table,
shatters, you could also have all the pieces
have just the right initial conditions
so that they have some initial velocity
that's just right to bounce them together
and they just land perfectly together
and they reseal together and form a
teacup, and the teacup has just the right kinetic energy that it bounces back up and lands on the table. It's unlikely, but it doesn't actually
violate any of the laws of Newtonian physics. And this works out pretty well until you get to the Standard Model. There are
some systems involving
kaons in the Standard Model that exhibit a slight violation of time-reversal invariance.
Technically it's CP invariance that's violated.
The symmetry in which you replace particles
and antiparticles is called C symmetry,
the symmetry in which you do a parity or spatial reflection
of everything is called parity symmetry,
and then there's time reversal.
And the standard model,
our best theory of fundamental particle physics
that's based on quantum field theory,
has the feature that it is exactly CPT invariant.
If you take any process in the standard model
and do a combined particle goes to anti-particle,
vice versa and parity and time reversal transformation,
everything stays the same.
And there are some systems that violate CP symmetry.
And then by the CPT invariance,
that means they violate T or time-reversal symmetry.
But these are rare processes.
It's not clear that they have anything to do
with the macroscopic distinction of future and past
that we tend to see.
There are some conjectures about this,
but no one has a firm argument
that these two things are closely related.
So the idea of a physical theory like we're talking about that somehow privileges the past but not the future in an eternalist universe seems a little bit odd.
I should say right off the bat that the indivisible stochastic formulation does not fundamentally
privilege the past or the future.
You could just as well formulate a quantum system that's going the other way.
The theory doesn't specifically,
it's just that when you actually consider
a particular system,
it'll be a system in which the past plays a different role
from the future, but you could also consider a system
in which the future plays a different role from the past.
We call it not a fundamental breaking of time-reversal
invariance, but a spontaneous
breaking of time-reversal invariance.
Not a breaking in the sense that fundamentally
nature is somehow picking out future versus past,
but just that in a given instantiation,
in a given situation,
nature has to pick one particular direction,
and it picks one over the other.
Like, the laws of Newtonian mechanics
fundamentally are rotation invariant,
but you are not rotation invariant.
Your atoms formed in a particular shape,
and whatever shape they formed in
picked out a direction in space.
So you spontaneously break rotation invariance,
even though fundamentally the laws
of Newtonian physics are rotation invariant.
The laws of quantum physics,
according to this indivisible stochastic approach,
are fundamentally time reversal invariant,
but in any given model,
they will typically favor one direction of time over the other. I should have said there is one other way,
at least one other way I can think of, to save
the idea of a flow of time, a directedness to time.
And this is to say, well, okay, special relativity
is a very good theory, but we're living in
an expanding universe. We're living in a space-time that appears to go back
to some kind of big bang hypersurface,
some initial big bang everywhere.
And, you know, distances between objects are growing
with time in some particular way.
It turns out that in these kinds of,
we call them cosmologies.
So cosmology is the subject of the study of the universe, but a particular like space-time
model that solves Einstein's field equations at the level of the whole universe is called
a cosmology.
And we seem to be living in approximately what's called an Einstein-de Sitter
cosmology, described by the FLRW model.
In this model, there is in fact a preferred slicing of space, right?
You just take every point in the universe
and ask like how long ago was the Big Bang from that point?
And take all the points that are all 13.78 billion years
since the Big Bang
and they form a hypersurface.
And if you just do that, you'll actually get kind of
like a preferred slicing of the universe.
And you might go, that seems to contradict
special relativity, well,
but general relativity is not special relativity.
General relativity is a different theory.
And in general relativity, different space times
can globally, like at the scale of the whole space time,
spontaneously break the symmetry of the slicings,
the so-called Lorentz symmetry.
Because there's a preferred Big Bang hypersurface,
there's a preferred foliation or slicing of space time
into slices of now.
And so just the history of the universe,
the way that it happened to be instantiated from the Big Bang,
appears to have picked out a preferred way
to slice the universe.
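For reference, the line element being described is the standard FLRW metric, in textbook notation; the constant-t hypersurfaces are the preferred slices of cosmic time:

```latex
% FLRW metric: cosmic time t and a scale factor a(t); spatial distances grow with a(t).
ds^2 = -c^2\, dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2} + r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \right]
% The hypersurfaces of constant cosmic time t (e.g., a fixed number of years after the Big Bang)
% define the preferred foliation being referred to.
```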
And then you could say, okay, well,
then that really is the fundamental notion of time.
Time is really flowing from one such surface to the next.
And sure, if you zoom in really close to like a planet
or a star or some very localized thing,
things look like special relativity.
You don't notice the whole shape of the cosmology
and all the different reference frames look the same
and all the slicings look like they're on the same footing.
But on a global cosmological scale,
there in fact is a preferred way to slice things
and time really flows from one slice to the other
in some fundamental sense in that picture.
But this leads to some other really deep questions.
Is the geometry of our cosmology a fundamental fact?
Or is it merely a contingency?
Could the universe have formed in lots of other ways?
And it just happened that it formed in this particular way
with this preferred slicing.
And if it was contingent, if the universe could have formed
in lots of ways or could have been eternal,
but happened to form in this particular way,
can you metaphysically say that time flows in some fundamental sense
based on what was a contingent fact about the universe, right?
Like, if you think the flow of time is some fundamental feature of reality,
should it depend on accidents of history?
No pun intended, but you get my point, right? So, just to close the circle, that's like one other way people have thought about maybe
restoring a notion of the flow of time.
Okay.
Sorry, that was a lot, but yeah.
Okay, briefly, before we close that circle, you mentioned that in GR you can spontaneously
break Lorentz symmetry.
What's the difference between breaking a symmetry and spontaneously breaking a symmetry? Okay, so...
Um...
If you take a theory, and the basic, like, equations that formulate laws of that theory,
simply fail to have a certain symmetry.
And by a symmetry here, I mean a transformation that would,
maybe for certain kinds of theories,
certain kind of laws, leave the laws unchanged. For this particular theory, they do not leave the laws
unchanged. So let me give an example. Let's suppose that the universe is described by
the Newtonian physics near Earth's surface, and that is the whole universe. There is in fact no Earth.
The universe is infinitely big,
and it is like Earth's surface forever, right?
Earth's surface just goes on in every direction forever.
There is the ground, and it never ends.
You can go as far as you want, and it doesn't wrap around.
It just goes on forever, right?
There's no planets or anything like that.
The whole universe is just life near an infinitely big surface.
That's all that there ever is.
And that's fundamentally all that there is.
It's not an accident of history.
That's just how the universe is.
It was fundamentally like its existence is just this infinite ground that goes forever.
And gravity points down.
And that's a law of nature, a completely fundamental law of nature that gravity points down.
This is a theory that fundamentally breaks three-dimensional rotation invariance.
The basic laws of this theory include that there is a force called gravity that points in a particular direction and
if you take any system in this
universe and rotate it in a way that is not parallel to the ground,
the laws look different now. Right? Things fall sideways now. Like, the theory, the laws are not invariant under
this transformation. We would call that an explicit breaking of rotation symmetry.
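In symbols, that toy law and its failure of rotation invariance look like this (standard Newtonian notation):

```latex
% Toy law: gravity always points "down" along a fixed unit vector \hat{z}.
\vec{F}_{\text{grav}} = -\, m g \,\hat{z}
% A rotation R that does not preserve \hat{z} changes the law itself:
R\,\hat{z} \neq \hat{z} \;\;\Rightarrow\;\; \vec{F}_{\text{grav}} \to -\, m g \, R\hat{z} \neq \vec{F}_{\text{grav}},
% so rotation invariance fails at the level of the basic equations: an explicit breaking.
```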
I see. Okay. So I guess terminologically, I'm confused because breaking to me seems
like it was there before and then now it's broken. So breaking in this instance just
means it fails to have it to begin with.
That's right. Fails to have it to begin with, that's the better way to say it.
I guess the terminology is because these are often symmetries that were there in earlier theories
and then maybe in a deeper theory the symmetry is gone.
So we say that like, that kaons violate time reversal symmetry, right?
We call it a violation or breaking because before we discovered these particles,
we thought it was a symmetry and it's broken by kaons, right?
So I think maybe that's one way to think about it.
Or maybe it's that we have a lot of theories
in which certain symmetries are approximately true,
but they're like not exactly true.
And then we can think of certain particular things
in the theory that violate it.
Whereas broadly speaking, it's not violated.
That's entirely fair.
But the example of spontaneous symmetry breaking would be like
the actual Earth, right?
The actual earth is not an infinite surface.
It's a sphere in space.
And when you're on the earth,
you do feel like gravity is pointing
in a particular direction,
but this is just an accident of earth being there.
If you could delete the earth
and we're just floating in space,
then suddenly you would have the full rotational invariance back, right? There'd be no notion of up or down, or left.
Well, left or right is actually subtle,
because that's parity invariance, which is a different thing.
But I mean, there's no preferred like direction in space
if Earth isn't there.
Earth's presence by an accident of just,
that's the way this particular corner of the universe
was instantiated, not at the level of fundamentally
changing laws of physics, but Earth being there and us living on the Earth makes us think
that a symmetry that is fundamentally there, rotation invariance,
full three-dimensional rotation invariance, is in fact not there.
It's hidden from us.
Some people say spontaneous symmetry hiding is a better term
than spontaneous symmetry breaking.
But yeah, so this just points to this distinction between,
am I really saying a certain symmetry is not there
in the fundamental statement of the theory,
or is it just hidden or missing because of the way that a particular system has instantiated the theory?
Okay, so I didn't know that, for instance, when you were talking about molecules that form you
and the molecules have some rotational invariance, but you don't, thankfully, have that rotational invariance.
I have a rotational invariance.
As far as we can tell.
But that forming of you was not spontaneous.
So the person who's watching or listening is thinking, well, it took quite some time
to form Jacob.
Jacob didn't-
The word spontaneous is a term of art, yeah.
Okay.
Because I thought spontaneous symmetry breaking had to do with you have a potential well and
then you get to the minimum of it and you produce some Goldstone bosons.
And I thought it was just for that case.
I didn't realize it was any time that a symmetry was broken,
you call it spontaneously broken.
When it's broken by the way a particular system happens to form
or a particular solution to the theory.
So another way to, here's another way to think about spontaneous
symmetry breaking.
There is a puzzle called Shape by Shape.
It's this little, it's a square,
it's this delightful little puzzle.
You've got these yellow shapes and these orange shapes,
and you have a picture you're supposed to create.
And you take all the shapes
and you put them into the square
and you try to fit them all in the square.
And they have to fit exactly in the square with no gaps.
And they have to replicate the picture you see.
It's like a yellow background
with an orange shape in the middle,
and you have to replicate that.
And the setup of this game is mirror symmetric.
It's a square, the playing field is a square,
which is perfectly mirror symmetric.
And although some of the pieces are what we call chiral,
chiral means that they're handed.
There's like, the pieces look like a left-handed piece
or right-handed piece.
It turns out they're not really chiral, because you can just flip them over,
and then you discover that fact.
But in any particular way that they're flipped, they pick out what seems like one direction.
So, you know, and there's like an equal number of pieces.
Everything about the setup of this game is mirror symmetric.
You can lay out the board, lay out all the pieces, lay them out in a way that, you know,
you look at it in a picture and you just flip the thing over
and it looks exactly the same.
And you could even have a picture you're trying to make.
And this picture you're trying to make
is also mirror symmetric, right?
You look at the picture,
it's some totally symmetric looking figure.
And so you're like, okay, I should be able to
take my symmetric puzzle pieces
and create a symmetric picture. And I should be able to do it by putting the pieces
into a configuration that is mirror symmetric.
And it turns out some cases you can't.
In some cases, all the solutions to this problem,
despite the fact that the problem,
the laws of this system are mirror symmetric,
none of its solutions are mirror symmetric.
Once you've solved it, you discover that the solutions are always lopsided.
There's like different pieces on one side versus the other, and there's
no solution that is balanced and symmetric. We would say this is a system that entails
spontaneous symmetry breaking. The equations, the fundamental laws are symmetric, but all
the possible ways to solve it are not. Now, you could also have systems, maybe there's a puzzle in which it can be solved in a symmetric way
and it can also be solved in a non-symmetric way.
We would still call the solutions that are not symmetric, but still solve the puzzle,
we'd still call them spontaneous breaking solutions.
So, this is a phenomenon that happens all over the place.
There's another example that we've, you know, one of my colleagues here,
Cumrun Vafa, a professor of high-energy theoretical physics, works in string theory.
He has a lovely book called Puzzles to Unravel the Universe, I think.
It's connected to a seminar course he teaches here.
And he has this puzzle he really likes.
It's this puzzle where you've got four cities
that are on the corners of a square.
And the question is, how can you connect them with roads?
So you can get from any one city to any other city,
driving on roads, using the minimum amount of road length or pavement possible.
This problem is completely rotation symmetric.
It's a square, right? It was stated in a way that didn't privilege any direction,
and you can always rotate the square any way you want.
It looks the same.
So you might think that the solution is going to be, I don't know, an X,
but, you know, an X doesn't...
An X works because you can make an X,
and then you could go to the middle of the X, and you could get any city.
It turns out that Xs use too much pavement.
The solutions that minimize the amount of pavement used
are these kind of like double Y solutions.
And there's one that goes horizontally
and one that goes vertically.
And these minimize the amount of pavement
and they both violate the symmetry of the problem.
So this is spontaneous symmetry breaking,
and we see it all over the place.
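As a quick numerical check of that claim, here is a small sketch in plain Python, with the four cities placed on a unit square; the "double Y" is the standard Steiner-tree solution, whose total length for a unit square works out to 1 + sqrt(3):

```python
import math

# Four cities on the corners of a unit square.
# Option 1: the "X" solution -- two diagonals meeting in the center.
x_length = 2 * math.sqrt(2)  # each diagonal has length sqrt(2)

# Option 2: the "double Y" (Steiner tree) solution -- two interior junction
# points on the horizontal midline, with roads meeting at 120-degree angles.
# For a unit square this gives total length 1 + sqrt(3).
steiner_length = 1 + math.sqrt(3)

print(f"X solution length:        {x_length:.4f}")        # ~2.8284
print(f"double-Y solution length: {steiner_length:.4f}")  # ~2.7321
# The double-Y uses less pavement, and it singles out a horizontal (or vertical)
# direction even though the problem itself is symmetric under 90-degree rotations.
```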
If you've ever wondered,
how can the universe look so asymmetric
given that the fundamental laws of physics
appear to have so many symmetries?
Fundamental laws of physics don't seem to privilege
direction to a very good approximation.
They don't seem to privilege handedness,
left versus right handedness.
They don't seem to privilege like the direction of time.
And yet the universe looks so asymmetric.
This is basically all just a huge set of examples
of spontaneous symmetry breaking.
You can have a fundamental system
where the laws or the equations are fundamentally
invariant under a bunch of symmetries,
but when you actually write down the solutions to them,
every molecular configuration
is a solution to the standard model.
Many of them simply fail to have all the symmetries
of the underlying theory.
Okay, so we've made the audience salivate long enough now. So tell the audience and myself
again a recapitulation of last podcast about what indivisible stochastic processes are.
And if you feel like relating it to Bohmian mechanics, because people already know what
Bohmian mechanics are, you've covered it, then feel free to do so as a bridge.
Sounds great, yeah. So the top line version of this is, you know, there's this joke about the television show Seinfeld being a show about nothing.
This is a theory about something. It's a theory about phenomena happening.
So again, we had this problem in the Dirac-von Neumann formulation of quantum
mechanics, the textbook formulation, that the only category of phenomena we're talking
about is this narrow category of measurement outcomes, whereas we have this much bigger
category of phenomena that's going to be happening all over the place. I call this the category
problem. It's distinct from the famous measurement problem of quantum mechanics. So it'd be nice
to have a theory in which stuff is just happening. Now, this is not the only approach that does this, and later on I'll talk about
pilot wave theory, Bohmian mechanics, the Everettian or many worlds approach,
spontaneous collapse approaches. There are other
reformulations of quantum mechanics in which phenomena
are happening, and they all, in various ways, address this category
problem that is
not adequately addressed by standard theory.
So one way to think about this is this is yet one more way to actually have phenomena
happening in a broad sense beyond merely the narrow category of measurement outcomes.
Another way to think about this is it's a way to make the world safe for good old-fashioned probability theory.
So when we do quantum mechanics according to the Hilbert space Dirac-von Neumann formulation,
we have these very exotic mathematical entities, these vectors or density operators and Hilbert spaces and the Born rule,
and it feels very different from ordinary probability theory. Probabilities do show up, but there are many things
that you just can't do using probability theory.
You have to use this much more ornate, esoteric apparatus,
this formal apparatus in this sort of Hilbert space language.
In this approach, we restore our ability to do
quantum mechanics using good old-fashioned probability theory.
That's one other way to think about what we're trying to accomplish with all of this.
But let me actually go one step deeper before I lay out how it works.
Another way to think about this is there are two long-running prevailing assumptions about
what physical theories are supposed to be like.
One assumption is that laws of a theory should be Markovian.
Laws of a theory should take some notion of a present physical state
and then tell you what happens next.
If your theory is time-reversible, the laws are time-reversible,
then it should also be able to tell you what would happen previously, at least in principle.
Another of the two assumptions is that there's an all-or-nothing deal
when it comes to observables.
The things we can observe about a theory are either all there,
they're all, when we measure some observable, we are passively revealing some existing feature
or property of our system, in which case we should be able
to describe these things with any kind of probability
distribution we want, including joint probabilities,
where we can say, what's the probability
that this observable has this pre-existing value
and that observable has that pre-existing value
and that observable, and we can put a probability distribution on all of these things in some simple way.
And if we can't, then we just give up.
If we can't, then it's all out the window.
Can't do it.
There are good reasons to think that we can't do this in quantum mechanics.
There's a famous no-go theorem, originally proved by Bell, but because of the way that
it was published,
the first version that came into print, I think, was by Kochen and Specker in the 1960s.
It's called the Kochen-Specker no-go theorem.
And the Kochen-Specker theorem basically says that there are some quantum systems
in which this is impossible.
There are some systems in which you cannot assume all observable things
have prior values that you are merely passively revealing.
And there's a beautiful way to explain it.
A very simple version, much simpler than the original version, is due to Asher Peres
in the 1990s.
Imagine you're playing a weird version of Tic-Tac-Toe.
This is a very strange version of Tic-Tac-Toe.
Here's how it's going to work.
You know Tic-Tac-Toe?
That's what we call it in America.
I know it's called noughts and crosses
or whatever in different places.
But Tic-Tac-Toe, you've got a grid,
it's got nine empty squares.
We're not gonna play it the normal way.
We're gonna play it the following way.
I am going to close my eyes and I'm going to imagine
an arrangement of O's and X's on this board.
I'm gonna close my eyes and I'm gonna tell you,
I promise you, I can visualize, I see it,
I see a bunch of O's and X's.
It's in my head.
Believe me, I really know what it is.
And what you're gonna do is you're gonna call out
a single row or a single column.
That's it.
And if your row or column that you call out
has an odd number of X's, you win.
If it has an even number of X's, you lose.
And you only get one try.
If you guess and you fail, then we're done.
You lose that round.
We can play again, but then I have to come up
with a new board, okay?
So super simple game.
So the way that the game would work,
and when I teach my philosophy quantum mechanics class,
we do this, we do this example.
I say, I've got the board, okay, and I'm gonna go to,
first person,
you pick a row or column, the person says,
second row, and I write X, O, X.
Sorry, even number of X's, you lose.
We erase it, I say, I'm coming up with a new board,
and the second person says,
I'm gonna pick the second row again,
and this time the second row is O, O, O.
Sorry, even number of X's, zero in this case, you lose.
And then I erase it, I come up with a new board,
the next person goes,
and they keep trying different rows and columns.
What they discover is that every time they pick a row,
they always get an even number of Xs.
First row, it's always an even number of Xs.
Second row, third row, always an even number of Xs.
When they pick columns,
the first column always an even number of Xs,
the second column always an even number of Xs.
But the third column always has an odd number of X's, every time.
And so they learn this, and they just keep picking the last column, and they always win.
But then they go, well, wait a second, this doesn't make sense.
If you've really thought of this board in your head, then you're saying that you've
thought of a board in your head where every row has an even number of X's.
The first two columns also have an even number of X's, but the third column has an odd number of Xs.
This is impossible.
It can't be, because even plus even plus even is even,
if all the rows secretly in my head
have an even number of X's.
Like, it can't be just a static image in your head.
Right, there can't be a static image in my head
in which all the rows have an even number of X's
and the first two columns have an even number
but the third has an odd number, because row plus row plus row
gives an even total, while column plus column plus column would give an odd total.
That's clearly incompatible.
You must be creating the results
only when we ask about them.
You can't already have a preexisting board in your mind
and I go, you win.
You're right.
But there are quantum systems with this feature
with nine observables, nine observables with the feature that the rows all have
an even number and the first two columns have an even number
but the last one is an odd number.
And you look at the system and you go,
there's no way this system could already know beforehand
what it was gonna reveal.
The measurement process must bring about,
at least in some cases, the results.
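Spelled out as arithmetic, the counting argument behind the classical impossibility is just a parity check:

```latex
% Let n_{ij} \in \{0,1\} mark an X in row i, column j of the 3x3 board. Count the total number of X's two ways:
N \;=\; \sum_{i=1}^{3}\Big(\textstyle\sum_{j} n_{ij}\Big)
  \;=\; \text{even} + \text{even} + \text{even} \;=\; \text{even},
\qquad
N \;=\; \sum_{j=1}^{3}\Big(\textstyle\sum_{i} n_{ij}\Big)
  \;=\; \text{even} + \text{even} + \text{odd} \;=\; \text{odd}.
% The same N cannot be both even and odd, so no pre-assigned board satisfies all six constraints at once.
```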
So the hard-line way to read this is to say, well, if some results
are the result of the measurement process and don't merely passively reveal a pre-existing
situation, then there are simply no pre-existing results at all. It's all or nothing. They're
either all there or none of them are there. That is also something I would push back on.
So again, Markovianity in the laws, this assumption that laws should be things like differential
equations that take the present state and give you later states.
And second, there's this all or nothing relationship, either all the things you could observe are
there, waiting to be seen, can be assigned joint probability distributions as needed,
or they're just like not there, there's just nothing there.
And these are two things that are challenged in this approach.
Both of these things are dropped, the Markovianity and this assumption.
In this picture, the fundamental laws of nature are not Markovian
and some observable quantities are reflecting things that are really there
and others are emergent effects of the interaction between
the measuring device and the system being measured.
This is one way in which the theory is actually quite similar to Bohmian mechanics.
People have known for a very long time
that in Bohmian mechanics, some things you observe
like where your particles are,
are revealing preexisting facts of the matter.
Bell gave a word for things that were really out there.
He called them not observables, but beables.
It's the way that the system can be,
like be-able instead of observable. Some people read it and mispronounce it, but it's be-able. Whereas other things you can
observe are not really reflecting a thing that was there. They're just emergent features
of the story. I call these emergibles instead of beables. To an external measuring device, they
look just as real as a beable. But really, what you're seeing in a measuring device is this sort
of emergent pattern.
It's not really reflecting something that's fundamentally there.
Now, Bohm mechanics does this.
In Bohmian mechanics, you know, the positions of particles are beables.
You measure them and you're really seeing where the particles were.
But when you measure like the momentum of your particle,
the momentum you actually see in your experiment is not literally the pre-existing momentum,
it's this sort of emergent effect of the interaction with the system.
And there's a paper you can look up on the arXiv called Naive Realism about Operators.
It's from 1996, and it was written by Daumer, Dürr, Goldstein, and Zanghì.
And it's specifically about this thing, that to think that every self-adjoint operator
that in textbook quantum mechanics, Dirac quantum mechanics, we would associate with
the observable, is revealing something that pre-exists in the system is actually, as they
put it, too naive.
You can really have some things that are there and some things that are not there, and it's
totally fine.
It's not a serious problem.
And I'll come back to this point a little bit later because some of the questions that
we've gotten from people have been about like basis dependence and in this picture, can
you really measure everything?
And the answer is you can.
It's just that some of them are going to be be-ables and some of them are going to be
emerge-ables.
Okay, so we drop those two things.
There's an old saying, I can't quite pin down who said it.
Some people attribute it to John Wheeler again, that if you could explain quantum mechanics,
you should be able to say it in one sentence.
I don't know if Wheeler was the one who originally said this.
People can source this.
But here it is in one sentence.
In the indivisible formulation of quantum mechanics,
every system has an actual configuration
belonging to some menu of possible configurations we call the configuration space.
I do have to put an and in, and there's a comma and an and,
it's still one sentence, and the dynamics,
the dynamical rules, the laws by which
the configuration changes with time is characterized
by a sparse set of directed conditional probabilities
that generically fail to be divisible in time.
That's the whole picture.
And in principle, you can get everything out of that one sentence.
Everything is now just mathematics.
So there's no more ands.
There's no more ands.
That's it.
That's the picture.
Let's now, let's talk about like what, so if you want to talk in terms of Occam's razor,
like this is, you know, we'll come back to Occam's razor in a little bit.
Please.
But, but, just say like this is, this is like axiomatically pretty simple.
There's no statement about Hilbert space or whatever.
Everything is phrased in terms of things we know and are familiar with,
configurations that physically are.
Which configurations depend on the system?
If you want to model a system of particles, you use particle configurations. If you want
to use fields, you use field configurations. If you want to use whatever discrete registers
in a qubit memory register, you'd use those. Whatever it is you want to use, discrete,
continuous, whatever you want. And then the laws themselves are phrased as probabilistic
statements, but they're classical probabilities. They're classical, ordinary probabilities. Well, classical may be too strong a word, because maybe you
might demand that classical time-evolving probabilities must be Markovian.
I don't, I think it's a little bit prejudicial to say that they have to be that way.
But certainly they're just normal probabilities.
They're not weird Hilbert space things.
They don't have complex numbers in them.
They're just regular probabilities.
They sum to one.
They do the things that probabilities are supposed to do.
That's the picture.
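To make "fail to be divisible in time" concrete, here is a minimal numerical sketch, with an assumed toy two-level Hamiltonian, of transition probabilities built from a unitary (anticipating the unistochastic case discussed later) that do not chain together in the Markovian way:

```python
import numpy as np
from scipy.linalg import expm

# A toy Hamiltonian for a two-configuration system (assumed purely for illustration).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def transition_matrix(t):
    """Conditional probabilities Gamma_ij(t) = |U_ij(t)|^2 with U(t) = exp(-iHt).
    Unitarity makes each column of Gamma sum to one, so Gamma(t) is a valid stochastic matrix."""
    U = expm(-1j * H * t)
    return np.abs(U) ** 2

dt = 0.3
gamma_2dt = transition_matrix(2 * dt)                           # direct transition over time 2*dt
gamma_composed = transition_matrix(dt) @ transition_matrix(dt)  # naive Markovian composition

print(np.round(gamma_2dt, 4))
print(np.round(gamma_composed, 4))
# These disagree: the two-step transition probabilities are not obtained by chaining
# the single-step ones through the intermediate time, which is the flavor of
# non-Markovian, indivisible behavior being described here.
```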
And then in the first of the papers, the paper called the Stochastic Quantum
Correspondence, which is on the archive, it's now just a set of mathematical mappings.
You take this picture and you just do these sort of mathematical transformations
and you end up with the same story
but formulated in this Hilbert space language
with time evolution operators and state vectors
and density operators and self-adjoint operators.
This picture is just like a different mathematical phrasing
of this indivisible picture.
And there's a lovely analogy to be found here
between these indivisible stochastic systems
that have this Hilbert space formulation
that's very exotic and has complex numbers in it
and it's got all these weird symmetries,
it's got basis invariance,
you have all these different bases you can use.
There's a beautiful analogy between that connection,
the stochastic-quantum correspondence,
and the so-called Hamiltonian formulation
of classical physics.
So Newtonian systems, classical Newtonian systems,
interestingly are not quite Markovian, right?
Think about it, right?
You can't predict how a particle will behave knowing only its position.
You actually have to know its position,
and you have to know its infinitesimally earlier position.
You actually need two pieces of information. Now, we don't usually formulate
it that way, because it's a little cumbersome to talk about where it is now and where it
was infinitesimally before, a dt earlier, an infinitesimal of calculus.
That's kind of cumbersome to do. So instead what we do is we subtract the
two and divide by DT and call that a velocity.
But it's the same information.
If you want to like numerically simulate a Newtonian system and you like discretize time
to do it, you would specify where it is now, where it was a moment ago, and you'd plug
this into a discretized version of Newton's second law of force equals mass times acceleration
and you would predict the behavior of the system.
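Here is a minimal sketch of that kind of discretized simulation, with an assumed toy force law; note that each update needs both the current and the previous position:

```python
def simulate(x_prev, x_now, force, mass, dt, steps):
    """Discretized Newton's second law: F = m*a, with
    a ~ (x_next - 2*x_now + x_prev) / dt**2, solved for x_next.
    The data needed at each step is the pair (previous position, current position)."""
    trajectory = [x_prev, x_now]
    for _ in range(steps):
        x_next = 2 * x_now - x_prev + (force(x_now) / mass) * dt**2
        trajectory.append(x_next)
        x_prev, x_now = x_now, x_next
    return trajectory

# Example: a harmonic force F(x) = -k*x (an assumed toy force law).
k, m, dt = 1.0, 1.0, 0.01
traj = simulate(x_prev=1.0, x_now=1.0, force=lambda x: -k * x,
                mass=m, dt=dt, steps=1000)
print(traj[-1])  # oscillates, staying near the interval [-1, 1]
```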
So even Newtonian physics is a little bit non-Markovian.
But we do tricks.
We don't like having it be non-Markovian.
So what we do is we replace the two positions
at the two infinitesimally adjacent times
with a position and a so-called instantaneous velocity,
which is kind of this trick
so that we can treat these two things
as both being at the same time.
And now we've made the formalism look Markovian.
Now we have a state at the initial time that consists of both the coordinate, so
called coordinate, the position and the velocity.
But we need twice the variables now.
We've increased the number of variables in order to phrase this thing as a
Markovian system, and this is an old thing you can do with, in stochastic processes,
the theory of stochastic processes,
if you consider a system that is a little bit non-Markovian,
so not extremely non-Markovian,
like these indivisible processes we're talking about,
but a system in which you need to know the present state
and maybe just one earlier state,
maybe two earlier states,
to then figure out how this system will evolve.
You can take these systems
and you can treat them like they're Markovian by just increasing
what you mean by the state space.
I see.
Like you triple the state space
so that now you can just treat these three things
as if they're all on one slice
and now you've turned it to a Markovian system.
And so there's this lore that you can take
any non-Markovian system and make it Markovian
just at the cost of making the state space big enough.
Just like in Newtonian mechanics,
we take what was just coordinates
and we double it to be coordinates and velocities,
and now we have what looks like a Markovian description.
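In equations, the doubling trick is the familiar rewriting of a second-order law as a first-order system on an enlarged state:

```latex
% Second-order law (needs two adjacent times):
m\,\ddot{x} = F(x)
% Enlarge the state to (x, v) and the law becomes first-order in that state:
\dot{x} = v, \qquad \dot{v} = \frac{F(x)}{m}
% Knowing (x, v) "now" suffices to determine the future: Markovian in the enlarged state space.
```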
This doesn't work for an indivisible system.
When systems are sufficiently extremely non-Markovian,
you can't do this.
And I think this was a little bit of tunnel vision.
People thought, well, you can always do this,
so what's so interesting about very non-Markovian systems?
But if the system is non-Markovian enough,
this trick doesn't work,
and you actually have something that's really non-trivially different.
But then what you can do is reformulate the Newtonian system
in what's called the Hamiltonian formulation.
And the Hamiltonian formulation,
we rephrase what we were talking about the state space,
the states, the positions and velocities
in terms of what are called canonical coordinates,
which kind of generalize the notion of the position,
and the canonical momenta,
which generalize the idea of a velocity.
And then we call the state space a phase space.
The terminology apparently goes back to Boltzmann,
who used the idea, because he was thinking in terms of like
the phase of like a pendulum or something like that,
some oscillatory system, like where is it in its motion,
in its cycle, what phase is it in?
This idea is much more general than that.
We call this a phase space.
And in a phase space, we've got these variables,
the Q variables, which are like the generalization of position,
and the P variables, which are like the generalizations of velocity,
the so-called canonical momenta.
And in terms of the Qs and P's, we can reformulate Newton's laws as first order differential
equations, which basically means Markovian.
You specify the Q and the P, and then you uniquely get what the next Q and P are at
all subsequent moments in time.
So this makes the system look like it's beautifully Markovian.
Position and momentum are now on a perfectly equal footing.
In differential-geometry speak, we're now working on what's called the cotangent bundle
of the configuration manifold,
but that's not necessary for conceptually understanding
what's going on here.
But what's interesting about this Hamiltonian formulation
is it looks weird.
It has all these enhanced symmetries.
For example, now that we've put the coordinates, the Qs,
which again, generalize the notion of position,
and the Ps, which generalize momentum,
we've put them on this sort of very similar footing
and they enter into the theory in this very equitable way.
We can now do changes of variables
where we can take Q and replace it with P
and P and replace it with negative Q.
And in so doing, you could take a harmonic oscillator
with a mass and a spring constant
and get a new harmonic oscillator where the
mass of the new harmonic oscillator is the reciprocal of the spring constant of the old
and the new spring constant is the reciprocal of the mass of the old.
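Worked out explicitly for a harmonic oscillator, in standard notation, that swap looks like this:

```latex
H(q,p) = \frac{p^2}{2m} + \frac{1}{2}\,k\,q^2
% Canonical change of variables: Q = p, \; P = -q. Then
H = \frac{Q^2}{2m} + \frac{1}{2}\,k\,P^2
  = \frac{P^2}{2\,(1/k)} + \frac{1}{2}\,\frac{1}{m}\,Q^2,
% i.e., a new oscillator with mass m' = 1/k and spring constant k' = 1/m.
```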
But you could also do much more bizarre transformations.
These changes of variables, which are carefully designed to keep the equations, the laws looking
very similar, are called canonical transformations. And there is a beautiful connection between these,
changes of variables of your phase space,
and basis transformations in Hilbert spaces.
There's actually a beautiful mathematical relationship
between them.
It helped inspire Dirac to introduce his formulation
of quantum mechanics, this sort of analogy
between these two things.
And the analogy can make it even stronger
through a series of papers.
There was a paper by Franco Strocchi in the 1960s
called Complex Coordinates and Quantum Mechanics, I think.
And then a paper in 1985 by André Heslot.
And that paper, I can't remember the name of the paper,
but it's from 1985, where they make this analogy
much tighter.
They show that any quantum system in Hilbert space language can be rewritten in a way that
looks just like a classical Hamiltonian system for a system of coupled harmonic oscillators.
It's this beautiful framework I call the Strocchi-Heslot formulation.
And I think there's like a YouTube video of a talk I gave on it.
If people want to see the details, we can link to that.
People can see how it works. So the connection between this freedom to do
all these weird changes of variables
that mix up what you mean by the Qs
and what you mean by the Ps.
You could make a new Q be Q plus P and a new P be,
mix them up in all these bizarre ways
that make the fundamental picture very murky.
Like once we've done this, it's sort of hard to remember
what our original system even was.
There's a close connection
between that enhanced set of symmetries
and the basis rotations in quantum mechanics.
You also see the emergence of complex structure,
complex numbers in the Hamiltonian formulation.
There's a beautiful way you can take the Qs and the Ps
and combine them into a complex variable of the form Q plus the square root of minus one,
the imaginary unit, times P, up to some engineering dimensions.
You have to get the units right.
But basically you can define a complex variable and rewrite the whole picture in complex coordinates
and it simplifies the mathematics in a very beautiful way.
And this brings the analogy out with quantum mechanics even tighter.
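As a sketch, for a single harmonic oscillator the standard construction looks like this (with the dimensions fixed by the mass and frequency):

```latex
% Define a complex coordinate out of q and p:
z = \sqrt{\tfrac{m\omega}{2}}\left(q + \frac{i\,p}{m\omega}\right)
% Hamilton's equations \dot{q} = p/m, \; \dot{p} = -m\omega^2 q then collapse to
\dot{z} = -\,i\,\omega\, z,
% a single first-order complex equation, formally of the same shape as Schrodinger evolution.
```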
For those viewers who know about Poisson brackets, if you rephrase Poisson brackets in terms
of these sorts of complex representations, it makes canonical quantization look even more
clean.
So we have this incredible similarity.
You start with a non-Markovian system, in this case just second-order non-Markovian:
Newtonian mechanics, which depends on the coordinates and the coordinates at the previous time. And we can reformulate it as a Markovian-looking, beautiful
Hamiltonian phase-space formalism with all these enhanced symmetries and even
complex structures, and we can do really powerful things.
I mean, these canonical transformations can be used to solve all kinds of
difficult problems in ways that are difficult to do in Newtonian mechanics.
They even lead to a wave-like picture
called the Hamilton-Jacobi formulation,
which led to Schrodinger discovering wave mechanics.
And there's this complete analogy
where you could take an indivisible stochastic system,
which is also a non-Markovian system,
much more non-Markovian and probabilistic,
the system I just described to you that I laid out,
and do this change of mathematical representation,
and you get a Hilbert space picture,
which also is very exotic looking,
and also has a very murky sort of physical world picture,
and also has all these enhanced symmetries,
these basis rotations,
and also sees the emergence of complex numbers.
The analogy is actually very close between these two pictures.
And again, that change of representation to the Hilbert space picture is called the stochastic quantum correspondence.
But now we can do all kinds of things that we couldn't do before.
Okay, so one example.
Why is the dynamical equation of quantum mechanics Markovian?
It's Markovian because you can just see as you do this change of representation,
you can see how we took a non-Markovian thing and wrote it as a Markovian kind of evolution
at the cost of introducing all of these weird new ingredients.
The phases, the off-diagonal entries, the density matrices, interference effects, superposition. All of these ingredients are the cost that you incur
by trying to represent what is fundamentally
not a Markovian system, an extremely non-Markovian system
as a Markovian system, they're the prices you pay.
We talked before about like, are these memory effects?
Memory is not quite the right metaphor
for indivisible stochastic processes.
Traditional non-Markovian processes
require that you specify conditional probabilities
conditioned on arbitrarily many previous times.
That's very complicated and has a lot of structure
and contains a lot of information.
And you can legitimately ask,
where is that information being stored?
An indivisible process is actually much simpler.
It doesn't entail the specification of all those higher
order non-Markovian probabilities.
It's much sparser.
All you're supplied with are first order
conditional probabilities that don't concatenate,
that can't be divided, that aren't Markovian.
So there's actually less information and less memory,
much less memory than is in a traditionally stated non-Markovian process.
There's no question about where the memory is being stored.
It's not that there's memory per se; it's that the system's evolution doesn't depend only on the present state.
So memory is not quite the right word for it, but there's a kind of memory-ish kind of thing.
And that's what's being encoded in all of those coherences and superposition interference,
which otherwise don't have any, you know, in the traditional textbook formulation, they're
just math that produce, you know, empirical implications when we do experiments.
But like, we don't have like a meaning that we can attach to them.
Here, we can attach a meaning.
They're the artifacts of taking what is ultimately not a Markovian process
and forcing it to be in a Markovian formalism.
But we can go beyond that, right?
So we can explain why the equations look Markovian.
We can explain what the interference terms are and the phases and the coherences
and the superpositions.
We can explain what those things are.
We can also explain why the evolution law is linear.
For a closed quantum system,
a system that is not engaging in information exchange
with its larger environment,
the evolution is given by a linear equation,
the Schroedinger equation,
if the system is sufficiently smoothly evolving in time
or more generally unitary evolution.
These are linear.
And the question is, where does that linearity come from?
It's just an axiom,
according to the textbook Dirac von Neumann axioms,
but in the indivisible approach,
it just comes right out of the change of representation.
It comes out of the fact that ordinary probability
has a linear law.
If you're given initial probabilities
and you want to compute final probabilities, and you're using the conditional probabilities that are given to you in the laws, the relationship between the early probabilities and the later probabilities is a linear law.
It's given by what's called Bayesian marginalization. I mean, there are various terms for it. The law of total probability.
But it's a linear relationship, and that linear relationship becomes the linearity
of time evolution in quantum mechanics, so now that has an explanation too.
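As a quick illustration of that point, here is a minimal numerical sketch (not from Barandes's paper, just a toy example with made-up numbers) of how the law of total probability makes the update from earlier to later probabilities a linear, matrix-vector operation:

```python
import numpy as np

# Law of total probability: p_later(j) = sum_i p(j | i) * p_initial(i).
# Packaging the conditional probabilities p(j | i) as a column-stochastic
# matrix T makes the update a matrix-vector product, which is manifestly linear.

T = np.array([[0.9, 0.2],    # T[j, i] = p(system is in j later | it was in i initially)
              [0.1, 0.8]])
assert np.allclose(T.sum(axis=0), 1.0)   # columns sum to one

p0 = np.array([0.3, 0.7])                # an initial distribution
p1 = T @ p0                              # marginalization = a linear map

# Linearity check: mixing two initial distributions and then evolving
# gives the same answer as evolving each and then mixing.
q0 = np.array([0.6, 0.4])
lam = 0.25
assert np.allclose(T @ (lam * p0 + (1 - lam) * q0),
                   lam * (T @ p0) + (1 - lam) * (T @ q0))
print(p1)
```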
And you might go, well, what about unitarity?
I mean, I just started with my indivisible process; how did it know to become unitary?
It turns out that there's a class of indivisible processes based on what's
called a unistochastic transition matrix. The term unistochastic goes back to
a lecture in, like, the late 1980s, but the idea itself is much older.
So Robert Thompson introduced the term unistochastic, but the original term was orthostochastic,
and it was introduced by Alfred Horn, a mathematician, in the 1950s.
And he wasn't studying stochastic processes.
He was just studying the analytic properties, the pure math analytic properties of stochastic matrices.
And stochastic matrices, the kinds of matrices that show up in stochastic theories,
are square matrices with non-negative entries whose columns all sum to one.
But he noted there was a particular subclass of them that he called orthostochastic,
today we call them unistochastic, that have very beautiful, interesting analytic properties.
From the standpoint of a pure mathematician, they have very neat properties.
The idea of building a physical stochastic process using these as the matrices
that contain all of your conditional probabilities,
that is new.
I mean, that wasn't an idea that people had proposed.
So you could just say the particular kind
of indivisible processes I'm interested in
are unistochastic processes.
And these are exactly the processes
that when you run them
through the stochastic quantum correspondence,
on the other side, you end up with unitary evolution.
Now you might go, that's a little dissatisfying,
because, sure, that'll explain why the evolution is unitary,
but you had to assume a particularly special kind
of stochastic process, an indivisible process
that was unistochastic.
The technical definition is that a unistochastic process
is a process where all the conditional probabilities
are related in a very simple way
to the entries of a unitary matrix.
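To make that definition concrete, here is a small sketch (a toy example, with an arbitrarily chosen unitary, not taken from the paper) of building a unistochastic transition matrix by taking the squared magnitudes of a unitary matrix's entries and checking that the result is a valid stochastic matrix:

```python
import numpy as np

# A unistochastic matrix: conditional probabilities given by the squared
# magnitudes of the entries of some unitary matrix U, i.e. T[j, i] = |U[j, i]|^2.
theta = 0.7                                            # illustrative rotation angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])        # a simple 2x2 unitary (real here, so orthogonal)
assert np.allclose(U.conj().T @ U, np.eye(2))

T = np.abs(U) ** 2                                     # entrywise squared magnitudes
assert np.all(T >= 0)                                  # non-negative entries
assert np.allclose(T.sum(axis=0), 1.0)                 # columns sum to one: a stochastic matrix
assert np.allclose(T.sum(axis=1), 1.0)                 # unitarity even makes the rows sum to one
print(T)
```

A unitary built this way is where the unitarity on the Hilbert-space side comes from; but of course this assumes the process takes that special form, which is exactly the worry addressed next.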
That makes it seem a little bit too canned.
But there's a theorem that's proved in the first of the papers,
the Stochastic Quantum Correspondence paper,
that even if you don't start with a unistochastic process,
even if you start with just a completely boring,
ordinary, generic, indivisible stochastic process,
and you do the change of representation to the Hilbert space picture,
you can always write that process as what's called a quantum channel.
A quantum channel is also known as a linear completely positive trace preserving map.
These are well studied in the quantum information literature.
It's not obvious that you can do this, but you can. It's a simple, yeah.
The mapping from the indivisible stochastic processes
to the Hilbert space picture can be represented by a quantum channel
or is a quantum channel?
So you start with your indivisible stochastic process,
which is probabilistic.
It just says, given where the system is now,
this is the probability it'll be there later.
You run it through the stochastic quantum correspondence.
You now have density matrices and state vectors,
all this stuff shows up.
And the time evolution is now carried out by what's called
a time evolution operator that takes your current state
and gives you your later state.
In general, this time evolution operator is not unitary.
Unless your original process was unistochastic.
In general, if the original process is not unistochastic,
then the time evolution operator will not be unitary.
But it turns out you can still write it as a quantum channel.
This time evolution operator that shows up on the other side
looks kind of weird,
but it turns out it can be written
as this very well studied thing called a quantum channel.
And quantum channels can be turned into unitary evolution
by another change of representation.
Borrowing a theorem by Stinespring from the 50s, the Stinespring dilation theorem,
by increasing the dimension of your Hilbert space in a bounded way,
for those who care, it's going from an n-dimensional Hilbert space
to an n-cubed dimensional Hilbert space at the most,
which corresponds to adding one or two extra degrees of freedom.
You can implement the evolution as a unitary evolution anyway.
And this is really cool because it means that even if you didn't start with a special kind of process,
even if you start with a totally generic indivisible stochastic process,
all it's saying is, given the configuration of the system here,
here's the probability distribution of where it will be later.
When you run it through this stochastic quantum correspondence,
you get a Hilbert space picture, and you go, oh no, it's a Hilbert space picture,
and I see a lot of stuff, but the evolution is not given by a unitary operator.
It's not given by the Schrodinger equation. You know, that's not great.
It turns out that with a simple second change of representation,
you can implement the evolution in a unitary way,
by what's called a dilation of the Hilbert space. And this finally gives an explanation of where unitarity comes from,
where that axiom that the time evolution be unitary comes from. It emerges from this set of transformations.
So it's really nice to be able to like explain where these things come from.
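For readers who want to see the objects being described, here is a hedged sketch, using a standard textbook channel (amplitude damping, chosen only for illustration and not taken from Barandes's paper), of a quantum channel in Kraus form and the isometry that a Stinespring-style dilation builds from it:

```python
import numpy as np

# A toy quantum channel in Kraus form: rho -> sum_i K_i rho K_i^dagger,
# which is trace preserving when sum_i K_i^dagger K_i = I.
p = 0.3                                                 # illustrative damping probability
K = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),           # amplitude-damping Kraus operators
     np.array([[0, np.sqrt(p)], [0, 0]])]               # (a standard example, chosen only for illustration)
assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(2))

rho = np.array([[0.5, 0.5], [0.5, 0.5]])                # some density matrix
rho_out = sum(k @ rho @ k.conj().T for k in K)
assert np.isclose(np.trace(rho_out).real, 1.0)          # trace is preserved

# Stinespring-style dilation: stack the Kraus operators into an isometry
# that maps the system into a joint (ancilla x system) space.  V^dagger V = I,
# and the dilation theorem guarantees V extends to a unitary on that larger space.
V = np.vstack(K)                                        # a (2 * num_kraus) x 2 isometry
assert np.allclose(V.conj().T @ V, np.eye(2))

rho_big = V @ rho @ V.conj().T                          # evolution on the dilated space
blocks = rho_big.reshape(2, 2, 2, 2)                    # indices: (ancilla, system, ancilla, system)
rho_traced = np.einsum('isit->st', blocks)              # partial trace over the ancilla
assert np.allclose(rho_traced, rho_out)                 # recovers the channel output
```

The sketch only checks the pieces that are easy to verify numerically; extending the isometry to a full unitary on the enlarged space is what the dilation theorem itself guarantees.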
And now I can get to a question that came up from a couple of people in the comments.
It seems like we started with this indivisible stochastic process.
We end up on this other side,
and it kind of seems like one basis
for the Hilbert space is special, right?
So a Hilbert space is a vector space,
all of the objects, the state vectors are vectors,
and people may know that a vector is a mathematical object.
The simplest versions of vectors are arrows.
Let's take an arrow, an arrow pointing in some direction
in space with some direction and some length.
And if you draw a coordinate system, imagine an arrow in like, on a graph paper, right?
An arrow on graph paper, there's an x-axis and a y-axis,
and you can ask, how far do we have to go along the x-axis,
and how far along do we have to go along the y-axis
to get from the bottom of the arrow to the top of the arrow?
And the distance along the x-axis and the distance along the y-axis are called the components
of the vector.
It's got this x component and this y component.
Those are the components of the vector.
But if I change my axes, if I like turn them a little bit, without changing the arrow,
the arrow is the same, but the axes have tilted, well, then I have different distances now: a new x' distance and a new y' distance.
So a single vector has different component representations depending on which basis you use.
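A tiny numerical sketch of that point (the numbers and the tilt angle are arbitrary): the same arrow gets different components when the axes are rotated, while the arrow itself, for example its length, is unchanged.

```python
import numpy as np

# The same arrow, described in two coordinate systems rotated relative to each other.
v = np.array([3.0, 1.0])                       # components in the original (x, y) axes

phi = np.pi / 6                                # tilt the axes by 30 degrees
R = np.array([[np.cos(phi), np.sin(phi)],      # rows are the new (x', y') axis directions
              [-np.sin(phi), np.cos(phi)]])
v_prime = R @ v                                # components of the *same* arrow in the tilted axes

print(v_prime)                                 # different numbers...
print(np.linalg.norm(v), np.linalg.norm(v_prime))  # ...but the arrow's length is unchanged
```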
And the Hilbert spaces of quantum mechanics have this feature.
In quantum mechanics, the different bases are associated with different kinds of things you might want to measure.
So, for example, if your vector is pointing exactly
along a certain axis, it means that if you measure
that observable, you're definitely gonna get that result
and definitely no other results.
But if you measure a different observable,
one whose axes are tilted, now your vector is not pointing
exactly along that direction anymore.
It's got some component along one axis
and some component along another.
And the Born rule tells you how to take those components
and compute probabilities.
And so what we discover is that even if one observable
has a definite result
because the arrow is pointing exactly along one axis,
a different observable with different axes will give probabilistic results.
And if you somehow change your arrow so it's pointing along one of those axes,
now the first observable will not have a definite result
and this is just the uncertainty principle.
That for certain pairs of observables when you know one with certainty,
when measurements are guaranteed to yield a definite result for one of them with certainty,
measurements are not guaranteed to yield definite results for others with certainty.
This is related to the ability to change bases.
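Here is a minimal sketch of that Born-rule point, with an arbitrarily chosen state and tilt angle: a state aligned with one measurement basis gives a certain outcome there, but probabilistic outcomes in a rotated basis.

```python
import numpy as np

# A state vector pointing exactly along one axis of measurement basis A
# gives a definite outcome for A, but probabilistic outcomes for a tilted basis B.
psi = np.array([1.0, 0.0])                       # aligned with the first axis of basis A

phi = np.deg2rad(40)                             # illustrative tilt angle
B = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])      # columns are the B basis vectors

probs_A = np.abs(psi) ** 2                       # Born rule in basis A: a definite outcome
probs_B = np.abs(B.conj().T @ psi) ** 2          # Born rule in basis B: tilted components, squared

print(probs_A)   # [1., 0.]            -- certain
print(probs_B)   # roughly [0.59, 0.41] -- probabilistic
assert np.isclose(probs_B.sum(), 1.0)
```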
You may wonder, is this basis independence preserved?
This basis independence, that I can take a vector and write it in any basis I want.
But the answer is a lot like in Hamiltonian mechanics.
We start with a physical system with an actual position
and a particular momentum that has a clear meaning.
It's the momentum, it's like for some systems,
it's mass times velocity.
It's like a clear definition of momentum.
We formulate it in this Hamiltonian phase space picture.
But then we can do all these weird canonical transformations
where we can change variables and change what we mean by Q
and change what we mean by P.
And all of a sudden it's like not clear
what the new Q means, what the new P means.
And you might go, well, Newtonian mechanics must be wrong
because Newtonian mechanics didn't have this independence.
Newtonian mechanics picked out a particular Q and P,
but Hamiltonian mechanics treats them all
like they're the same.
There's this symmetry under changing your definition of Q and P.
Does that mean Newtonian mechanics is wrong because it doesn't have this canonical-transformation independence? Of course not.
That's very analogous to what's going on here.
It's true that you start with a particular system in the indivisible stochastic side of the picture.
The system has configurations. Those configurations correspond to a particular basis on the Hilbert space side.
Once you're on the Hilbert space side,
now you have the freedom to change your bases
however you want.
But because it's mathematics,
like, it's just a theorem,
you can go from one picture to the other.
And this Hilbert space picture is mathematically
a representation of the first picture.
Just like in Hamiltonian classical mechanics,
you have the freedom to do these basis transformations,
they're totally available.
The one lingering question you might ask now is,
well, but can I measure other observables?
Like the ones that were tied to the configurations
are ones that will be very simple to represent
on the Hilbert space side.
They correspond to what we call diagonal operators.
What about all the other observables?
The ones that are not diagonal,
the ones that don't commute with the first set,
that correspond to these other bases, right? Those can also be measured. What's
going on with those? The answer is it's just like in Bohmian mechanics. If you set your
system up and couple it to a measuring device, and the measuring device could measure one
of the beables, one of the ones that tightly corresponds to the original configurations,
it will passively reveal what the system had
as that feature.
But if you change your measuring device,
just pick up a quantum textbook, look at measurement,
like how they formulate measurements
in terms of unitary transformations,
and just change your measuring device,
now it's gonna measure a different property,
measure one of the emergibles.
You run the exact same process,
measuring device will stochastically end up
in one of its measurement outcomes
with the correct probability given by the Born Rule.
It just comes out of the formalism.
But the thing it's measuring is not a fundamental feature
of the system.
It's measuring really like an emergent pattern
of the mutual dynamics of the system itself
and the measuring device measuring it.
Some properties you're passively revealing
what was already there,
those correspond to that special basis.
And properties that don't correspond to the special basis
can still be measured and they'll still produce
results on measuring devices.
And the measuring devices will still stochastically end up
in their correct readout configurations
with the correct probabilities.
And as far as the measuring device is concerned,
it has measured something just as real as for a beable.
But what it's really measured
is one of these emergent patterns, these emergibles.
And so from the point of view of like the outside world,
the emergibles are on the same footing as the beables.
Together the beables and the emergibles collectively form
the full, what we call a non-commutative algebra
of observables for the quantum system.
So this deals with this question of basis dependence.
And this isn't new.
So another way to formulate quantum mechanics is the path integral formulation.
So people may be familiar that there's this sort of Hilbert space formulation and there's
this other formulation in which to predict probabilistically where the system will end
up, you're supposed to somehow start with an initial configuration
and write down every candidate trajectory the system could have.
All the trajectories, even ones that do not satisfy the classical laws of physics,
every one you can imagine, assign each of them a special numerical factor,
add all the factors together in this very tricky integration
called a functional integral, and get a complex number
out, and when you mod-square it, do this particular operation on it, you get a probability.
And you can reformulate quantum mechanics at least at the level of making its predictions
in this way.
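As a rough illustration of that bookkeeping (this is only a toy lattice sum with arbitrary units, not a converged path integral), one can enumerate candidate trajectories, weight each one by a phase built from a discretized action, add the phases, and mod-square the total:

```python
import numpy as np
from itertools import product

# Toy sketch of the sum-over-paths structure: a particle on a small 1D lattice,
# three time steps, every candidate trajectory gets a phase exp(i * S), the
# phases are summed, and the squared magnitude gives a relative probability.
sites = np.arange(-2, 3)          # lattice positions -2..2 (illustrative)
x_start, x_end = 0, 1
n_steps = 3
m, dt = 1.0, 1.0                  # illustrative mass and time step (hbar = 1)

def action(path):
    # Discretized free-particle action: sum of (1/2) m (dx/dt)^2 dt over the steps.
    dx = np.diff(path)
    return np.sum(0.5 * m * (dx / dt) ** 2 * dt)

amplitude = 0.0 + 0.0j
for middle in product(sites, repeat=n_steps - 1):       # all intermediate positions
    path = (x_start, *middle, x_end)
    amplitude += np.exp(1j * action(path))

print(abs(amplitude) ** 2)        # relative (unnormalized) probability of ending at x_end
```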
This path integral formulation goes back to Paul Dirac in a 1932 paper.
Oh, I didn't know that.
Paul Dirac was the first to introduce it.
All the way down to like you slice up the time interval into little bits and you introduce
complete sets.
It's a beautiful, beautiful paper.
He was trying to understand what the role of the Lagrangian is in quantum mechanics.
So before 1932, people had formulated quantum mechanics in the language of Hamiltonians,
in the language of Hamilton-Jacobi theory.
And Dirac was just very curious.
He wanted to know, does the Lagrangian formulation show up in quantum mechanics also?
And he found this very beautiful way to do it
using these sort of functional integrals.
But Dirac was just content to write it all out,
like formally state everything
and not turn it into a machine for calculating things.
10 years later, Richard Feynman picked it up
while he was a PhD student.
And this was like in 1942.
He was a PhD student at Princeton,
also again a student of John Wheeler.
And he turned Dirac's formalism into an actual recipe
for calculating things.
And then a few years later,
he ended up publishing a review article,
talking about it in detail.
And we can post all of these to the YouTube video
if you want or people can see all these papers.
So a few years later he wrote a review article about all of this and he says at the beginning
there's nothing so far that I can do with this that cannot be done with ordinary methods.
You can imagine someone saying well then what good is it if it only makes the same predictions
as ordinary quantum mechanics and you can't do anything with functional integrals or path integrals that you couldn't do with earlier methods, what use is it? And by the way, it also picks out a basis. Because to do the path integral formulation, you have to pick a basis. For example, when you do path integrals for particles, usually what you do is you pick positions, you pick what's called the position basis and you everything with positions. You don't usually do the path integral in other bases.
You can, but any particular choice
of how you formulate the path integral picks that one basis.
So you could say, this is basis dependent.
You could say it doesn't do anything
that we couldn't already do.
It's a weird picture.
And it was for a long time.
And then eventually people discover
there were some calculations that were just too hard to do
in the traditional approach.
Today, if you wanna formulate a non-abelian gauge theory,
like a Yang-Mills theory, like QCD,
you're probably not gonna wanna do it
using the canonical Hilbert space approach.
You're probably gonna wanna do it
with a functional path-integral approach.
Eventually, and it took many years,
I mean, decades, from 1932 until when people really needed it,
people were able to realize there were some things
that now we could do much more easily
with this new formalism.
So, of course, this gives me hope
that formulating quantum mechanics in a new way,
not the Hilbert space way, not the path integral way,
a new way, even if it doesn't obviously do anything
that you couldn't do otherwise,
even if there aren't any obvious immediate applications, is the sort of thing
that might pay off in maybe 10 or 15 years, who knows?
Feynman himself said that any good theorist should know a bunch of different ways to do
the same thing because when you formulate a theory in multiple different ways, you discover
different knobs you can turn, different things you could do that might have been harder to
imagine doing in one formulation that are easier to imagine doing in another formulation.
So that gets at this sort of basis independence question.
We can get to some of the other comments and questions people have had, but let me pause
there and ask if you have questions before we go on.
So let's see if I understand this correctly and I can simplify this.
So Markovian, let's understand what Markovian means. This means your system, you look at the present
state and you can determine the next state.
Okay, when we say the present state we tend to think, oh Newtonian mechanics is
Markovian because you say, well let's specify the position and then maybe the
mass and then the velocity as well. But then you're saying, well, what is the
velocity? It's actually the
position just an infinitesimal time ago. So you could introduce a new variable
called velocity, or you could just think of what the position is now and what the
position was a moment ago, making it non-Markovian because it's now no longer just the present.
This is equivalent, I believe, to the Newtonian formulation,
but compared with what we're taught as the Newtonian formulation, you've introduced something new.
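A small sketch of Curt's point, using a discretized harmonic oscillator as an arbitrary example: the same dynamics is Markovian when the state is taken to be (position, velocity), but written in terms of position alone the update needs two past times.

```python
import numpy as np

# A discretized Newtonian particle in a harmonic potential (illustrative parameters).
dt, omega, n = 0.1, 1.0, 6

def step_markov(x, v):
    # State = (x, v): the next state depends only on the present state.
    return x + v * dt, v - omega**2 * x * dt

def step_positions_only(x_now, x_prev):
    # Same dynamics written with positions at two times: not Markovian in x alone.
    return 2 * x_now - x_prev - omega**2 * x_prev * dt**2

# Markovian bookkeeping: evolve the pair (x, v).
x, v = 1.0, 0.0
for _ in range(n):
    x, v = step_markov(x, v)

# Position-only bookkeeping: the update needs *two* past positions.
xs = [1.0, 1.0 + 0.0 * dt]          # x_0, and x_1 built from the initial velocity
while len(xs) <= n:
    xs.append(step_positions_only(xs[-1], xs[-2]))

print(x, xs[n])                      # the same trajectory, two bookkeepings
assert np.isclose(x, xs[n])
```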
So you're saying that look, in that similar manner, we can make an analogy here where
there's these indivisible stochastic processes, these little guys operating around here.
What exactly are they?
I'm going to ask you about that afterward.
But they're operating here and they're stochastic. And you're saying they correspond to quantum
mechanics, how? Quantum mechanics has linearity, unitarity, superposition,
interference. Are you suggesting that when we take this non-Markovian thing and we
make it Markovian, then, just as in the Newtonian case where we introduce something
new, we're introducing something new here, and those correspond to linearity, superposition, interference, and so on?
Yeah. Yeah. And complex structure and basis,
and the ability to change bases and all this stuff, they all, it's exactly right.
It's exactly right.
Okay. Okay.
So now the question that the audience has or had is,
how do you deal with interference experiments?
Also, we can get to it in order, if you like, Bell.
What does it say about Bell's inequality?
And what are the beables exactly in this framework, this formulation?
Let me start with the interference experiment ones.
Let's start there.
So the simplest answer to that question is if there's a mathematical duality or representation
that takes you from an indivisible stochastic process to this Hilbert space picture,
and let me just quickly say, the mapping is not one-to-one, it's many-to-many.
A given indivisible stochastic process may have many Hilbert space representations,
and a given Hilbert space representation can represent many different indivisible stochastic processes,
but this is not new. The same relationship holds for Newtonian systems and Hamiltonian formulation. A single
Newtonian system can have multiple different Hamiltonian formulations and a single Hamiltonian
formulation can represent many different Newtonian systems. So this is not like a new thing.
Okay, but the point is that these indivisible systems do have this representation in this sort of Hilbert space picture.
And the representation is just math.
I mean, every Hilbert space picture can be regarded as an indivisible stochastic system in disguise and vice versa.
So any predictions you make with a Hilbert space picture are going to be preserved.
The explanation though is gonna be different.
So let's take the double slit experiment.
And this is actually gonna be helpful
because I think one question someone had was,
can I give a hello world example?
You know, computer programming,
the simplest program you write is
something that just prints hello world.
What's the simplest example I can give?
I'll give you a simple example.
Let's consider a double slit experiment
and we'll make it so simple.
We're gonna coarse grain it, that's the term of art.
We're going to coarse grain it to simplify it.
So instead of the particle being able to be just like anywhere,
we're going to coarse grain the description so that we're only talking about
is the particle in the upper part of the chamber or the lower part of the chamber.
Just upper and lower.
Basically turning the particle into a qubit, into a two quantum state system.
It's a two state system.
Upper chamber, lower chamber. The particle is in the upper chamber or the lower chamber.
And we'll just imagine
that something is like sliding it forward
and just some like, just keep it super simple.
We're sliding the particle forward,
it encounters a wall,
and the wall's got an upper hole and a lower hole, right?
As walls do.
Well, not all walls have upper holes
and lower holes, but this particular wall
has an upper hole and a lower hole.
And then beyond the wall there is a projection, a detection screen, a screen where the particle
can land, and we're going to coarse-grain it so that the particle can land in the upper part
or the lower part of the screen.
So this system is sufficiently simple that it actually encompasses many systems you might
deal with in quantum information.
Mach-Zehnder interferometers, there were some questions about Mach-Zehnder interferometers
and the Elitzur-Vaidman bomb tester example.
These are all based on a very similar kind of very simplified coarse-grained version
of the double slit experiment.
Now I'm not going to be able to do the calculations in the air.
When I teach my class on philosophy of quantum mechanics, I do go through the calculations
and I've got a nice write-up, and if you want I can send you a draft of it.
It's not in the paper yet,
but I can send you the draft and show you all the steps.
But even this example, right,
because we're filling in all the details now
in the traditional textbook approach,
you basically set it up and then you measure at the end
and you don't really talk about what's going on
ontologically, physically in between.
I mean, you're writing like a wave equation,
you think in terms of waves,
but of course the moment you go beyond one particle
to 10 or 20, the wave picture
is now in some 20-dimensional space,
it doesn't make any intuitive sense.
We're doing it very differently.
We're actually gonna follow the particle
and write down the probabilities, write everything down.
And what you do when you do this is,
every time you run the experiment,
one particle goes through, and the particle
lands in the upper part of the detection
screen or the lower part.
And you do this experiment many times.
Every run of the experiment,
the particle lands in just one spot,
either the upper part or the lower part.
Over many, many, many repetitions of the experiment,
you build up a histogram.
You build up like a distribution of landing sites.
And what you find is those landing sites
look just like the distribution you would have gotten
if there had been wave interference in the problem. But there's no wave in the problem. There's actually no wave in the problem. And the interference is an artifact of the indivisibility. Why? How can we so there's a limit to how much we can say about like, because indivisible processes are by their nature really unintuitive. Physical theories can be unintuitive. That's the thing about physical theories.
But in this simple case, at least,
we can actually shed some light
on where the interference is coming from
in this very simple example.
And why there isn't interference when we do this,
like with a Newtonian system.
In a Newtonian system where you're just like throwing rocks
at the wall, one at a time,
you don't over many landing sites
get this distribution, this pattern.
Why?
One way to understand it is that if you're throwing
the rocks and they're being thrown either deterministically
or, if we want to even let it be probabilistic,
let's do it probabilistically.
The way you would say it is you would say,
okay, well, I throw the rock,
it gets to the wall with the holes in it. Either it goes to the upper hole, or the lower hole.
Let's suppose it goes to the upper hole. Now that we know it's at the upper hole, let's start there,
and then use the laws of Newtonian mechanics to figure out where it goes. If it goes to the lower
hole, let's start there and use the laws of Newtonian mechanics to figure out where it goes.
But notice what I did there, I divided up the evolution.
I assumed that the laws given to you in the system are of the form that you can take the system at the intermediate location, at the holes,
and Newtonian mechanics gives you the laws for what happens next.
And when you do that, you get no interference. You get the standard pattern.
In the indivisible formulation, you're not given those laws.
The dynamical laws that describe where this particle is going
do not have the feature that when they get to the middle wall
and go through, I mean, the particle at any moment
is only in one place.
It's either in the upper hole or the lower hole, still true.
But once it's there, you can't say,
all right, well, let's suppose it's in the upper hole.
Let's restart the evolution.
And the theory doesn't give you a law for that.
There's simply no dynamical description
for starting at the holes and then saying what comes next.
It's not supplied in the laws.
That is, the indivisible law that describes
the entire experiment from beginning to end
is more general.
There's simply a more general class of such indivisible laws
that fail to have this divisibility property at the holes.
If we demand that the laws be divisible at the holes,
we're singling out a subclass of indivisible processes.
Cause you could just imagine,
let me not consider the most general indivisible process.
Let's consider just indivisible processes
where you're given the laws from the beginning to the wall,
and you also have new laws that go from the wall to the screen.
If you limit yourself to just those indivisible processes,
you will see almost no interference.
There'll be a little bit left
because there's still some indivisibility,
but it will mostly be gone.
But if you don't limit yourself to those special cases,
you will in general see interference.
So the failure of Markovianity at the holes,
the inability to restart the evolution
and have in your hands the laws for what comes next from the holes onward,
the failure to have that means that you can have the kinds of laws
that will lead to interference.
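Here is a minimal numerical sketch of the coarse-grained two-state double slit just described. It is not the write-up Barandes mentions; the 50/50 "beam-splitter" unitaries are an illustrative assumption. It compares the indivisible (unistochastic) law covering the whole experiment with the probabilities you get if you force a division at the wall and chain two stochastic laws:

```python
import numpy as np

# Coarse-grained double slit: two configurations, "upper" and "lower".
# U1 takes the particle from the source to the wall, U2 from the wall to the screen.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # an illustrative 50/50 "beam-splitter" unitary
U1, U2 = H, H

# Indivisible / unistochastic description: the law covers the WHOLE experiment.
T_whole = np.abs(U2 @ U1) ** 2                   # P(screen | source), entrywise |.|^2 of the full unitary

# Forcing a division at the wall: separate stochastic laws for each leg,
# chained with the law of total probability.
T_divided = (np.abs(U2) ** 2) @ (np.abs(U1) ** 2)

print(T_whole)     # [[1, 0], [0, 1]]          -- interference-like: one screen outcome is certain
print(T_divided)   # [[0.5, 0.5], [0.5, 0.5]]  -- no interference, 50/50 everywhere
```

Starting from the upper configuration, the indivisible law sends the particle to one screen outcome with certainty, the interference-like signature, while the divided version gives 50/50 everywhere.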
Now, there are modifications to this experiment.
For example, what if you look to see
what hole the particle goes through?
And we can implement this very simply
by adding another two state,
two configuration system near the holes.
Suppose I've got a little second particle,
little second device with two configurations.
And all it does is it stays off
in its initial off configuration if the particle goes to
the upper hole.
If the particle goes to the lower hole, it switches to on.
Okay?
That's all it does.
It just does these two things.
Sure.
You can model this very neatly.
You put it in.
You actually want to give this thing deterministic laws.
You want this to deterministically have the property that when the particle goes to the
upper hole, it always stays off.
And when it goes to the lower hole, it always changes to on.
You can implement that with a pretty easy set of equations,
give it that deterministic behavior.
Now, when you evolve the system, the interference goes away.
You can just, I mean, you just rerun the indivisible process
and you'll see that there's just no interference anymore.
What's really nice, and this is actually a really nice example of this, is if you throw
away the information that was in that little detector particle, the one that was detecting
it, or if the detector particle communicates with the outside world and that information
is now irretrievable and not accessible to us, then what we can do is classically marginalize. So marginalization is when you have a joint
probability distribution, it's a probability for like two things, and we sum over one of
the variables to drop it out of our awareness. This is a standard move in ordinary probability
theory. When you do that for the time evolving process, the time evolving process for the original particle suddenly has a division
event, a division that is now available at the walls.
And so what was an indivisible process is now divisible at the walls, thanks to the
detector particle that we have marginalized out.
And this is another way to see that divisibility is now restored at the walls and gives another
way to understand why the interference effects go away.
But there are a couple things to say about this.
Number one, this division event is related to decoherence.
I mean, the marginalization process is just, in stochastic language,
what in Hilbert space language we would call decoherence.
If you look at density matrices, you'll see that the division event, seen at the level
of the Hilbert space, corresponds to off-diagonal entries disappearing.
That's what happens in decoherence.
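Continuing the same toy (again with illustrative unitaries, a Hadamard-like matrix for the two legs and a CNOT-style coupling for the which-way detector, none of which is taken from Barandes's write-up), one can check both claims at once: after marginalizing out the detector, the particle's statistics are the no-interference ones, and the reduced density matrix has no off-diagonal entries left.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # same illustrative unitaries as above
I2 = np.eye(2)

# Detector: stays "off" if the particle is at the upper hole, flips to "on" if lower.
# Ordering of the joint space: (particle) tensor (detector).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi0 = np.kron(np.array([1.0, 0.0]), np.array([1.0, 0.0]))   # particle "upper", detector "off"

# source -> wall, then the which-way coupling, then wall -> screen (particle leg only)
psi = np.kron(H, I2) @ psi0
psi = CNOT @ psi
psi = np.kron(H, I2) @ psi

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)           # (particle, detector, particle, detector)
rho_particle = np.einsum('idjd->ij', rho)                     # marginalize (trace out) the detector

print(np.real(np.diag(rho_particle)))   # [0.5, 0.5]: the interference is gone
print(rho_particle)                     # off-diagonal entries are zero: "decoherence"
```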
But the crucial thing here is there was no wave function here.
The particle always had one location as it was moving through the apparatus.
There was no superposition.
There's nothing
to collapse. So if the question is, isn't a division event just collapse? Well, no,
there was no wave function, the particle was not superposed between being in the upper
and lower parts of the chamber, nothing ever had to collapse. The division event is what
you would say on the stochastic side of the correspondence,
what decoherence is on the other side.
But because there was never a superposition, there's no need to now single out one outcome
or collapse anything or project anything.
So the measurement problem, this basic fact that in the textbook formulation, you bring
in a measuring device, it becomes entangled, you get a superposition, now you have all
these superposed possibilities.
And then somehow, if somewhere in the superposition there is
a measuring device, we're supposed to collapse it somehow, but why and what magical property
of measuring devices makes things collapse?
This is the measurement problem.
It just fails to happen in this picture.
There's never a need to get anything to collapse.
So I would say that's the simplest hello world example and has the side effect of also explaining
what's going on in the double slit
experiment and what interference is happening here.
So is the division event your version of what traditionally is thought of as collapse?
And are division events just what occurs when you integrate out over marginal probability?
That's right.
So you marginalize over whatever has read out the configuration of your system and then
your system now has what looks like divisible stochastic dynamics.
And this certainly plays a functional role in the theory
that is analogous to what decoherence
and then collapse plays in Hilbert's,
in the Dirac-Feyer axioms.
And certainly if you're gonna write this story
in Hilbert space language,
you would use decoherence and collapse to explain what's going on in the Hilbert space picture.
But now we have underwritten those weird axioms that seem very ad hoc and mysterious with a very boring stochastic process where there is no wave function to collapse. And when the measurement happens, in this case just a readout from a little detector
bit, there's just a classical marginalization that causes the indivisible dynamics to have
an event when you can divide.
It's momentarily divisible, and then the interference goes away.
So I want to bring up something that you just mentioned, which harkens back to earlier in
the conversation.
So you said in the Newtonian picture, you throw a ball and there are two slits, two holes,
and then you notice that it goes over the top one, you say, okay, let's just start the
evolution from here or the bottom one you go from there.
And you said, well, this doesn't apply in the quantum case or in the indivisible case
because you can't divide.
And what was interesting to me was you said, notice that so and so.
And what you said was, notice that, hey, you can look at it now.
So in other words, you can divide your time.
I was thinking, okay, I didn't, as I was following you along, I didn't notice that I was making
that assumption.
And earlier in the conversation, you said, what philosophers are extremely trained in
is noticing implicit assumptions.
Yes. So even though this is not, well, let me read Hegel because Hegel will inform my physics,
it's not that per se, even though that may be the case. I'm sure there's some inspiration that can be taken there.
It's also the thinking that Hegel had or the thinking that philosophers have
that you can apply.
Yeah, I mean it's important to know that philosophy is a huge discipline, right?
And, you know, there's ancient Greek philosophy,
there are all kinds of philosophies associated with different areas of the world historically.
And in the Western tradition, so to speak, there's early modern philosophy,
and then, you know, and then you develop this sort of weird divide
between the continental philosophy and analytic philosophy.
And then where does philosophy of science fit in?
Philosophy of science is probably closer to analytic philosophy in a lot of ways.
There's some dispute over exactly whether it's a part of analytic philosophy or not.
But like, a lot of the tool sets that one uses in philosophical physics
do seem to come from the analytic tradition in philosophy.
This is the part of philosophy associated with logic and metaphysics and philosophy
of mind and to some degree, depending on how you phrase things, philosophy of science.
So Hegel is sort of more associated with the continental tradition in philosophy, and there's
just been, I think, less interaction between the continental tradition and physics. I think what a lot of people
who maybe have had some exposure to philosophy, maybe not a lot, when they think about how
useful philosophy is to physics, I think they're often thinking about the parts of philosophy
that are not super connected to math and physics.
Sorry, how is Fear and Trembling going to have implications for F equals ma?
Right. But I think a lot of those people would say Bertrand Russell, well, of course, Bertrand
Russell is a good one to bring up about physics. He wrote a book on relativity and he's a mathematician,
but he was an analytic philosopher. He was like one of the towering analytic philosophers,
right? Quine and so on. And the early people who developed quantum mechanics
were deeply enmeshed, you know, in Wittgenstein.
I mean, there's a beautiful interview.
I would also recommend linking it.
It's from the 1970s, by Peat and Buckley, who were doing an interview with the CBC.
It's an interview with Werner Heisenberg toward the end of his life.
It's a fantastic interview.
I mean, it's amazing to hear him talking and actually hear his voice and
hear him describing the early days of quantum theory and all the people he met.
And he spends a lot of time talking about philosophy.
He talks about his friendship with Wittgenstein.
He talks about how confused he was by the Tractatus, you know, and he's not unique.
I mean, he wrote a book called Physics and Philosophy.
And the book is filled with Kantian metaphysics, and that's the book where he has this chapter.
It's from 1958 where he introduces the Copenhagen interpretation where he like tries to formulate it.
We'll talk a little bit about that.
You know, and he's arguing with Schrodinger and Einstein,
and they're all arguing over who's the proper,
like who's more correctly representing Kantian philosophy.
And you had neo-Kantians like Grete Hermann,
we talked about Grete Hermann,
and all these people were talking with,
they were all connected with, you know.
So, you know, there was an incredible amount
of cross-pollination between the analytic philosophers
of the early 20th century and physicists.
Physicists themselves were thoroughly trained in philosophy.
There's another paper you should link to.
This is a paper by Don Howard.
It's a paper in Physics Today from 2005.
It's called Albert Einstein as a Philosopher of Science.
And it's all about just how much philosophy Einstein was
doing. He'd read Kant's Three Critiques by age 16.
Wow.
And...
That takes 16 years.
I mean, he did what he did, right? He was super interested in philosophy. When he went
to university, there were mandatory courses in philosophy of science at the university.
And then after he left university, he ran a philosophy of science reading club. He was
deeply immersed in Spinoza. He was very inspired by Ernst Mach. He even wrote an obituary for
Ernst Mach in which he talks about the importance of epistemology.
He says, the ablest students,
the ones who are most independent of thought
are the ones who took epistemology the most seriously.
Like this is 1916 when Mach died.
And he was a huge fan of Schopenhauer also.
And so was Schrodinger.
I mean, they all, you know,
Schrodinger's cat paper has sections with words
like epistemology in them, like in a physics paper, right?
You know, and Bohr was philosophical.
I mean, they were all like, so,
and I find this kind of striking
because there's this attitude today that like,
who needs philosophy?
But the people who gave us the biggest revolutions
in modern physics, quantum theory and relativity, were all either deeply strongly connected
to philosophers or they were interacting with philosophers or they were philosophers or
they were at least very thoroughly trained in philosophy. If you were trying to create
new breakthroughs in physics, wouldn't you take that as an example?
Now look, I understand that there are a lot of philosophers who are not trained in physics,
and people who are not trained in physics sometimes say things that are not very helpful for physics,
but the answer to this is very simple.
We need people who are thoroughly trained in physics and thoroughly trained in philosophy
so that they don't say nonsense things, so they actually
make meaningful contributions and help bring these two disciplines together.
Because I think that would fertilize the soil, so to speak.
I mean, so much of modern physics has its roots in ideas that were generated during
that incredibly formative period in the early half of the 20th century, the first half of
the 20th century.
And we're facing a lot of very deep problems today.
You know, Einstein, I think in a paper, I think it's mentioned in the Howard paper,
in 1936, Einstein wrote a paper called Physics and Reality, in which he talked about how we're
living in a time, in 1936, when there's so much we don't understand about nature and there's
so many deep questions we have.
And this is exactly the time you need philosophy.
You need to be steeped in philosophy and think very philosophically about things, carefully
and rigorously using all these tools that I called philosophical physics, you know,
to avoid falling prey to slogans, to avoid falling prey to groupthink, to being able
to have an independence of mind.
One of the most famous things that Einstein wrote about philosophy is a letter he wrote
to a philosopher of science named Robert Thornton in 1944.
In which he said that when he meets many scientists of his time, he feels like he's
meeting someone who's seen many trees, but has never seen a forest.
And that a thorough training in philosophy gives people the kind of independence of mind
that distinguishes, you know, a mere artisan from a real pursuer of truth.
I think that makes a lot of sense.
And in my own experience, having taught courses here for a long time, the students who were
most philosophically curious, many of whom took philosophy courses, upper level philosophy
courses, often made the strongest physicists.
They were very careful in how they formulated things.
They knew the difference between a deductive argument and an inductive argument and one that was neither.
So here's the thing that happens frequently.
So on the one hand, if you've got rigorous, credible premises and you follow them in a rigorous, logical, mathematical way to a
conclusion, so you've got a deductive argument, you've got a math proof.
We like those.
Science often doesn't take that form.
Sometimes you prove theorems in science, but a lot of the time we're making inductive arguments.
We start with credible premises, we call it the evidence, and then we use arguments that
are more or less rigorous.
They can't be maybe perfectly rigorous, and we arrive at a conclusion that is, strictly
speaking, stronger than the premises, something that is not necessarily entailed by the premises,
but is strongly supported by the premises.
We assign some evidentiary credence or probability.
We say, I'm this certain that my conclusions are true,
given the premises, which are very credible
and rigorous and reliable and my reasoning.
That's an inductive argument.
And in a lot of science, there's inductive arguments.
And then what we do is we take the conclusions,
and usually they're like a theory or prediction,
and we go out and measure them, and we get confirmation,
and we feel stronger that we've done a good job.
But there are a lot of arguments in some areas of physics
that are neither deductive or inductive,
where the premises are themselves wild speculations.
And you take these wild speculations,
and then you make arguments that are themselves wildly speculative,
and you arrive at conclusions that are speculations on top of speculations
on top of speculations.
know how to follow those arguments. And I really don't think that someone who's like
thoroughly trained in philosophy is going to find those sorts of arguments very credible.
And I think especially at a time when in certain areas of physics we are very limited in our
experimental data and we're trying very hard to be very careful in our reasoning, this
is exactly the time when you want the kind of rigorous scrutinizing that you get from
a training in philosophy.
So again, this is like a second appeal.
For anyone who wants to make a big impact on physics, I think in terms of cost-benefit
ratio,
you know, supporting people who do this kind of work I think would be particularly important.
Okay, so you said independence of thought is what philosophy trains you to have.
Also precision of thought. So Scott Aaronson is someone that we brought up earlier, or that you brought up.
And Will Hahn here, also, you can't see him, but he's behind the cameras, a professor at Florida Atlantic University who helped put on this event called MindFest. And Scott
Aaronson was there, and Scott Aaronson said, okay, well, he didn't say this, but the implication
was that most physicists, most scientists, think of philosophers as just engaging in this ill-defined,
unfalsifiable nonsense that's incoherent. And he was saying, when he was speaking,
he never had to be so precise in his speech
as he was around a philosopher.
Yeah, philosophy seminars.
So there are many people maybe who are watching
who've never been to a philosophy seminar before.
Maybe many people watching haven't been to a physics seminar.
That's okay.
I mean, people come to this.
I mean, you've got people who are brand new.
Maybe they're students, maybe they're in secondary school,
maybe earlier than secondary school, I don't know.
Or maybe they just got into different fields,
or maybe they're not in academia, and that's all great.
I mean, we're all contributing in some way to the world, hopefully.
But there are many people who maybe have been to physics seminars
and don't know what a philosophy seminar is like.
And I'm going to tell you, when you go to a philosophy seminar,
especially in philosophy of science or philosophy of physics or analytic philosophy,
the level of precision in your language, I mean, you have to be...
So the way they usually work is, so I...
Physics seminars usually work like this.
There's a speaker who talks for 45 minutes, maybe an hour,
depends on the length of the seminar.
People often interrupt, like in the middle of the seminar,
they'll just, they ask questions, they just sort of interrupt in the middle.
And someone who's maybe not very practiced,
who's giving the talk may get derailed,
and this can sometimes lead to problems,
but hopefully everyone's respectful and the questions,
if there are any, are kept brief.
And then at the very end of the seminar,
they all thank the speaker, they clap,
they thank the speaker, and then they say,
all right, we've got five minutes for questions,
any questions?
And, you know, most people don't, I mean, they can't,
I mean, there's just no time to really ask
a substantive question.
A lot of people feel very nervous because they're like,
oh, maybe there's a more senior person here,
a professor who wants to ask a question
and I don't want to interfere.
Or if there's only gonna be like time for one question,
I don't want to ask a question
that's gonna be like a bad question.
So people like often feel very nervous about asking questions
and there isn't really a good substantive dialogue.
In philosophy seminars, what often happens is, you have a talk, half hour, 45 minutes,
sometimes they go an hour, then there's a break.
People go and they take five minutes, they get refreshments, they come back, and then
there's like an hour of discussion, right?
And the discussion is often the most interesting part of the seminar, right?
And because there's like an hour, everybody asks questions, the students ask questions,
because no one is afraid they're asking the one question.
No one's afraid they're gonna look bad
because there's gonna be 10 more questions
after their question.
And people can really have a substantive discussion
and dialogue.
So that's great.
And I actually really like the culture of these seminars
because they're very welcoming.
A lot of times they'll say that students get to go first
because they really wanna prioritize students
asking questions.
But I will tell you, the expectation of the precision of your language is high, right?
If you say something that is not very carefully stated,
people will immediately say, I'm sorry, that's too vague,
and you have to, like, precisify that.
So I think there's this attitude, I think, among some, you know, some scientists,
I wouldn't say there's a lot of scientists who are very familiar with this, but I think some who think that philosophers are just making stuff up
and we're very, very vague and we're just sort of, but if you've been to a philosophy
seminar, it's quite the opposite because you don't have the ability to rely on empirical
data.
You can't just say, well, I don't need to explain it, just look at the data.
You rely on the strength of your logical reasoning in a very significant way.
And I think that could be brought to bear in how we do physics more than we currently do it.
I don't know if you saw some of the lectures on TOE.
I hope to bring some of that to it.
So, like, with Yang-Hui He, he was going on,
and he's like, I like this because, Kurt,
normally I have 45 minutes, but here I have two hours.
And so you can ask me questions
and we can get to all of it plus questions.
Well, I know you're sitting there waiting to catch me whenever I say something that's
not sufficiently sharp and rigorous.
And so this reminds me a lot more of the philosophy seminars that I go to.
And that's really nice.
And look, there are many things I think that your program is doing, right?
I mean, you're informing audiences of interesting ideas from all directions, you're inspiring, but I think you're also
modeling a kind of dialogue that I think we need more of in academia, in science and philosophy
generally speaking.
Holding people accountable, having extended discussion, really getting to the questions,
really getting to the deep parts of people's ideas, not letting people
get away with slogans.
You know, and physics has a lot of slogans that people just, they hear someone very prominent
say and then they just repeat it, right?
I mean, like, really like taking this apart.
And this is just not like a general public service announcement because I know there
are a lot of people who are watching who are not scientists yet or maybe ever, maybe this
is just not the direction they want to pursue.
And maybe this is just some advice, right?
Obviously watching your podcast series is a great move, that's great.
But you know, people will sometimes tell folks,
well, you really can't contribute to science or philosophy or whatever
until you've done the requisite training.
You need to do an undergraduate program.
You need to read books.
You need to take coursework.
You need to get graded on them.
You need to do exams.
You need to actually learn all these techniques.
You need to learn how to do this.
In physics, it means learning Newtonian mechanics,
learning electromagnetism, learning thermodynamics,
learning quantum mechanics,
depending on what direction you go in.
You may learn astronomy or biophysics
or computational physics or high energy
or whatever it is that you will learn.
But like really do all of that.
Do all the problem sets, learn everything, take the time.
This is a years long process of rigorous training.
And if you wanna make contributions at the research level,
you'll most likely have to do some kind of graduate work
like a PhD program.
And many people don't know that, by the way,
that PhD programs,
at least in the United States in the sciences, are funded.
You don't pay to go to graduate school for a PhD.
It's important to know that. When I was, like, learning about science,
I thought you had to pay to do a PhD program, and I thought that was important.
But you actually get paid to be in a PhD program in the sciences.
It's important for people to know.
And a lot of people look at that, and they think it's insurmountable.
They're like, but I have an idea. I have a great idea.
I just want to contact a physicist and tell them my idea
and have them work on it.
And what you say is,
you really, really need to do the training first.
Cause there's just so much you have to learn.
It's like Picasso was this amazing artist,
but he had to master traditional techniques
before he could go and break them.
And I know a lot of people will say,
but if I learn all the techniques
and I spent all the years learning all of this,
won't I just become like everybody else?
Won't I imbibe all the same conventional wisdom
and all the slogans?
That is a risk.
So what I would tell people,
if you want to embark on this journey,
and it is an amazing journey,
I mean, learning physics has been,
it's like, it's incredible.
If you love physics and you spend years learning it,
it's the most incredible thing you can imagine.
You do have to work a little bit not to get absorbed into sort of calcified conventional wisdom.
So while you're learning it, you just have to remind yourself every once in a while...
Of your original ideas?
Of your original ideas, sure. Sometimes you discover that the ideas will continue to work.
Sometimes you discover that they don't work, that's fine.
But you also just, every time you learn to do a technique, you learn how to calculate something, you learn
how to calculate the energy eigenvalues of a one-dimensional quantum mechanical system,
or you learn how to calculate scattering amplitudes in a quantum field theory or whatever, you
want to separate out the methodologies of learning, like how do I calculate something,
how do I do something, how do I model something, from the ideology? Cause people will say, they'll say,
well, you said you calculate it.
And what this means is that what's going on in nature is,
and that's the moment you wanna be like,
whoa, hang on a second, hang on a second.
I follow the calculation, it's empirical, whatever,
but you've now transcended this particular methodology
and now you're making a substantive statement
about metaphysics, about something that's out there.
That requires some rigorous scrutiny. That's where you require a little bit of skepticism.
And maintaining a foot in that sort of skeptical sphere as you go through, I think, is the best way to do it.
But that's just what it is to be a good philosopher.
Like, a good philosopher will see statements, sometimes very
provocative, sometimes very ambitious statements, maybe over ambitious
statements, from any quarter, it could be from a scientist, and say, hang on a
second, wait a second, not so fast, let's make sure that this extrapolation that
you're making here is really rigorous and really makes sense. And I'll give you an example.
So here's a concrete example.
This is all very sort of abstract.
Let me bring it down to earth.
This is a little bit of a sensitive topic, right?
Okay.
One of my favorite, like one of my idols in physics
is David Griffiths.
Many people in physics have read David Griffiths' books.
David Griffiths is legendary.
And I think the world of him.
I learned physics from his books.
I learned particle physics the first time from his book,
Introduction to Elementary Particles,
quantum mechanics I learned from his quantum mechanics book,
electromagnetism I learned from his electromagnetism book.
He has rightly earned the status of a legendary figure in physics.
He's had a bigger impact on physics than almost anybody.
Right.
And I like his writing for the most part,
but he does have a kind of tone sometimes when he writes
that's very dismissive.
He'll sometimes just dismiss things.
And because he's actually quite honest,
he'll often have a footnote where he'll say,
I'm being too strong, I shouldn't dismiss this.
Really, it's like this.
But a lot of people don't read the footnotes
or they get confused about them.
And I'll give you a very concrete example.
In his quantum mechanics book, he says in the beginning,
and so many students have read this,
he says there are three ways to think about quantum mechanics.
There's the realist way to think about quantum mechanics, which is just to say that before
we do a measurement, the thing we're measuring, it just exists.
Now, we've already talked about how that's not really
a nuanced enough view.
There's the realist view.
The thing we're about to measure was already there.
The second is the orthodox view, which is the thing we're measuring wasn't there.
There was no pre-existing property.
The thing we were measuring didn't exist before we measured it.
Particles are nowhere until we measure them.
And then the third possibility is the agnostic position.
I'm not going to try to answer.
He says these are the three positions.
And this really like steamrolls a lot of nuance in this discussion.
And then he says, for a while, there were partisans of all three of these views.
And then not long ago, but that means 1964, which is actually a long time ago, a guy named
John Bell came along and proved a theorem and the theorem eliminated the agnostic position
as being possible and turned it into an experimental question
whether the realist position was true
or the orthodox position.
Experiments have now confirmed
that the orthodox position is correct.
And that's it.
He just says, that's it.
And then you like read the footnote.
In the footnote, he's like,
well, that's too strong a statement.
There are actually other theories that, you know,
but the words he uses are: they tend to be
cumbersome and implausible, but never mind.
That particular phrasing is kind of funny because I worked on a project a couple of years ago
on whether magnetic forces could do work on particles.
It turns out they can if particles have intrinsic magnetic dipole moments.
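(For orientation, a minimal sketch of the point at issue, not a summary of the paper itself: the Lorentz force on a point charge, $\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}$, is always perpendicular to the velocity, so it does no work. But the force on an intrinsic magnetic dipole moment $\mathbf{m}$ in a field $\mathbf{B}$ is $\mathbf{F} = \nabla(\mathbf{m} \cdot \mathbf{B})$, which need not be perpendicular to the velocity, so the work $W = \int \mathbf{F} \cdot d\boldsymbol{\ell}$ along the particle's trajectory can be nonzero.)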
And Griffiths updated the fifth edition of his book on electromagnetism and included
a footnote mentioning this paper.
So I have a footnote.
But he says theories like the thing I wrote down, he says they tend to be cumbersome and
implausible, but never mind.
He uses exactly the same language for these things as he does for all these other formulations
of quantum mechanics.
And given that these other formulations of quantum mechanics, I think, are good, and
people should work on them, they do tell us something, I think, now I feel like I'm in
good company, right?
We'll come back to some of that magnetic dipole stuff maybe a little bit later.
Now what's strange about this is that at the end of Griffiths's book, he has a beautiful
afterword.
It's the final chapter, chapter 12, where he goes through Bell's theorem in detail,
which is not a thing you commonly find in introductory quantum mechanics books.
And he has a lovely treatment of Bell's 1964 theorem.
It's the earliest version of Bell's theorem.
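(For reference, a compact statement of that 1964 result, given here as an aside rather than as part of the book's treatment: for any local hidden-variable model of the spin singlet state, with correlation function $E(\mathbf{a}, \mathbf{b})$ for detector settings $\mathbf{a}$ and $\mathbf{b}$, Bell's original inequality reads
$$1 + E(\mathbf{b}, \mathbf{c}) \;\geq\; \left| E(\mathbf{a}, \mathbf{b}) - E(\mathbf{a}, \mathbf{c}) \right|,$$
whereas quantum mechanics predicts $E(\mathbf{a}, \mathbf{b}) = -\mathbf{a} \cdot \mathbf{b}$, which violates the inequality for suitable choices of settings.)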
He does say a couple of things
that are a little bit dodgy there, but his treatment is actually really very good. And
he does make mention of some of these other approaches, but it's like at the end of the
book, a lot of students don't see that part of the book. And I just worry that students
reading this book will take the things he says in the main text, not the footnotes,
not the qualifications, not the other stuff that happened later, and they'll just repeat them. They'll just say, oh, I was wrong, realism is wrong,
David Griffiths has said that the orthodox approach is the only approach
and there's no point in pursuing this further.
And any other attempts to pursue anything else are just pursuing theories
that are cumbersome and implausible, but never mind.
That's the kind of thing that someone who's new in physics needs to be on the lookout for.
You see statements like that that are really now metaphysical statements, and you need
to be like, well, hang on a second.
That doesn't quite follow from the methodologies that we're doing.
And I think it behooves someone writing a textbook, someone teaching physics, to be
as clear and careful as they can be about when they've stopped presenting something
that's calculational, methodological, model development, and they've now moved into
this is the way nature is as a result. We've learned that there is no fact of the matter
about anything we're measuring, that particles don't have anything before they're measured,
that there's no way to fix quantum mechanics. These are statements that are not
supported by what we have, and I think people who are interested in doing a serious study of physics should know at
the beginning to be on the lookout for those kinds of statements.
I think that would be the piece of advice I would share.
Jacob, it's been a pleasure.
Three hours and we're not even halfway through.
Yes, Curt, it turns out when you want to reformulate foundational theory in physics, it takes time.
It doesn't happen all at once.
Well it takes chunks of time.
Chunks of time and it's indivisible, but it looks like we have a good division event
coming up.
Okay, great.
Okay, so we're going to take a division event now, which means that you'll get part two
of this conversation, which is the third time that I'm speaking to Jacob in total.
The first time I spoke to Jacob is on screen.
The second time is right here.
The third part will come out in a couple weeks from now.
And what we'll talk about are the Bell experiments or the Bell inequalities.
We're going to get to other questions people had.
People had questions about entanglement.
People had questions about causation in this approach.
And there were also questions about what relationship this approach has to problems in statistical
mechanics and to some of the other interpretations and formulations of quantum mechanics.
Right.
Also wave-particle duality.
What does that mean in this approach and also just traditionally?
Right.
Okay.
See you next time.
Yeah.
Definitely subscribe and ensure that you watch part two as we talk about what are the misconceptions
of the wave-particle duality.
Also, what are challenges of applying indivisible stochastic processes to quantum field theory?
Is gravity indeed quantum?
What about stochastic general relativity?
What about the misinterpretations of nonlocality and Bell's theorem?
Jacob also gives a new perspective on entanglement without wave function collapse and talks about the difficulty in defining causation at a fundamental
level. We also talk about the philosophy of probability and what the origin of probability
is in statistical mechanics. And of course, Jacob has his critiques of the many worlds
interpretation and then goes over open questions and future research in indivisible stochastic
processes. You do not want to miss this.
It's a banger episode.
Subscribe to get notified.
New update!
Started a Substack.
Writings on there are currently about language and ill-defined concepts as well as some other
mathematical details.
Much more being written there.
This is content that isn't anywhere else.
It's not on theories of everything.
It's not on Patreon. Also, full transcripts will be placed there at some point in the
future. Several people have asked me, hey Curt, you've spoken to so many people in the fields of
theoretical physics, philosophy, and consciousness. What are your thoughts? While I remain impartial
in interviews, this Substack is a way to peer into my present deliberations on these topics.
Also, thank you to our partner, The Economist.
Firstly, thank you for watching, thank you for listening. If you haven't subscribed or clicked that like button, now is the time to do so.
Why? Because each subscribe, each like helps YouTube push this content to more people like yourself,
plus it helps out Curt directly, aka me.
I also found out last year that external links count plenty toward the algorithm, which means
that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows
YouTube, hey, people are talking about this content outside of YouTube, which in turn
greatly aids the distribution on YouTube.
Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything
where people explicate TOEs, they disagree respectfully about theories, and build as
a community our own TOE.
Links to both are in the description.
Fourthly, you should know this podcast is on iTunes, it's on Spotify,
it's on all of the audio platforms. All you have to do is type in theories of everything
and you'll find it. Personally, I gain from rewatching lectures and podcasts. I also read
in the comments that hey, TOE listeners also gain from replaying. So how about instead
you re-listen on those platforms like iTunes, Spotify, Google Podcasts, whichever podcast
catcher you use.
And finally, if you'd like to support more conversations like this, more content like
this, then do consider visiting patreon.com slash Curt Jaimungal and donating with whatever
you like. There's also PayPal, there's also crypto, there's also just joining on YouTube.
Again, keep in mind, it's support from the sponsors and you that allows me to work on TOE full time.
You also get early access to ad-free episodes, whether it's audio or video.
It's audio in the case of Patreon, video in the case of YouTube.
For instance, this episode that you're listening to right now was released a few days earlier.
Every dollar helps far more than you think.
Either way, your viewership is generosity enough. Thank you so much.