Theories of Everything with Curt Jaimungal - The 300-Year-Old Physics Mistake No One Noticed
Episode Date: June 27, 2025

As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe

Professor John Norton has spent decades dismantling the hidden assumptions in physics, from Newton's determinism to the myth of Landauer's Principle. In this episode, he explains why causation may not be real, how classical physics breaks down, and why even Einstein got some things wrong. If you're ready to rethink the foundations of science, this one's essential.

Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Listen on Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e

Timestamps:
00:00 Introduction
03:37 Norton's Dome Explained
06:30 The Misunderstanding of Determinism
09:31 Thermodynamics and Infinite Systems
14:39 Implications for Quantum Mechanics
16:20 Revisiting Causation
18:15 Critique of Causal Metaphysics
20:21 The Utility of Causal Language
24:58 Exploring Thought Experiments
33:05 Landauer's Principle Discussion
49:48 Critique of Experimental Validation
52:25 Consequences for Maxwell's Demon
1:13:34 Einstein's Critiques of Quantum Mechanics
1:28:16 The Nature of Scientific Discovery
1:42:56 Inductive Inferences in Science

Links Mentioned:
• A Primer on Determinism (book): https://amzn.to/45Jn3b4
• John Norton's papers: https://scholar.google.com/citations?user=UDteMFoAAAAJ
• Causation as Folk Science (paper): https://sites.pitt.edu/~jdnorton/papers/003004.pdf
• Lipschitz continuity (wiki): https://en.wikipedia.org/wiki/Lipschitz_continuity
• The Dome: An Unexpectedly Simple Failure of Determinism (paper): https://philsci-archive.pitt.edu/2943/1/Norton.pdf
• Norton's Dome (wiki): https://en.wikipedia.org/wiki/Norton%27s_dome
• Approximation and Idealization (paper): https://sites.pitt.edu/~jdnorton/papers/Ideal_Approx_final.pdf
• On the Quantum Theory of Radiation (paper): https://www.informationphilosopher.com/solutions/scientists/einstein/1917_Radiation.pdf
• Making Things Happen (book): https://ccc.inaoep.mx/~esucar/Clases-mgc/Making-Things-Happen-A-Theory-of-Causal-Explanation.pdf
• Causation in Physics (wiki): https://plato.stanford.edu/entries/causation-physics/
• Laboratory of the Mind (paper): https://www.academia.edu/2644953/REVIEW_James_R_Brown_Laboratory_of_the_Mind
• Roger Penrose on TOE: https://youtu.be/sGm505TFMbU
• Ted Jacobson on TOE: https://youtu.be/3mhctWlXyV8
• The Thermodynamics of Computation (paper): https://sites.cc.gatech.edu/computing/nano/documents/Bennett%20-%20The%20Thermodynamics%20Of%20Computation.pdf
• What's Actually Possible? (article): https://curtjaimungal.substack.com/p/the-unexamined-in-principle
• On a Decrease of Entropy in a Thermodynamic System (paper): https://fab.cba.mit.edu/classes/862.22/notes/computation/Szilard-1929.pdf
• Landauer's principle and thermodynamics (article): https://www.nature.com/articles/nature10872
• The Logical Inconsistency of Old Quantum Theory of Black Body Radiation (paper): https://sites.pitt.edu/~jdnorton/papers/Inconsistency_OQT.pdf

SUPPORT:
- Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join
- Support me on Patreon: https://patreon.com/curtjaimungal
- Support me on Crypto: https://commerce.coinbase.com/checkout/de803625-87d3-4300-ab6d-85d4258834a9
- Support me on PayPal: https://www.paypal.com/donate?hosted_button_id=XUBHNMFXUX5S4

SOCIALS:
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs
Transcript
That it produced so much fuss was somehow shocking.
This literature has been teetering on the edge of nonsense for a hundred years.
Professor John Norton of the University of Pittsburgh has spent decades systematically dismantling sacred assumptions of physics.
Norton's dome, for instance, demonstrates fundamental indeterminism in Newtonian physics itself.
Now, you may be thinking of quantum uncertainty, but I'm talking about classical physics,
which breaks down in terms of unique predictability.
Beyond determinism, Norton critiques the notion of causation itself.
Physicists routinely invoke causal language, but what if causation isn't fundamental?
Even further, Norton's critique extends to thermodynamics.
Landauer's principle, for instance, has guided decades of research into computing limits, and some even use it as the physical basis of Wheeler's it-from-bit.
Norton demonstrates this principle misunderstands thermodynamics and entropy, both of which we talk about in extensive detail.
We then cap it off with Einstein's contributions to old quantum theory and Einstein's disagreements
with the new quantum theory.
A special thank you to The Economist for sponsoring this video.
I thought that The Economist was just something CEOs read to stay up to date on world trends,
and that's true.
However, that's not only true.
What I've found more than useful is their coverage of math, of physics, of philosophy,
of AI, especially how
something is perceived by countries and how it impacts markets. Among weekly global affairs
magazines, The Economist is praised for its non-partisan reporting and being fact-driven.
This is something that's extremely important to me. It's something that I appreciate. I personally
love their coverage of other topics that aren't just news based as well.
For instance, the Economist had an interview with some of the people behind DeepSeek the
week DeepSeek launched.
No one else had that.
The Economist has a fantastic article on the recent DESI dark energy data, and it surpasses,
in my opinion, Scientific American's coverage.
The Economist's commitment to rigorous journalism means that you get a clear picture of the
world's most significant developments.
It covers culture, finance, economics, business, international affairs, Britain, Europe, the
Middle East, Africa, China, Asia, the Americas, and yes, the USA.
Whether it's the latest in scientific innovation or the shifting landscape of global politics,
The Economist provides comprehensive coverage that goes beyond the headlines.
If you're passionate about expanding your knowledge and gaining a deeper understanding
of the forces that shape our world, I highly recommend subscribing to The Economist.
It's an investment into your intellectual growth, one that you won't regret.
I don't regret it.
As a listener of TOE, you get a special 20% off discount.
Now you can enjoy The Economist and all it has to offer for less.
Head over to their website, www.economist.com/toe, to get started.
Make sure to use that link, economist.com/toe, to get that discount.
Thanks for tuning in.
And now back to the exploration of the mysteries of the universe with John Norton.
Alright, Professor John Norton, you're a legend in the physics scene and the philosophy of
physics scene.
So it's an honor to be with you here.
Oh, thank you very much.
It's very kind of you.
You're known for Norton's Dome, for indeterminism, for systematizing the material theory of induction,
your views on thought experiments, the history of Einstein, and disproving Landauer's Principle.
We'll attempt to get to all of these today.
Now, before we get to these, let's pick one. Norton's Dome.
Why don't you tell me, how did you arrive at that construction?
What were you trying to show?
Were you trying to be contentious?
Were you trying to disprove a colleague?
Did something just not make sense?
Like walk me through leading to Norton's Dome.
It was actually completely trivial, and that it produced so much fuss was somehow shocking.
So here's the background.
In the late 1980s, my colleague John Earman wrote a book, A Primer on Determinism, in
which he pointed out that indeterminism was actually rampant throughout physics.
And one of the places where it's quite rampant is in Newtonian physics when you have
systems with infinitely many degrees of freedom.
So if you have infinitely many masses bouncing around in various ways, their behavior is
going to be generically indeterministic.
So John and I were teaching a graduate seminar on causation and determinism. And I think that afternoon or the next day, I was committed to giving a session on determinism. And I was going to present the idea that Newtonian physics is generically indeterministic when you have infinitely many degrees of freedom. Well, what about the case of finitely many degrees of freedom? I was going to say, well, when you only have finitely many degrees of freedom, then you just always get determinism, everything's fine.
And then I thought, I'll be saying that in front of a bunch of smart graduate students.
You know what's going to happen next. So I said, I better have a look to see if there are
counterexamples. So, you know, a Lipschitz condition guarantees unique solutions for differential equations.
I looked up standard counter examples
to a Lipschitz condition.
I took one of those standard counter examples
and said, how do you realize it physically?
And the answer was quite simple.
You have this dome shape, a very particular shape.
You would put a mass point at the top that can move frictionlessly.
And the conditions violate the Lipschitz condition, and so the particle can spontaneously set itself into motion.
And the mathematics is very simple, it's two or three lines, and there it is.
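For reference, the "two or three lines" run roughly as follows, following the dome paper linked in the show notes (a sketch; g is the gravitational acceleration and r is the radial arc length from the apex):

```latex
% Dome shape: height h fallen below the apex, as a function of
% radial arc length r measured along the surface from the apex:
h(r) = \frac{2}{3g}\, r^{3/2}
% Newton's second law for a unit mass sliding frictionlessly on
% this surface reduces to
\ddot{r} = \sqrt{r}
% which has the everywhere-at-rest solution r(t) = 0, but also, for
% ANY excitation time T,
r(t) = \begin{cases} 0, & t \le T \\[2pt] \tfrac{1}{144}\,(t - T)^4, & t > T \end{cases}
% Both satisfy r(0) = 0 and \dot{r}(0) = 0: the mass may sit at the
% apex forever, or spontaneously slide off at an arbitrary time T.
```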
So I used that in teaching, the students didn't seem terribly impressed.
I was writing a paper on causation at the time, and I wanted to point out that
the idea that Newtonian physics has always been
deterministic was actually a mistake because
the theory itself is not intrinsically deterministic,
so I included the dome in section three.
Almost immediately, I started getting emails from
people correcting my mistake and I realized,
oh, there's something more going on here.
That's the story.
So what's the something more that's going on?
What's going on is that the idea that Newtonian physics is deterministic is so deeply entrenched in the psyche of many physicists that it somehow seems that I'm some sort of apostate if I would be saying anything otherwise, that I must have made a mistake, and they have an obligation to discover what the mistake is.
That was the character of the response that I was getting.
They weren't hostile.
They were all very friendly, but friendly of the form of,
Dear Professor Norton, I saw your analysis of this dome.
I just want to point out you're making a terrible mistake here.
And then something follows, which never works.
Let's make this clear for people.
So there are different types of continuity.
Usually we'll say that a function is continuous, but there are various types, like absolute continuity, uniform continuity, and then there's Lipschitz continuity, which is used in ODE classes to show that there are unique solutions.
Now, if you remove this Lipschitz continuity condition, then you can get non-unique solutions. So, multiple solutions. And I'm not sure, I believe Lipschitz is necessary, but not sufficient, for uniqueness?
I think it's sufficient, but not necessary. But if you find very simple systems that violate the condition of Lipschitz continuity, then, I mean, the mathematics of the dome is just a very simple example.
If you have the first derivative of a function that varies with the square root of the function, that already violates the Lipschitz condition at the origin, when the function has zero value.
I mean, it's as simple as that.
And that's the example instantiated in the dome.
I went to the second derivative so I could use Newton's F = ma, but I think it already happens with the first derivative.
Just dx/dt equals the square root of x, and solve that when x is equal to zero; you already have non-unique solutions.
I think, going from memory, that works.
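A quick check of that from-memory claim (a worked line added here, not in the audio):

```latex
% The initial value problem
\dot{x} = \sqrt{x}, \qquad x(0) = 0
% has at least two solutions,
x(t) = 0 \qquad\text{and}\qquad x(t) = \tfrac{1}{4}\,t^2 ,
% since \tfrac{d}{dt}\bigl(\tfrac{1}{4}t^2\bigr) = \tfrac{t}{2} = \sqrt{\tfrac{1}{4}t^2}.
% Lipschitz continuity fails at x = 0: the slope of \sqrt{x},
% namely 1/(2\sqrt{x}), is unbounded there.
```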
Okay. So then what are people supposed to
imagine as a consequence of this?
Are you saying that Newtonian physics thus needs to
assume Lipschitz in order to prove this uniqueness?
And thus, if you're trying to
say Newtonian physics is deterministic,
you're already inserting that determinism.
You're not concluding that it's deterministic.
That's exactly right. Whether a particular Newtonian system is deterministic or not
is something to be discovered, not stipulated. And I'll mention again the important case. If
you have infinitely many systems interacting, then you get indeterminism generically.
Why does this matter?
Well, it's going to matter in
the infinite case when you look at
something like the thermodynamic limit.
So this is a case that I've calculated.
We'd like to think of a very simple Newtonian model for a crystal: it consists of a whole bunch of mass points that are connected together by springs, and they're thermally agitated, and so they're wobbling about.
Now, the idea is that as the number of mass points gets larger,
as this lattice gets larger and larger,
its behavior becomes closer and closer to
a system that is going to
behave thermodynamically in the ways that we expect.
The Boltzmann distribution is going to come out, and so on.
But you need to look at a very large lattice.
So it's standard to say if you take the infinite limit,
that's when you get thermodynamics back.
Well, you have to be very careful about how you take that infinite limit.
If taking the infinite limit just means I will consider
crystal lattices
of arbitrarily large size, always finite but arbitrarily large in size, then the sequence
of lattices that you're considering will eventually stabilize out to have nice thermodynamic properties.
But if you mean I'm going to consider an infinite lattice and then investigate its properties,
you'll discover that the lattice dynamics have become indeterministic.
I've not kept this secret.
It's in a paper I wrote, published in 2011, 2012, called Approximation and Idealization.
That's one of the main points of the paper.
It just says, be careful taking infinite limits.
You can really get into trouble.
There are other types of continuity as well.
The underlying space is continuous.
The function itself is continuous and
the function operates on a domain,
and that domain is space-time.
Now, I know we're dealing in Newtonian physics,
so maybe not space-time, but it doesn't matter.
We say some manifold.
Now, is it your contention that the manifold itself
is also going to ultimately be discontinuous?
Do you have an intuition there, that it's going to be discretized, or do you think that you can zoom in all the way and it looks like Rn?
Well, in the example of the dome, the dome surface is an ordinary Euclidean surface. And it does have a curvature singularity at the apex. But curvature singularities at a point are nothing extraordinary in idealized Newtonian systems.
Think about the sharp edge of a tabletop.
There's the horizontal and the vertical, and they meet, and we don't have any trouble shooting a particle across the horizontal surface.
It then comes to the curvature singularity at the edge and then shoots off in a parabolic arc.
It's the standard sort of idealization that we talk about.
The singularity at the sharp edge of the tabletop is one order worse than the singularity at the apex of the dome.
At the apex of the dome, it's a singularity in the curvature.
The singularity at the sharp edge of a tabletop is a singularity in the first derivative, in the tangents.
Right?
You know, the tangents jump discontinuously when you go over the edge.
So many ordinary Newtonian systems are deterministic, right?
And we're entirely used to that.
They always work out that way.
Is it so surprising that if we go to extreme cases that we don't normally look at in ordinary
life that we end up with something a little different?
The case of the dome is not something we could ever realize in real life because it requires
multiple violations of quantum mechanics. The mass point has to be located at rest exactly at the apex.
You need to have a surface that has exactly the right properties.
The more interesting case is when you have infinitely many masses.
That's the idealization that people will take more seriously.
Why doesn't the infinitely many masses also contradict quantum mechanics?
Newtonian theory contradicts quantum mechanics and its foundations.
So yes, it does, as does every Newtonian analysis.
So I'm not sure what's worrying you here.
This for me has been the perpetual puzzle.
I think the dome is just a rather ordinary piece of Newtonian physics.
There's nothing very special about it.
It just happens to have this odd property.
But then, some people I talk to just say,
yeah, what's the big deal?
Other people were saying,
there's something deeply troubling about this, and I just don't
know what that is.
So it turns out that some Newtonian systems, in these cases rather exotic ones that could never be realized, are indeterministic.
What are the implications for quantum mechanics and relativity?
I think that was behind one of your remarks.
Right.
Not very much because they're different theories.
Relativity theory and quantum mechanics are very different theories.
They turn out to be indeterministic in their own ways.
In the case of quantum mechanics, the standard interpretation is indeterministic.
I think that's just the beginning of a long discussion.
If you're a Bohmian, you won't think that, but that's another story.
And in order to realize determinism in relativity, you need a Cauchy surface.
You need all the nice conditions.
If you don't have a Cauchy surface, then you can't even state the conditions of determinism, which are that the present fixes the future; but you don't have a present, so you can't have any fixing.
I think if there's a moral, the only moral is the following.
Be careful about what you assume about the world.
Don't go into physics assuming antecedently that you have a wisdom that transcends what
the empirical science will tell you.
If you want to see what goes wrong, if you do that differently, think about what happened
when quantum mechanics appeared in the mid-1920s.
It became very apparent then that the theory was going to be indeterministic.
Up until then, everyone had simply assumed that for a system to be causally well behaved,
it had to be deterministic.
Then quantum mechanics comes along and it's indeterministic and there was this tremendous
outpouring of anxiety.
Causality is lost was the plea.
We would now say determinism.
They then said causality.
But in retrospect, it was simply an artifact of 19th century thinking.
In the 19th century, they had identified causation with determinism.
So for the world to be well ordered causally,
then the world had to be deterministic.
Quantum mechanics says it's not deterministic.
Oh no, we've lost an absolute fundamental.
Well, now you've just learned something new about the world.
So let's talk about this graduate seminar then on causation.
Did anything else controversial come out and what is causation?
Well, as you know, I've written fairly extensively about this.
I have a particular critique of causation.
It is a critique of causal metaphysics.
It is not a critique of causation per se.
To be very clear here,
I am quite comfortable with the idea that things interact with
each other and connect with each other in
all fascinating and interesting ways.
Voltages drive electric currents and gradients of free energy produce thermodynamic
effects and so on and so on and so on all the way through here.
You can go to any science and you'll find all sorts of claims of how this causes that.
My critique is the following.
Causal metaphysics seeks to do something that is antecedent to these empirical investigations.
A causal metaphysician says, we cannot talk about causality
empirically until we have sat down and done
some conceptual work and figured out what causation is.
And once we have done that, then we
understand what causation is.
And then scientists can
come along and do the cleanup operation of figuring out how this causal principle that
they come up with is going to be instantiated in the particular sciences.
So the general run of a causal metaphysician is saying, I know what causation is, it's
this, right?
So I'm just, your job is just to show me how that works in the world.
And that is just a completely failed enterprise. The difficulty is that metaphysicians have not been able to come up with any principle of causation that has any critical content and that also succeeds in the world. We have thousands of years of failure at that particular enterprise.
So I'm rejecting the causal metaphysicians' project completely.
And then, okay, so the question that becomes, well, what is the place of causal talk in
science?
Why is it so pervasive?
So why do scientists care about causation if there's no metaphysics underlying it? It's simply a matter of labeling.
What happens is we notice that there are all sorts of processes that we find comfortable
to describe causally.
So take Einstein's famous A and B coefficient analysis of stimulated emission.
The idea is that if you have an excited atom
in a radiation field where the frequency of radiation
is at the right frequency,
that will stimulate an emission,
it will stimulate the excited atom
to drop back to a lower state.
I would like to say that causes it to do so.
I have no objection as long as you realize
you're just declaring how you intend
to use a word. And so my general claim is that when we have causal talk anywhere in science, it is actually a veiled definition. It is simply someone saying, oh, I find it very convenient, I find it pragmatically useful, to describe this process using the word causation.
What they are not doing, whatever they might believe, is saying, oh, I have discovered the instantiation of some deep metaphysical truth that lies antecedent, prior to any science.
They haven't discovered that at all.
What are the advantages in using causal language
in various places?
It can almost immediately be psychologically helpful.
It's very helpful when I think of Einstein's A and B
coefficient paper to say, oh, so the external radiation field
is stimulating an emission.
It is causing an emission.
And that's how lasers work.
That's the way we think about lasers.
It's certainly very, very helpful.
Otherwise, you just have a bunch of equations
which gives you probabilities of various transitions.
Or in the case of Jim Woodward's interventionist account of causation,
he says that a causal process arises when we have
two variables that are related by some connection,
often probabilistic, but not necessarily if you read his account fairly carefully.
If an intervention on one of them is associated with a change in the other,
then we have a causal relationship.
I just regard that as a definition,
but it's an immensely useful
definition because if you tell me that this causes that, I now know that if I interfere on this,
then I will produce an effect on that. So if you tell me that certain medical interventions
will improve the health of the population, then I've learned something enormously useful.
So if we abandon the idea that causation is somehow fundamental, or refers to something that has an essence, then is there anything in fundamental physics that's lost?
So for instance, is there any theorem in quantum mechanics like Bell's theorem that then loses
its power because Bell's theorem implicitly has a notion of causality in it?
I don't know.
I don't think so, no.
I looked into this.
I did an inventory of all the places where the term causation appears in physics.
And I found that almost invariably, the term causation denoted one of two things.
Either it was talking directly about the fact that we're in a Minkowski spacetime, or at least a spacetime that has a light cone structure.
Right.
And so when we talk about the causal structure of spacetime, we're actually talking about the light cone structure of spacetime.
Or the other one was that propagations of physical processes are confined to lie on or within the light cone.
And that seemed to exhaust almost all of the causal talk that I could find.
I can't swear that I picked up every single case, but that pretty much covered everything.
Notice what you're looking for here.
You're looking for a sense of loss.
You never had it in the first place.
The effort of causal metaphysicians is to do a priori physics.
If they're providing you some kind of empirical fact about the world, they are trying to do
it prior to experience.
If we've learned one thing from thousands of years of scientific investigation, it is that that really doesn't work.
The world is far more creative than our imaginations.
We always get into trouble when we try and guess ahead of time how things have to be.
And if it's a causal metaphysics, there is a striking example of that.
This is not to say that we have lost some sense of how
things connect together. Space-time has a light cone structure, call it
causal structure, that's fine with me. Ordinary propagations, right, are confined to it, that's fine with me. Where's the loss?
So there are other counterfactual accounts of causation. Do you reject those?
Um, no, they're just definitions.
Nothing wrong with that.
This causes that, because if I hadn't done this, that wouldn't have happened.
Fine.
You just told me how you plan to use a word.
Hi, everyone.
Hope you're enjoying today's episode.
If you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with
my personal reflections, you'll find it all on my Substack.
Subscribers get first access to new episodes, new posts as well, behind-the-scenes insights, and the chance to be a part of a thriving community of like-minded pilgrimers.
By joining, you'll directly be supporting my work and helping keep these conversations at the cutting edge.
So click the link on screen here, hit subscribe, and let's keep pushing the boundaries of knowledge together.
Thank you, and enjoy the show.
Just so you know, if you're listening, it's C-U-R-T-J-A-I-M-U-N-G-A-L dot org, curtjaimungal.org.
Okay, so let's get to thought experiments.
Sure.
So what is the standard view on thought experiments,
and then where do you stand on that view?
I don't know that there's a standard view,
but I can tell you there's a long-standing debate.
This goes back to the 1980s, when the literature on thought experiments exploded. There were essentially two extremes in our understanding of thought experiments. One extreme is a completely deflationary view that just says that thought experiments are ordinary argumentation. They don't do anything that ordinary argumentation cannot do. They just do it in a rather pretty and picturesque way.
The other extreme says, no, there's something more going on.
There's some magical power that our capacity to do thought experiments realizes.
And of course, the question is to articulate what that magical power is.
And the clearest articulation came from my colleague,
Jim Brown, at Toronto.
He said, we can understand some thought experiments
to be platonic in character.
A really good thought experiment of just the right type
literally opens the window, right,
onto Plato's heaven, where we can actually see
the laws of nature. He supports that with the experience that we have with the good thought experiment.
There's this wonderful aha moment when suddenly you see it,
and that's the moment of platonic perception. He's wrong, of course.
I've spoken to James, James Robert Brown.
Yeah, yeah.
I've spoken to him. So one of the great thought experiments is this: how is it that we could tell a priori that things of different masses should fall at the same rate?
We can't. Do you mean to tell me that, a priori, Aristotle's account of the motion of bodies was false?
No, that you could have a world in which things, in which you have a force needed to keep things
moving, right?
Well, I'm not articulating my view.
I'm articulating James's view from
when I interviewed him, if I'm recalling correctly. It went something like this.
If heavy objects fall faster, then dropping, say, a heavy bag of marbles, which comprises 300 marbles, next to a single marble means this single marble will fall slower. But then you look: the bag is just filled with many marbles, so those marbles each individually should be falling at a rate similar to, if not equivalent to, this single marble. And then you have a contradiction, so thus they all must fall at the same rate.
That seems powerful, so tell me what your views are on that.
It's very simple.
Why were you convinced by what you just said?
Why were you convinced that the marble, the single marble and the bag of marbles have
to fall at the same rate?
There was an argument there.
Yeah.
That's thought experiment.
It's an argument.
That's all I'm saying.
You just ran an argument.
Yeah.
But I thought thought experiments are arguments, no?
Or am I saying a view that's controversial by saying that?
You're agreeing completely with me.
What you didn't have is any extra piece that Jim would want
where Plato's world of forms somehow entered into things.
You just look, every time someone,
I mean, that exchange that we had now
is what happens all the time.
All right, someone has a thought experiment,
they run through the thought experiment,
I'm listening to them go through the thought experiment,
and I'm hearing, okay, this is just an argument.
It's a very simple straightforward argument.
This, of course, is Galileo's.
You realize that this is not Jim's example; this is one of the great classics in the history of science.
It goes back to Galileo, blah, blah, blah.
But Galileo had, I think,
a musket ball and a cannon ball or something,
and then connected them with a thread.
But you just ran an argument and that's all thought experiments are.
But they're picturesque. I mean, they're compelling because you get this lovely mental picture and so it's easy for you to run through.
But if it's simply purely a picture, I don't think it has any compelling force. There has
to be an argument there. So for example, can I prove the possibility of a perpetual motion machine by imagining one?
I visualize it. It's a big brass gadget. It's got valves, and there's steam coming out, and so on. And oh look, the wheel just keeps spinning and producing endless amounts of power.
Right, just imagining it doesn't do anything.
All right.
There has to be that argument there or you don't have a cogent thought experiment.
And I say that's all that's ever going on.
Jim and I have been at this debate for 40 years now.
It's a little striking for me to say that.
Yeah, because what you're saying sounds sound and ordinary.
So I don't understand what Jim would be objecting to.
Because even with this articulation, my articulation of this argument is an argument.
I'm saying like if there's this, then you have this, then you have that.
There's a contradiction, therefore the premises can't be true.
So it isn't just picturing something.
Now you have it.
You're in the same position as I am.
This is my view.
We talked about the dome earlier on.
I don't understand why people are troubled by it.
I articulate the, you know, it's the argument view of thought experiments, I articulate
it.
And I'm thinking, well, that's kind of obvious.
I wrote this paper, I think in 1986, and I thought, well, this is a bit of a dud of a paper.
You know, I'm just saying something that's so completely obvious.
But then you discover there are all these people who want to take issue with you and you're trying to figure out why. It's completely straightforward.
Well Jim's a friend of yours and you've spoken to him like you've said for decades now.
Oh yeah, yeah, yeah, we get along.
So why don't you tell me what he would say other than you have to connect to a platonic
world. I imagine that's not his sole point. Well, he runs lots of examples.
I think I'd have to refresh my memory on his writing.
I'm a little nervous about trying to channel him.
Jim, if you're watching, I apologize for getting it wrong.
I think the thing for him is
this moment of understanding that somehow seems to
surpass just ordinary argumentation.
He likes doing philosophy of mathematics.
He's got an example where you can
sum numbers one, two, three, four, five,
and you've got a little stack of blocks,
and you can look at the stack of blocks,
and suddenly you see, oh,
it's going to be five times six divided by two.
You can just see the way that works.
You just suddenly see it
instantly without apparently having to think about it.
Those are the sorts of examples he likes.
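Spelled out, the picture being described is that two copies of the staircase of blocks fit together into a 5-by-6 rectangle, so:

```latex
1 + 2 + 3 + 4 + 5 = \frac{5 \times 6}{2} = 15,
\qquad\text{and in general}\qquad
1 + 2 + \cdots + n = \frac{n(n+1)}{2}.
```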
I see.
I just think they're arguments still because I say to him,
well, I didn't see it.
How does it work? Then he explains it to me and he gives me an argument.
Okay. Yeah. This moment of understanding sounds similar to Penrose when he's
articulating the Lucas argument or his version of the Lucas argument with Gödel implying that the mind
isn't computational.
I don't know that argument well enough.
I know the outline of it, but I don't know the details.
Okay, so how about let's get to something that you know inside and out, Landauer's principle.
Why don't you outline what Landauer's principle is and then what your precise statement is
either that Landauer's principle is false or that it needs to be modified.
The argument, or the project, that Landauer had was a very practical one. One of the things that we notice in computing devices is that they always
produce heat. And that heat of course is work that's been degraded.
It is a cost for computation.
It's been a long-standing problem.
We always need to cool down our computing processes.
I don't know if you remember the Cray computers, going back many years, but they would sit, if I recall correctly, in vats of Freon in order to be cooled.
The generation of heat in
a computing device is a big deal.
The question he was addressing is,
how far can we go before we have reached a limit,
beyond which we cannot go any further?
In other words, how far can we reduce
the amount of heat that's being
generated in computing systems.
The calculation can be done in terms of entropy: how much entropy is a computing device creating? If you think of the device as sitting in an isothermal heat bath, then the entropy creation is going to correspond to the heat passed to the environment divided by the temperature of the heat bath. That'll give you a first pass at how much the entropy is.
Now, his argument, as embellished and developed by Charles Bennett, is that the logic of the
process being implemented determines the minimum amount of heat generation. If the process is logically reversible,
something like a bit flip,
then in principle, you can execute that with
minimal heat generation, with minimal entropy creation.
If, however, it is a logically irreversible operation, the classic case being erasure, then necessarily there's going to be a certain amount of heat generation that's going to correspond to the Shannon information that you calculate for the two states.
So if you've got two states, zero and one, probability p and one minus p on the two of those, you calculate the Shannon entropy, multiply by a Boltzmann k, and then you know how much entropy will be created when you erase it.
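For concreteness, the textbook form of that claim looks like this (a sketch added for reference, not spelled out in the audio):

```latex
S_{\text{erasure}} = k\,H(p) = -k\,\bigl[\,p \ln p + (1-p)\ln(1-p)\,\bigr],
\qquad
p = \tfrac{1}{2} \;\;\Rightarrow\;\; S = k \ln 2 \approx 0.69\,k .
```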
Now, what's wrong with that?
What's wrong with that is just a very basic fact of
the thermodynamics of systems at the molecular scale. You cannot do
anything at molecular scales without creating entropy. So something as simple as a bit flip,
you can't flip a bit without having some driving force to push the bit from one state to another.
So very simplest case is you might have a charge and you want to move it from one location
to the other.
You're only going to be able to move it from one location to the other if you have some
kind of electric driving force that will push it.
And what is that driving force working against?
Remember we're at molecular scales and at molecular scales, that individual charge has
its own thermal energy.
It's bouncing around, right?
And so you have to confine it.
And in the process of confining it, you compress it over to one particular part, right?
You're going to be doing work on it.
That work is going to be lost as heat.
This is an extremely general result.
This simply is Boltzmann's S = k log W.
The best you can ever do at any process at molecular scales,
is to have a probability of success of completion.
Boltzmann's W tells you the probability of success of completion,
and the S associated with it tells you how much entropy you're going to have to create.
So if you don't confine the charge very much, right, then it has its own thermal energy and can jump out.
Right.
And so you have a probability that the charge is going to go back to the original state.
But because you didn't confine it very much, you haven't created much entropy. If you confine it a lot, so you really force that charge deeply into some kind of potential well, then you'll have a good probability of success, but there'll be a lot of heat generated, a lot of entropy created.
So the bottom line is the following.
The amount of heat that will be generated in molecular scale processes is not determined
by the logic.
It's simply determined by the number of steps that you want to complete and the probability
of completion that you determine for each step.
Again, this seems so elementary.
I've been arguing this for, I don't know, a dozen years now.
I just don't understand why the Landauer principle talk continues.
If you're interested in the question of, yeah, what's the minimum heat generation that you
can have in any kind of molecular scale process, computational or not, it doesn't matter.
Ask how many steps are there in the process and what's the probability of completion that
I want for each step, and S = k log W will tell you the answer.
It's done.
So, ordinarily in the calculation of Landauer's principle, they use a principle of indifference to put 50-50 odds on the zero and the one, and you're saying that the probabilities need to be physically dynamical?
Yeah. My recollection is that in Landauer's original paper, he talked about computing systems and the frequency with which they might be in different states.
But go ahead.
So let me try and summarize.
If you try and form a lower bound based on logic, well, you shouldn't.
You should look at the precise implementation or the procedure.
If you do this, you'd find that
the minimum should be higher than k log 2.
Yes. Absolutely higher than k log 2.
If you just want a single process,
I've done the calculations,
I can't remember what the exact numbers are now,
but to get a really modest probability of completion,
I don't know, 90% or something,
I can't remember the exact numbers,
you will certainly create more entropy than the k log 2.
It's 0.69k, that is, the one-bit erasure case.
If I remember correctly, if you want, I think, 95% probability of success, you create 3k of entropy, something like that.
But that's only one step.
Remember, in a computing device, many, many, many steps, right?
There isn't just one step.
You've got all these steps chained together, and every single one of them is going to be
dissipative.
This is just a completely basic fact of molecular scale physics.
It doesn't take massive, complicated, fancy derivations.
The whole thing is done in two lines.
You just write down S = k log W, or you can find different expressions of it; if you go into the Gibbs formalism, S = k log W will be expressed in terms of free energies.
They're all essentially the same result.
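The arithmetic behind the numbers quoted above can be sketched in a few lines. This assumes the odds-ratio reading of S = k log W that Norton is using; only the quoted figures come from the conversation:

```python
import math

k = 1.0  # Boltzmann's constant, in units where k = 1

# Landauer-Bennett bound for erasing one bit: k ln 2
landauer = k * math.log(2)  # ~0.69 k

# Fluctuation bound: to drive a single molecular-scale step to
# completion with probability p, the odds ratio p/(1 - p) fixes,
# via S = k log W, the minimum entropy that must be created.
for p in (0.90, 0.95, 0.99):
    dS = k * math.log(p / (1 - p))
    print(f"p = {p:.2f}: ~{dS:.2f} k created (Landauer bound: {landauer:.2f} k)")

# p = 0.95 gives k ln 19 ~ 2.94 k, the "3k" figure mentioned above --
# and that is for one step, before chaining many steps together.
```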
This is interesting. I'm currently writing an article.
Maybe it's published already.
I'll place it on screen if it's already out.
It's about this word in principle, in principle arguments.
So my contention is that when most people just use that word, they use it loosely and
you need to scrutinize what kind of in principle are they invoking.
So are they referring to epistemological modalities, or nomological, or metaphysical, or logical possibility, or something else entirely?
And even within these categories, there are frequent ambiguities.
So what you're suggesting aligns with this.
In my interpretation of what you've said, when people invoke Landauer's principle, they're also employing, well, let's just idealize this scenario, let's say it's an in-principle argument. But then even in such cases, you have to be careful and consider, okay, what are the practical implementations?
Yeah, I'll say more than that. It's an
inconsistent application of the idea of in principle. I think
you know a bit of the literature here. It goes back
to Szilard's 1929 paper in which he introduced the Szilard
engine.
This was a version of the Maxwell Demon.
The idea was that you had
a one-molecule gas that would bounce around inside a chamber.
You would insert a partition,
trapping the gas on one side,
and then you would isothermally expand it,
thereby taking heat from the environment and converting it into work.
Now the key thing to understand about that is that the phenomenon that Szilard was looking
at is a thermal fluctuation.
This was the literature in which he was writing: thermal fluctuations, going back to Smoluchowski and Einstein and Brownian motion and so on. And the fundamental question that was being asked is,
if you look at thermal fluctuations,
to what extent can they reverse
the second law of thermodynamics?
So if you look at Brownian motion, for example,
think about the Brownian motion in a fluid
when the Brownian particle goes up and down.
When it's going up, heat from the environment is being converted into some microscopic notion
of work because it's being elevated in the gravitational field.
So Poincare remarked that in this sort of system, we see through our microscope a Maxwell
demon in action.
So the question then became, is it possible to accumulate all of these microscopic violations
of the second law of thermodynamics in order to produce a macroscopic violation of the
second law of thermodynamics?
And that was a serious project that was undertaken in the first decade of the 20th century.
Smoluchowski came up with the answer.
And the answer is yes, you get fluctuations that you might try and exploit.
But every time you try and exploit those fluctuations, you will use other processes that have their
own thermal fluctuations that will reverse everything.
So this is the example of the Smoluchowski trapdoor.
Let's now go back to the Szilard engine.
The single molecule bouncing backwards and forwards is a case of a dramatic density fluctuation in a gas.
It's the most extreme case. When you have larger numbers of molecules, the fluctuations are very small; as you reduce the number, the fluctuations become large in relation to the total energy of the gas.
And so, Szilard's question was,
can we somehow exploit those fluctuations
and add them up to get a violation of the second law?
And the trouble is, when people analyze that,
they don't account for all the fluctuations that are in the apparatus that they're using.
So think about the way the apparatus works.
You start out with the gas,
the one molecule gas bouncing around,
you put in a partition.
The mere fact of putting in a partition is itself a thermodynamic process.
If that partition is very light, it's going to have its thermal energy of a half kT.
You have to suppress that energy to get the damn thing to stick.
That's going to be creating entropy.
If you make it very massive, so that the half kT is not going to produce much motion, then you need to frictionally damp it so it stops moving. That's going to produce entropy, you know.
It doesn't bounce out.
Yeah.
You know, the short answer is this: the analysis of the Szilard engine from Szilard's time up to the present simply ignores the totality of
fluctuation phenomena that have to be suppressed in order to get the process to go through.
So it is, to go back to your original point, it is a selective and incorrect use of, in
principle, idealization.
You're idealizing away half of the fluctuations, but not the other half.
And then you're claiming a result.
If you're going to try and exploit fluctuations,
you have to treat them consistently
and look at the fluctuations throughout the system.
If you just pick one particular subset of the fluctuations,
you're going to get nonsense results.
And so, I mean, you probably sense frustration in my voice.
This literature has been teetering on the edge of nonsense for a hundred years.
This kind of selective treatment of fluctuations, as I was just saying, gets completely bogus results.
The trouble is that every time a formula p log p appears, there's a tendency, a natural reaction: that must be a thermodynamic entropy. No, it must not be.
The conditions for a p log p to be associated with heat in the way that Clausius says require that that p come about in a very particular way.
And the mere fact that, say, I have a coin, I put it in my pocket, I don't know whether it's heads up or tails up, that isn't the right way for there to be a thermodynamic entropy of k log 2 associated with the coin.
But that's the fallacy that's being committed over and over and over again.
So there it is.
There was even a Nature article that says that they've experimentally validated Landauer's principle.
Oh, my.
Yeah, again, they're doing exactly what they shouldn't be doing.
What they showed is that, I don't remember the details now, you have a little tiny particle,
a colloid or something,
that's free to move around like a Brownian particle.
And if you compress it by moving a barrier in,
you do it slowly so you can get a reversible effect here,
then you pass heat of kT log 2 to the environment.
Well, of course, this has been standard in thermodynamics for over 100,
I don't know how many years.
This is just the basic thermodynamics of ideal gases.
I did a lot of work on Einstein.
It's abundantly clear in Einstein's work on Brownian motion that he understands this perfectly well.
It is quite fundamental.
If that experiment had failed,
then we would have to rethink the thermodynamics of ideal gases.
What's wrong with the experiment?
Well, they've just looked at how much heat gets generated when you compress a one molecule
gas in effect.
It's actually a particle, but it's close enough to being a one molecule gas in its degrees
of freedom.
If you want to say that we've now instantiated Landauer's principle, and that's the lower limit, well, that experiment doesn't show it.
What about all the entropy that was created in all the other bits of apparatus that were
being used?
It's a fluctuation phenomenon that you're looking at.
What about all the fluctuations that were suppressed in order that you could move your
partition inwards?
That's all got to be part of the calculation or you simply don't have a result.
Of course, they didn't calculate any of that.
I certainly accept the result.
A two-to-one isothermal compression of a single molecule in an ideal gas will pass a kT log 2 of heat to the environment.
The same thing will happen if you're in a fluid,
you have a single Brownian particle.
That Brownian particle is going to behave like a one-molecule gas.
This was the brilliance, by the way,
of Einstein's analysis of Brownian motion.
He realized that you could treat
Brownian particles in the same ways you treat molecules.
It was a very beautiful analysis.
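The undergraduate calculation being referred to, sketched here for reference:

```latex
% One-molecule ideal gas at temperature T:  P V = k T .
% Reversible isothermal compression from volume V to V/2 requires work
W = \int_{V/2}^{V} \frac{kT}{V'}\, dV' = kT \ln 2 ,
% and since the internal energy is unchanged at constant T, the same
% amount is passed to the environment as heat:
Q = kT \ln 2 .
```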
Yeah, I do want to get to Einstein's views on old quantum theory versus new quantum theory.
We'll get to that shortly.
It sounds like what you're saying is the Nature article, the one that I've shown, or maybe it'll be on screen again right now, is not validating Landauer's principle.
This is something that was predicted before Einstein died, and Landauer came up with the principle in the 1960s or so.
It's worse than that.
It is an easy consequence
of the standard thermodynamics of ideal gases.
It's undergraduate physics stuff.
It's lovely that we've done the very particular experiment and seen the result,
but boy, it had to be right.
If they had any other result coming out and it wasn't the result of some kind of procedural
error, it would have been traumatic for statistical physics
because that is so fundamental: if you just have a single component, like a molecule, a one-molecule gas, and you compress it two to one, isothermally, you're going to pass a kT log 2 of heat, reversibly, by the way.
What if someone says, okay, so what? So what if Landauer's principle isn't the minimum bound? I mean, I can say Curt's principle sets the minimum bound to zero, and then I'm still correct if you show that something's higher, because, hey, my minimum hasn't been violated.
Remember, the idea is that we can understand the minimum amount of heat generation in a computing device by looking at the logic, right? Of the process being implemented. So if we want to minimize heat generation, then what we need to do is look carefully at the logical processes and minimize any irreversibility in the logic.
That is just mistaken and will mislead you. That's the wrong answer.
The right answer is, uh, what matters is how many steps you are expecting to
complete, whether it's a computing system or any other system, whatever, and
the degree of probability of completion.
And you need to understand, if you're serious about reducing the amount of heat, that paying attention to the logic being implemented in the computational device is really not going to help you.
It's how many steps that matters.
The implementation matters massively.
So is something now allowed that we previously thought was disallowed because of this analysis,
your analysis, or is something now disallowed that was previously thought to be allowed?
Like, what is the consequence, the practical consequence of this?
The practical consequences is what I just said.
If you want to minimize the amount of heat generation in your computing systems, pay
attention to how many steps you've got and the probability of completion.
That's what you should be looking at.
And you also believe that this distracts researchers from simpler, more general solutions to
Maxwell's demon, like Liouville's theorem.
Oh yeah, yes. This is one of the papers that I wrote.
I do my research.
You did, thank you.
The idea that notions of information and computation
are going to help us understand
why Maxwell's demon must fail, right?
That has so distracted everyone.
We spend all our time arguing about it.
So for a long time, John Earman and I wrote papers on this, then I wrote them by myself. I kept saying,
no, these ideas aren't helping us.
It doesn't work. We don't learn why Maxwell's demon must fail.
We don't know that it must fail from these considerations.
We spent all our time thinking about that.
We're just hugely distracted by that.
Then one day, I was sitting on the bus coming into the office and I thought to myself, you know,
maybe I should ask the question, is an actual demon possible? Forget about all this information stuff.
And within five minutes, I mean in the course of a short bus ride, I realized, oh god, the
Liouville theorem just prohibits it. That's all.
If you're assuming that the demon is to be implemented
within classical physics,
you can see with essentially no calculation at all
that the Liouville theorem is going to block it.
So I published that in a paper somewhere.
And then after a while I thought,
this is not a really decisive argument because nothing at the scale
that we're concerned about is actually classical.
It's all quantum mechanical.
I asked, is there an analog of the Liouville theorem in quantum mechanics?
Yes, there's an analog in quantum mechanics, and you can run an exactly analogous
argument.
And so I've got another paper in which I show it; it's got two columns, the classical analysis on one side, the quantum analysis on the other, and they just match up perfectly.
So yeah, we know that a Maxwell demon is impossible insofar as the Liouville theorem applies to those versions.
And that explains why with all the nanoscale physics that we've been doing, no one's produced
a Maxwell demon.
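A compressed sketch of the style of argument (a paraphrase added here, not the full paper):

```latex
% Liouville's theorem: Hamiltonian flow preserves phase-space volume,
% where \rho is the phase-space density and H the Hamiltonian:
\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \{\rho, H\} = 0 .
% A demon would have to evolve a large set of initial microstates (the
% unsorted gas, many possibilities) into a smaller final region (sorted
% gas plus reset demon). That is a contraction of phase-space volume,
% which Hamiltonian dynamics cannot perform.
```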
And before we move on to Einstein's views, people are terribly interested in entropy.
And you mentioned a couple different definitions of entropy like Boltzmann and Shannon.
So there are a variety of entropies.
Can they be arranged such that one is a subset of the other, like Boltzmann is a special
case of Shannon?
Or are there entropy notions that are incompatible?
And why are they all called entropy if they're incompatible?
That is, if there are indeed some that are incompatible.
So why don't you outline what is entropy supposed
to be quantifying and then the different definitions
and their relations?
The basic idea of thermodynamic entropy is articulated
well by Clausius in his early papers.
I think it was 1865 or something,
I don't remember the original paper's date.
The idea is that that will tell you
which thermodynamic processes will move forward spontaneously.
Now, the notions of entropy that appear in thermodynamics adhere well to that.
So Boltzmann's notion of entropy, S = k log W, is going to tell you which processes move forward.
This was the rule that I told you before.
If you want a process to advance, you want to have an end state that has a higher probability than the starting state.
S = k log W then tells you that the entropy of the end state is going to be greater than the entropy of the initial state, right?
And then that notion of entropy, when you start to move into equilibrium systems, is
going to mesh nicely with the notion that Clausius developed.
In the Gibbs formalism, it's more complicated. In the Gibbs formalism, you can connect the Gibbs entropy,
the p log p with thermodynamic entropy
by giving an analysis that both Gibbs gives and Einstein also
gives in one of his early papers where
you look at a thermodynamically reversible process
and you idealize it and you can match up all the quantities.
Then there's Shannon entropy.
The sort of entropy that appears in information theory is a parameter for a
probability distribution and that's what it is. It's a measure of how
smoothed out, of how uniform the distribution is. The highest entropy
arises when you have a uniform probability distribution and as the probability distribution becomes more peaked,
then you're going to have lower and lower entropy.
It's just a different thing.
I mean, there are connections.
I mean, take probabilities.
There are many ways that probabilities appear in usage in the world.
I don't know that I want to nestle one inside the other,
but I'm quite comfortable that Boltzmann's notion of entropy,
and the Gibbs notion of entropy,
and the Clausius notion all fit together very nicely.
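(For reference, the notions being compared, in standard modern notation: Clausius, $dS = \delta Q_{\mathrm{rev}}/T$; Boltzmann, $S = k \ln W$; Gibbs, $S = -k \sum_i p_i \ln p_i$; Shannon, $H = -\sum_i p_i \log_2 p_i$. The meshing he describes is visible in one line: for a uniform distribution over $W$ microstates, $p_i = 1/W$, the Gibbs form reduces to $k \ln W$, the Boltzmann form. Shannon's version drops $k$ and the thermodynamic reading entirely; it is just a measure of the spread of a probability distribution.)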
Now, there are complications because when you move into quantum mechanics,
there's the von Neumann entropy.
There's a literature saying,
well, this doesn't exactly match up and I'll defer on that
because we're now getting into very delicate territory.
The delicate territory is we don't know how to interpret, at least I don't know how to interpret, the density operators that appear in quantum statistical physics.
I don't know.
When you give them a matrix form
and you have a nice diagonal with p's that add up to one,
are they probabilities or what are they?
And if you can't answer that question, then you don't really know what p log p, which is going to be the entropy, is. So anyway, not my area. I've deferred to other people who write on this
because I think we have walked directly into the measurement problem of quantum mechanics here.
So I don't think anyone really knows how to handle this other than pragmatically.
Great. Well, we can turn this into your area by talking about Einstein. Okay.
So Einstein has some criticisms of new quantum mechanics and its statistical interpretations.
And then I believe you mentioned that Einstein's fundamental contributions to old quantum theory have been forgotten because of these new criticisms.
So firstly, let's talk about the criticisms to get them out of the way, please.
And then let's talk about
his contributions to the old theory.
Well, I think his objections are very widely known.
He simply did not believe
in the indeterminism of quantum mechanics.
He argued that there was some,
he was arguing for some sort of a hidden variable theory.
The sort of hidden variable theory that he was arguing for, I don't think is anything
like the sort that we're thinking of.
You might think now of the Bohm theory as a kind of hidden variable theory.
Of course, Einstein was encouraging to Bohm, but it's pretty clear that wasn't his theory.
Einstein's hope was that his unified field theory would somehow deliver this hidden variable theory. I think you know the basic layout of Einstein's unified field theory.
The program was pretty straightforward.
He'd found that you could represent gravity in the same structure as
the inertial properties of space and time,
and in the metric field.
I'm not using geometric language here because he didn't.
If you're curious about that, I just wrote a long paper explaining this.
Just a bit of a digression because-
Please.
When you first take a class nowadays in general relativity and you learn about the Schwarzschild metric
and you learn about the Schwarzschild radius,
one of the first things you're told is,
but don't make the mistake of
thinking that that's a singularity.
I know the formula blows up,
but it's just a pure artifact of coordinates.
Don't make that mistake. And it's sort of, you know, you're warned it's a silly novice mistake. But why is it talked about so much?
Well, who made the mistake?
Answer: prior to about 1950, everybody.
Right.
Einstein was very clear that he regarded the Schwarzschild radius as singular, and he convinced everybody else of that as well. Now, when I say everybody else, I don't mean trivial figures. I mean the world's greatest mathematician of the time, Hilbert, and the world's greatest geometer of the time, Felix Klein. And Hermann Weyl. They all agreed with him. What on earth was going on? You know,
I do a lot of work in history. I'm fascinated by history of physics. And I can only just
tell you very briefly what the answer is. There are multiple ways of treating general
relativity mathematically. The geometrical approach that we now use is, I believe, the
right approach and the correct way to do things and the one that gives us the best and most productive results.
I don't want to in any way detract from that.
Einstein disliked the geometric approach completely.
He preferred a kind of algebraic, analytic approach, which was all dependent on very
particular expressions and their behaviors and their transformation properties.
In the context of that approach,
it makes sense that he would come to the conclusions
that he did.
Now, he wasn't coming to those conclusions
in ignorance of the possibility of another analysis.
It was Lemaître who had already discovered that you could transform away that singularity. And also, Felix Klein had pointed out that the so-called mass horizon in the de Sitter spacetime could be transformed away.
He knew all of that, but still knowing that, he said, I don't like this geometrical approach.
I don't take it seriously.
We have to approach it analytically.
If you want to get a sense of how someone could possibly think that,
look at the way that Einstein's 1917 cosmology was introduced.
He wanted to have a spherical geometry for space. Where does he get the line element for a spherical geometry?
But he says, imagine a four-dimensional Euclidean space
with a three-dimensional sphere embedded inside,
and look at the geometry that is induced
on the three-dimensional sphere,
and bang, there you get the nice line element.
But now, do you take this geometrical picture
of a four-dimensional space inducing a geometry
on the three-dimensional space? Do you take that seriously? Do you really think there's a four-dimensional space there, inducing the geometry? Einstein's answer was no. In other words, it's a kind of easy way for people,
for novices to learn things, but don't take it seriously.
I think Dennett would call those intuition pumps.
Yeah, maybe.
So asses' bridges were intuition pumps.
Yes.
I suppose he didn't use that expression,
but I guess that would fit.
I hate to speak for him.
Okay, so those are his objections and the EPR thought experiment is,
I think it's transparently trying to argue that there's more to the system
than the standard quantum mechanics allows.
In other words, it's in the title. What is it? "Can quantum-mechanical description of physical reality be considered complete?" Something like that, I can't remember exactly.
There's a criterion of reality.
If you can predict with certainty properties of some system without interfering with it,
then the system has those properties.
This is the EPR argument, everyone knows it.
What were his contributions to quantum mechanics?
Really quite massive. I think the major one was the light quantum of 1905, a completely extraordinary idea.
When you look at Einstein's annus mirabilis, his year of miracles in 1905, everything that he's doing there, excepting that, is a completion of 19th century physics.
So we can just go down the
list. His argument for the reality of atoms, Brownian motion, is completing the Maxwell-Boltzmann
tradition of statistical physics that had been well developed in the 19th century but
was meeting a great deal of resistance because there were no new phenomena that needed atoms.
If you understand special relativity, you realize that the basic content is implicit
in Maxwell-Lorentz electrodynamics. Lorentz had discovered, in effect, the Lorentz group, mathematically articulated better by Poincaré. And once you have the Lorentz group and you understand how to think about it, you realize there's, in effect, a kinematics there of space and time. Einstein is excavating it, saying,
oh look, there's a kinematics of space and time in there. This is the big discovery hiding inside 19th century electrodynamics: that space and time is actually special relativistic. E equals mc squared is already there in special cases in electrodynamics.
Amongst all of this, the huge discovery
in the 19th century was the wave theory of light.
They are electromagnetic waves,
the Maxwell-Lorentz theory.
Then in 1905, Einstein says,
no, wait a minute.
In some thermodynamic sense, heat radiation has a particulate character.
Now, one of the things that has fascinated me for a long time is
how Einstein made his discoveries.
He didn't have anything that everyone else around him didn't also have.
He basically had a pen and a paper and journals to read.
He did very little experimentation, wasn't terribly good at it.
Yeah. So what was different about how he came up with his discoveries?
Well, in this particular case,
the key thing about the results of 1905 is that he
could see significance in empirical results that other people couldn't see.
So let me give you the example with the light quantum.
If you try and understand what he did with the light quantum as a correction to electromagnetic
theory, it's unintelligible.
How could this possibly be?
We have Young's two-slit experiment.
We have all of the massive success of electrodynamics. How is it possible that I can come along and say, oh no, wait a minute, you know, there are particles there? And he talks about the photoelectric effect, for example. How could it be?
Well, what you're doing is you're not putting the discovery in the right context.
The discovery lived in Einstein's work in thermodynamics.
In the years leading up to 1905, Einstein was already working in thermodynamics.
He was trying to understand the microscopic or the, I want to say molecular, I guess,
molecular scale properties of matter.
What he recognized was that the molecular scale properties of matter get imprinted on their thermodynamic properties.
So the classic example, the simplest example is this.
If you have a system whose pressure and temperature and volume conform with the ideal gas law,
then you know that its molecular constitution consists of localized points of matter bouncing
into each other but independently moving otherwise.
All right?
I mean, that's where PV = NkT comes from.
You model the gas as a whole bunch of molecules that move independently of one another, but they
bounce off the walls and they bounce into each other.
All right?
The key thing is that PV = NkT at the thermodynamic scale is a signature of that
constitution.
This, by the way, is why osmotic pressure obeys the ideal gas law.
When you first learn this in a thermodynamics class, you ask: why the hell should a dilute salt
solution exert a pressure that's the same as the ideal
gas law?
Well, because it's dilute, the salt molecules or the salt ions are moving around like independent
molecules.
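(The "signature" point in one line of textbook kinetic theory: for $N$ independently moving molecules, $P = \frac{1}{3}\frac{N}{V} m \langle v^2 \rangle$ and $\frac{1}{2} m \langle v^2 \rangle = \frac{3}{2} kT$ give $PV = NkT$. Nothing beyond the independence of the molecules is used, which is why dilute solutes obey the same law: van 't Hoff's osmotic pressure relation $\Pi V = nRT$ has exactly the ideal gas form.)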
Okay, so what does Einstein do?
He's looking at the latest results on the thermodynamics of heat radiation. And what he recognizes in that thermodynamics is that same signature of a particular constitution.
And in particular, he realizes that if you take the Planck distribution, which had been
empirically established by the experiments of Lummer and Pringsheim and Rubens in 1900, and you write the entropy as a function of the volume, right? You get the entropy of high frequency heat radiation, right?
He's now actually looking in the Wien regime. So Lummer and Pringsheim don't come into this, but never mind. If you look in the Wien regime, you get that the entropy of heat radiation varies with the logarithm of volume. And that is the same, right?
That's the same as the ideal gas law.
So Einstein says, oh, look, here we have the fingerprint, the thermodynamic fingerprint
of the molecular constitution.
And just as you can calculate the size of molecules once you know how big Boltzmann's constant is and you've got the ideal gas law, so you can calculate the size of the energy particles that are giving you S = k log W. And of course what comes out of that is that the size of the little localized energy bundles depends on the frequency: it's what we now call Planck's constant times the frequency.
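(In modern notation, the comparison Einstein ran is roughly this; the symbols are ours, not his 1905 ones. For Wien-regime radiation of energy $E$ and frequency $\nu$, the measured spectrum gives

$$S - S_0 = \frac{E}{h\nu}\, k \ln\frac{V}{V_0},$$

while an ideal gas of $N$ independent particles gives $S - S_0 = N k \ln(V/V_0)$. Matching the two signatures: the radiation behaves thermodynamically like $N = E/h\nu$ independent, localized quanta of energy $h\nu$ each.)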
That's the big argument. What we now call Boltzmann's principle, S = k log W, is actually Einstein's principle. He calls it Boltzmann's principle in this paper, and he gives a very simple derivation of it. Then he says, this is now instantiated when you add in the various conditions that apply here: you get that the entropy goes with the logarithm of the volume. It's, I think, one of the most beautiful, most extraordinary of Einstein's contributions. I mean, there are many more; I'll just mention the others that are important. Now, the next thing that comes up is the following.
He's established that there's a particulate character, right?
But he's only established that by looking at the Wien regime
in the blackbody spectrum.
What happens if you look at the total regime
going all the way down to the lower frequency end, right?
Well, if you give a similar analysis
of the thermal properties,
in particular, you look at the fluctuations
of radiation pressure and energy.
You discover that the expression that you
derive for the fluctuations in
heat radiation is the sum of two terms.
One term has a particulate character
and the other has a wave character.
They are arithmetically added together.
This is the origin of wave-particle duality.
This is where it first appears that radiation has this dual wave and particle character.
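(The result being described is Einstein's 1909 fluctuation formula; in modern form, for the energy $E$ in a volume $V$ and narrow frequency band $d\nu$ of blackbody radiation,

$$\langle \epsilon^2 \rangle = h\nu\, E + \frac{c^3}{8\pi\nu^2\, d\nu\, V}\, E^2.$$

The first term is what a gas of independent particles of energy $h\nu$ would give; the second is what classical waves would give via interference; and they are simply summed.)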
And so it keeps going like this. And I'll just mention, of course, the A and B coefficients paper, the basis of lasers. This was 1916 and '17. And then, of course, Bose-Einstein statistics in the early 1920s.
So the idea of the light quantum was greatly resisted.
Bohr did not like it one bit, and Einstein's light quantum was regarded as a heterodoxy for 20 years, until the Compton effect.
It was the Compton effect that finally drove home the idea to physicists that Einstein's light quantum was in fact a good description
of what was happening with heat radiation or radiation in general, electromagnetic
radiation.
So, Professor, why don't we end this on the insights from your research into how Einstein thought differently than his peers?
Because as you mentioned, Einstein had access to the same data as his peers.
What insights exist that you've gleaned
that can be applied to young researchers today,
such that a young researcher can watch this and say,
okay, I should do more of that.
I think a lot of it is good fortune.
Let me say a couple of things.
One thing that I don't think works is the following. There's this idea
that you have to be young and in your twenties to make a great discovery, that there's something about youth.
What's happened is we have a correlation, but not a causal connection.
The process that seems to be at work is the following.
When a new science opens up,
that's where the new discoveries are going to be made.
The established figures are working on
the old sciences that they have put together.
So they keep working on those.
The new figures come along and they're asking,
where's something new happening?
Oh, it's over there.
So they're going to work in the new science.
And that's where the new discoveries are made.
And that's why they're made more commonly by younger people.
So don't feel bad that you're young and you
haven't made an Einstein discovery yet.
It's got nothing to do with your age.
But also don't feel bad that you're old.
Yes, exactly. In fact,
this is one of the things that I follow in my own research.
This is just a side thing, but I'll mention it before I get to the other point I wanted to make.
Someone pointed out to me quite early in my career that
when you enter a new field,
most of the important novel ideas you have will come to you
pretty early and then you won't get much more.
I think that's right.
So how do you exploit that?
Answer, you keep jumping around.
Right, exactly.
So if you've done your homework and you've looked at
the papers that I published, I'm all over the place.
We've just looked at a few of the things that I've done in philosophy of physics.
I've written a whole bunch on inductive inference.
I'm writing a book on empiricism at the moment.
It's all over the place because every time I go into a new field,
I'll have a new thought.
If it's genuinely new, I'll publish it.
So don't be afraid to jump around.
This is one of the traps for young physicists.
This is why you should be a philosopher of physics and not a physicist.
Because if you're a physicist, you're trapped by the need to keep grant money going, which means you have to develop an expertise of sufficient caliber to enable you to keep the grant money going, and to keep your lab going, and to keep your graduate students going.
And so you can't escape. Philosophers of physics are supported by teaching.
And we can switch on a dime. I can change my mind tomorrow about what I'm working on,
work on something else. As long as I keep teaching my classes, I'm supported.
Okay, now let's get back to Einstein.
What did we learn from Einstein?
Einstein had a remarkable ability to look at results and see the significance in the
empirical results that nobody else could see.
I've already mentioned that with the light quantum, he could see the signature of distributed atoms there. In special relativity, he could see that the Lorentz group was
actually a kinematics of space and time.
All of this is empirically there in the theory; it has this property.
Lorentz, Poincare, they fully understood the mathematics.
They just didn't see it.
This was Einstein's, and he used this over and over again,
this was Einstein's magical power
that he could read in experimental results.
Things started to change with general relativity.
It had the same origin there.
He recognized that the fact that all bodies,
the Galileo result, that all bodies fall with the same acceleration had to be implemented exactly
and perfectly. Now, when people like Poincare and Minkowski were relativizing theories of gravity,
as they tried to do, they discovered that that law was broken in second order quantities.
You would get a (v/c) squared dependency on the sideways velocity.
So things that were moving with velocity sideways would not fall at the same rate as something
that was falling vertically straight down.
This, Einstein tells us, just bothered him massively. He just didn't see that that could possibly be right.
You can think of other cases and understand why that would be so.
Might that mean that a kinetic gas would fall slower when it's hotter because there's a
lot of sideways motion?
Maybe, maybe not.
It turns out not to be that simple.
What does Einstein do?
He says, well, we need to construct
a theory in which that result is preserved.
It's so important.
How did he construct that theory?
Well, with the principle of equivalence.
If you have two bodies,
one at rest and one moving inertially to the side,
and then you view that from an accelerating
frame of reference, then the resting body will fall and so will the body that has sideways
motion but they will remain at exactly the same altitudes.
So Einstein says that's the way a gravitational field has to be.
So let's ask what sorts of theories of gravity come out of that.
And since he's working in a Minkowski spacetime, he very rapidly (well, over five years) gets to the idea of a semi-Riemannian spacetime; he moves from a Minkowski spacetime to a semi-Riemannian spacetime.
So okay, that's the thing you need to have: there has to be a match. This is now the general moral. There has to be a match between the problems that are ripe for the picking and your particular talent and expertise. Now, how does that work out
with Einstein? Well, Einstein then moved on to his unified field theory and he stopped
using that facility. He started saying, I'm going to find the simplest possible rules that we can have for physics. And from the mid twenties onwards, when he was doggedly pursuing his unified field theory, he just never produced anything that we know actually works. He was no longer well matched to the problem. If you ask who was well matched
to the problem, well, when quantum mechanics came along, it was just crazy. You had to
be someone who could tolerate bizarre contradictions and manage with them. Who could do that? Who
could do that better than anybody else? Answer, Niels Bohr.
The Bohr theory, the atom of 1913, is just crazy.
I mean, you come along and you say, I'm just going to turn off electrodynamics.
I'm just going to assume electrons can orbit without radiating. Completely crazy.
And so he had this ability to just say, I know it's crazy, but what's the quote?
Is it crazy enough?
I don't think it's Bohr.
I think maybe that was Pauli or someone.
That was terrific because he could actually produce this theory, the Bohr-Zommerfeld theory
of the atom, that led directly up to what happened in the 1920s. Of course, just as with Einstein,
then Bohr's facility to tolerate silliness and contradiction became a massive liability,
because he then produced this inchoate idea of complementarity, for which I don't think
there's any precise sense.
And he somehow managed to convince a whole generation of physicists to take this silly idea seriously.
It took a long time for people to get past the incoherence of Bohr's ideas.
And I can see you flinch there, because there's a sub-community in philosophy of physics who hang onto the idea that Bohr had some kind of deep and profound insight.
Now, we bifurcate, I'm clearly in the school that thinks,
no, I've no doubt that Bohr had strong,
powerful intuitions that he could communicate to other people,
but they are at their core incoherent.
Anyway, so the moral is: if you're a young one starting out, just, you know, work on what interests you, look for places where you can see further than other people can see. That's your secret skill. When I talk to philosophers of science and we're trying to figure out, you know, where they should work, I often ask them this question.
I say, can you remember when you've been in a discussion group and everyone gets tangled up over something and you're sitting there thinking, I don't get it.
It's perfectly clear and perfectly obvious what's going on.
I can see straight through this.
Ah, there's your magical power.
The difficulty is that because you could see it so clearly, you think it's
trivial and you think it's easy.
Right.
And so you tend not to value it. Rather, you look at someone who can do something that you absolutely can't do, and you're in amazement, and you want to be them. Big mistake. They're good at it. You aren't. Right? Do the things that you're good at. Do the things where you see your way through clearly faster than other people do. And that's where you'll make the breakthroughs. Anyway, look, that's the advice I give people.
And it's as good as what they paid me for it. Free advice is only as good as what you paid for it.
I love that.
Okay.
So most of the time we'll look at gymnasts and we'll just be wowed and we'll think, okay, I should do that because that's difficult, but then there are
other tasks.
That's exactly right.
I mean, it's taken me a long time. So I've worked hard on figuring out exactly where I have a skill. It's not math. I'm not very good at the mathematics.
I can do mathematics competently, but I don't have the sort of beautiful insight that a
good mathematician can have.
But my background is chemical engineering.
I can tolerate the kind of vagueness that engineers thrive in.
I can survive when the situation is unclear.
Do you thrive? Do you not just survive when the situation is unclear?
Do you actually prefer that and do better in it than in situations where it's clearer?
Oh yes, absolutely. I'll give you an example of that.
I wrote a paper recently on the nature of thermodynamically reversible processes.
I think that's roundly misunderstood all the way through here.
And it's not a question of mathematics. The mathematicians, Carathéodory, going back to the Göttingen group, they gave a beautifully
mathematized version, but they missed the essential point over what's really
going on with thermodynamically reversible processes. I can see that, you know, one
of the things that chemical engineers have to be good at is thermodynamics,
because processes in chemical plants are all thermodynamic processes.
So I was taught thermodynamics from scratch four times in my engineering degree, and it
was only on the third time that suddenly I got it.
I can still remember there was this moment when I realized, oh hell, it's all about thermodynamic
reversible processes.
That's the key concept.
If you don't get that, you don't get thermodynamics. So let me just mention this to you; maybe it will be helpful to you.
A standard mistaken view amongst physicists is that
a thermodynamic reversible process is just a really slow process.
No, here's a really slow process.
Get a balloon and inflate it, and then put a tiny little pinhole in it.
That balloon is going to deflate as slowly as you like,
just by making the hole as small as possible.
But that is an irreversible expansion of the gas.
That is entropy increase.
Now, a thermodynamic reversible process
has to be one where you have a near-perfect balance
of driving forces.
The forces that are pushing the process forward have to be balanced almost perfectly by the forces that are pushing it back. Now that runs automatically into trouble. Notice I had to use weasel terms: almost exactly, almost perfectly. Well, there's a reason for that. If the forces balance exactly, nothing happens. When you have a perfect equilibrium of all driving forces, no change happens. So you have to have some sort of an imbalance. But if you have an imbalance, then you have an entropy-creating process. So how are we to think of these things? Well, there are ways of doing it. And that's what the paper is about. It includes a historical survey of everything I could find that people have written on this. But that comes out of a kind of engineering thinking, through which I learned to make my peace with these ideas.
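(A standard textbook comparison makes the contrast quantitative; this is our sketch, not from Norton's paper. Let an ideal gas expand isothermally from $V_1$ to $V_2$ at temperature $T$. The gas's entropy change is $\Delta S_{\mathrm{gas}} = Nk \ln(V_2/V_1)$ on either path. On the reversible path, with the piston force almost exactly balancing the gas pressure, work $W = NkT \ln(V_2/V_1)$ is extracted and heat $Q = W$ is drawn from the reservoir, whose entropy drops by the same $Nk\ln(V_2/V_1)$, so $\Delta S_{\mathrm{total}} \to 0$ in the limit of perfect balance. On the pinhole path, idealized as an unresisted expansion, no work is extracted and nothing compensates, so $\Delta S_{\mathrm{total}} = Nk \ln(V_2/V_1) > 0$ no matter how slowly the balloon deflates.)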
This is interesting. You learned thermodynamics from scratch three times... four times, in order to truly get it. Four times, okay, great. Because I was going to say something that relies on the number
four. Okay, I wonder if this is a general rule, because it's common to hear that one
has to learn quantum field theory four times from scratch before one groks it. And I just
applied that to QFT. I didn't apply that to computer science or to stat mech. But I'm
wondering if maybe it's the case,
and you seem to validate the thermodynamic case.
Yeah, yeah, no, I think that's right.
Maybe it's the case in general.
Now, what does it mean to learn something from scratch again?
Because you could just take one course, thermodynamics one,
and then you take thermodynamics two the next year,
and then they reteach you the fundamentals.
Or you could take thermodynamics one,
take a year off, retake the same course.
Tell us, what exactly does it mean to learn it from scratch?
I'll give you my experience with thermodynamics.
Chemical engineers have an odd place in engineering
because we don't just do one engineering, we have to have control of all of the different branches of engineering.
In a chemical plant,
I have to understand the chemical processes.
I have to have some understanding of the mechanical engineering,
of the structures,
of the pressure vessels that are being used.
I have to have some understanding of the electrical system that's being used.
And also chemical engineers are often involved in finance. So we had courses in discounted
cash flow. We had courses in operations research.
You're torturing yourself. This is so messy.
Yeah. Yeah. So we had to be a Jack of all trades and I enjoyed that immensely.
So we went to different departments.
All right. So we learned thermodynamics in physics, because you need to know physics. So you go to the physics department, you learn thermodynamics there.
Then you go to an engineering school, right?
Because you have to know the engineering and they teach you thermodynamics as well.
Then you go to the chemistry department, because chemical engineers, we need to know chemistry, and they teach you thermodynamics there.
Then you come back to chemical engineering,
and then they've got their own version.
If you think across all of those different groups,
they all have different ways of representing things.
For example, the way a physicist will talk
about thermodynamics is going to involve
you know, Clausius entropy, blah blah blah blah blah, and so on.
When you go to a chemistry department, the interesting thermodynamics is the thermodynamics
of chemical reactions.
So it's going to be things like fugacities and so on.
What is it that drives a chemical reaction forward?
It is going to be an increase of entropy,
but how do you represent the entropy
so it is applicable to the chemical process?
Or if you're in an engineering school,
the thing that really matters is the efficiency of engines.
So what's the best efficiency you can get out of an Otto cycle
in a gasoline engine?
All right. Now, all of it is thermodynamics, being applied in so many different ways, all the way across the board, and you're getting all those different perspectives. Now, the thing about thermodynamics is that there's an intrinsic beauty to it, but a massive incompleteness. Because what thermodynamics actually talks about is never the complete theory.
You need to have, in addition to the basic thermodynamic concepts, a theory of the matter
that's being involved.
You need to understand the mechanics of fluid flow. If you're doing thermodynamics of quantum systems, you need to understand the peculiar quantum mechanics of those particular systems.
So one of the questions that I got interested in for a while is,
what's the maximum efficiency of a solar cell?
Well, they are heat engines. They're taking in heat radiation and producing electricity,
but it's very much a quantum mechanical process that's doing it. Or something like, oh, what do you call these cells, Peltier junctions. You know, have you ever played with a Peltier junction? You connect it up to a battery, you put your hands on either side: one side gets hot, the other side gets cold. What's going on there?
And so there are many different ways in.
Now, I know only a little of quantum field theory,
but my impression is that it has a very similar sort of character.
You know, there are basic ideas.
You need to know the Hamiltonians or the Lagrangians.
But then you might be looking, for example, at Feynman diagrams
and scattering processes and so on.
I see.
Or you might be looking at quark confinement,
or you might go algebraic,
you might have a course from one of
the mathematicians who will get you to read Streater and Wightman.
But what you're doing is you're approaching the one phenomenon in the world
with many different theoretical devices. And it's only when you get a grasp on how all
of these are bearing down that you see the commonality. I think quantum field theory
is an especially difficult case. It is justly reputed to be a very difficult theory to learn.
I think that's right, because, well, part of it is you start to try and compute Feynman diagrams, and very quickly you realize you've got a lifetime of them ahead of you. And so, do you really want to get into that? And then you've only just gotten into scattering theory, right? And then there's all this stuff about renormalization: how do I make sense of that?
Oh, by the way, when I started studying the renormalization group, it looked more like
engineering to me than anything I've seen in fundamental physics before.
It really got my engineer juices going.
I thought, boy, that's how we do things in chemical engineering.
Sorry.
Yes.
Well, I was going to say I very much like
this idea of approaching something from
multiple points of view in order to understand it.
One analogy is that you could take a look at a cone,
and if the light is shone from above,
it just looks like a circle.
If it's from the side, it just looks like a triangle.
If it's from an obtuse angle,
then it looks like an ice cream cone,
like there's a little bit of a bulge there.
It takes you a while to understand
the three-dimensional structure there.
Yeah.
All you have access to are the projections.
So you have to move around. And that also jibes with your previous answer; well, it's something I thought of as well, that maybe it's not mere youth that enables creativity. It's instead the entry into a field that fosters that innovation. So Schrödinger was 40 or 50 when he began contributing to biology. Maybe it's just that he had that foray into the unfamiliar that enabled the contributions.
Yeah.
I noticed this in philosophy as well.
People look at some major work of philosophy and they say, well, that's, you know, that
that answer to the problem is easy.
Right?
I don't really understand what the fuss is.
Well, the fuss is not the answer.
It's the question.
The creativity in philosophy is framing things
so that an analysis is possible.
And if you do that, you're creating your field.
And because you're the first person there, you can jump on what is likely the correct answer almost immediately.
And so you kind of win the day. I mean, this is what I feel happened with the stuff I did
with thought experiments. I mean, I just got very insistent on arguing that there's an
epistemic problem here.
How is it possible for thought experiments to give us novel knowledge of the world?
I made that the framing.
I called that the epistemic problem of thought experiments, or the empirical problem; I can't remember which one of those two.
Once you ask it very pointedly,
and then you're very rigorous in giving an answer, it's easy.
Yeah, okay. It's the obvious answer.
But you got there first and people say, what's the big deal? Well, the big deal is I knew the right question to ask.
It's the same thing with causation, right? I knew the right question to ask. Of course,
of course, causal metaphysicians aren't happy with me, but yeah, that's their problem.
Hmm. Is there any epistemic gain that can come from thought experiments that cannot come from formal deductions? Yes. You've narrowed things down by saying formal deductions.
By argument, I have a much looser and more general idea.
I mean, informal argumentation.
That certainly includes inductive inference.
You'll find in some of the most famous thought experiments a lot of inductive inference going on. You know, Einstein's magnet and conductor thought experiment; I'll just say in the abstract what the point is. Some of the key steps in thought experiments are inductive
inferences. You produce an effect in a particular case, and then you say, and this is general,
right? It's an inductive inference where you generalize from the one case.
But because the particular case is so compelling, people are willing to go along with the inductive
inference, which might be good or it might be bad.
All right.
Oh, so we saw it in Einstein's principle of equivalence. Now we have that all bodies will fall the same in the uniformly accelerating frame of reference.
That's a gravitational field.
And then Einstein says, and everything else will go the same as well.
That's one hell of an inductive inference at that point.
We've only got the effect for falling bodies.
We haven't got the effect for light propagation.
But it's going to work for light too.
It's going to work for everything, he says.
But you happily generalize.
You're going to say that all gravity is like that.
It isn't just uniform acceleration.
It's gravitational fields that are inhomogeneous.
There's lots of inductive inference going on here.
Yes, now your work on material induction,
if I recall correctly, is against this.
It's more like saying there are local ways that we can do induction, but you can't globally
apply them.
It's not as if there's a one size fits all induction.
Yeah.
Yeah.
Correct.
Yeah.
So this comes out of the fact that I'm a science lover, right?
And I love science.
I love history of science.
And I want to be able to say that our best science is somehow privileged over other endeavors.
And it is privileged for empirical reasons.
It's because it is well supported by the evidence and the character of that support is inductive.
I did not find accounts of inductive inference in the philosophy of science literature that
were able to sustain that
result.
But we find just a fragmentation of many different accounts and you kind of go doctor shopping.
You find some particular example and you want to say, well, why is this a good use of evidence?
You shop around until you find the account of inductive inference that fits it.
Then you slap it on.
No, we need a single account that is to be applied everywhere.
It took me a while to see this, but after a lot of probing, what I realized is that there are no universal rules of inductive inference, right? None that apply everywhere.
That's the uniformity that you're talking about.
Rather what you have are inductive systems that apply locally and they are specifically
warranted by facts.
So why don't you give an example?
Okay.
The simplest example, one that I use in chapter one of the book, is Marie Curie prepares a
tenth of a gram of radium chloride.
It's the only sample of radium chloride in any laboratory in the world in 1903.
She looks at its crystallographic properties and declares: radium chloride has such and such crystallographic properties. I think monoclinic is the way we would say it, but she says it's the same as barium chloride.
Now, if you think about that in terms of other accounts of inductive inference,
what would it be?
Well, it could be an enumerative induction.
This A is B, therefore all A's are B. Boy, that's a bad form to use, because on almost every occasion when this A is B, all A's are not B. This sample of radium chloride was prepared by Marie Curie; it's not going to be true universally. This sample of radium chloride is in Paris; they won't all be. This sample of radium chloride is a tenth of a gram; they won't all be a tenth of a gram. And so on. Or all swans are black, or all swans are white. So the idea that you can authorize that inference by looking at a general rule just doesn't work.
So Curie wasn't doing that. Why is she so secure in making the inference? It was so secure, it was even unremarkable.
Well, the answer is factual investigation of the nature of crystals all the way through
the 19th century.
People had looked at what sorts of forms do crystals have.
This was work in atomic theory, this was work in mathematics, this is one of the places
where the theory of discrete finite groups got underway.
And it turns out that if you build up lattices, they fall into one of six or seven families
depending on how you count them.
So if you find a crystalline substance that falls into one of those families, then you
know that many more of those samples will fall in that one particular family.
And so you can make the generalization.
Now it is inductive.
It's a little bit risky because there are some substances that are dimorphic or polymorphic,
which means that they have forms that exist in multiple different families.
The most familiar case of polymorphism, though it doesn't exactly map on here, is carbon.
It can be a diamond or it can be graphite, but there are many other cases of minerals
that have this.
So what was justifying her inference was facts about crystalline substances, hard won
through the course of the 19th century, very difficult facts to learn because to characterize
these families took a tremendous amount of work.
And it got regularized as a thing called Haüy's principle, after one of the early pioneers. So the fact is Haüy's principle. And so the argument of the material theory of
induction is it's all like that. Whenever someone's doing an inductive inference, if
it's cogent and you want to ask why this is an appropriate inference, the answer is going to come back to a fact.
Now this is going to apply specifically also to people using probabilistic inferences inductively.
If you're going to use probabilities, the way I argue it out is the following.
There is no default that every time you're uncertain about something,
you can responsibly represent the uncertainty by probability.
You can't do that.
You have a positive obligation to demonstrate
that a probabilistic representation is appropriate to the case in hand.
For example, in population genetics, typically what you will do is say that this particular instance has been randomly sampled from the population.
So if we're going to do DNA typing, and you want to say, Oh, yes, it's very, very probable that,
you know, that this perpetrator has a blood sample that matches the blood found at the site.
I'm perfectly happy with those probabilities, but it is essential that the probabilities
are anchored by some fact.
The fact is that we can treat the case as if the person was randomly sampled.
If that isn't the case, if you can't treat that suspect as being randomly sampled from the population,
then all bets are off.
Who knows, the sample might have been planted in some way.
You can figure out all sorts of ways it could come unstuck.
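(As a minimal sketch of the kind of calculation at stake: the numbers below are invented for illustration, and real DNA typing uses published population frequency databases.)

```python
# Hypothetical sketch of the random-match probability behind DNA typing.
# The per-locus frequencies are invented for illustration only.
locus_freqs = [0.10, 0.08, 0.12, 0.05]  # frequency of the matching genotype at each locus

p_random_match = 1.0
for f in locus_freqs:
    p_random_match *= f  # multiplying assumes the loci are independent

# This number is meaningful only if the suspect can be treated as
# randomly sampled from the reference population. If that fact fails
# (say, the sample was planted), the probability model says nothing.
print(f"random match probability: {p_random_match:.2e}")
```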
Now what happens when you don't do this seriously?
Well, you run into all sorts of silly arguments that don't work.
Have you seen the simulation argument?
Yes.
The one that says that we are very probably a simulation?
Yeah, yeah, tell me about that.
That's a spectacular example where we're using probabilities without any factual grounding.
So the way it works is the following.
We end up with a position where we convince ourselves somehow that there are very many
possibilities for the way our experience of the world could come about.
And we somehow convince ourselves, and I think these arguments are already pretty shaky, but I'm looking at a particular fallacy.
We convince ourselves that there are vastly many ways that our experiences could come
about if we were computer simulations,
and relatively fewer cases in which they could come
about if the world is truly as it seems.
Let's just take that as a starting point.
I think it's already dubious that we got there.
Now we ask the question now, what's the next step?
Well, the next step is to say, we have no idea which is ours.
And I would say you stop at that point.
You have no idea which is ours.
But wait a minute, but I'm going to say, oh no, I'm going to represent my uncertainty
by probability.
Right.
And when I represent my uncertainty by probability, I find that the vast mass of the probability
ends up on the computer simulation case and only a very small amount ends up on the alternative.
Well, what's the fallacy?
Well, the fallacy is you have no factual grounding for that probability.
You have just let it fall from the sky and the result is simply an artifact of
a misapplied inductive logic.
That's it; it's as simple as that. It's an egregious fallacy, but you need something like a material theory to tell you. If instead you say, oh, I'm going to use the principle of indifference and I can use probabilities, well, you're going to be in big trouble, because the principle of indifference contradicts probabilities in cases of genuine and extreme ignorance. Mm-hmm. All right, and this is a case of genuine and extreme ignorance.
Also in other simple cases, like with a die: you just color two of the faces blue and the rest of them red, and you could say, okay, well, is it going to be red or is it going to be blue? Well, we're indifferent, and so it's 50-50. But that's not exactly...
Yeah, this goes back to Keynes.
You'll find Keynes in his, I think it's called A Treatise on Probability, early 1920s.
He has all the classic examples there.
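(To see the contradiction concretely, a minimal sketch of the die example just described:)

```python
from fractions import Fraction

# A six-sided die with two faces painted blue and four painted red.
faces = ["blue"] * 2 + ["red"] * 4

# Indifference applied to the coarse outcomes {red, blue}:
# "we're indifferent, so it's 50-50."
p_blue_coarse = Fraction(1, 2)

# Indifference applied to the six physically symmetric faces:
p_blue_fine = Fraction(faces.count("blue"), len(faces))  # = 1/3

# Same ignorance, two contradictory probability assignments: the
# principle of indifference cannot ground probabilities by itself.
print(p_blue_coarse, p_blue_fine)  # 1/2 vs 1/3
```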
Professor, thank you for spending so long with me. It's been a blast. Well, thank you. I've enjoyed talking to you. You've got a really wonderful podcast. There's
something subtle. You know the questions to ask.
I've received several messages, emails, and comments from professors saying that they
recommend theories of everything to their students and that's fantastic.
If you're a professor or a lecturer and there's a particular standout episode that your students
can benefit from, please do share and as always, feel free to contact me.
New update!
Started a Substack.
Writings on there are currently about language and ill-defined concepts as well as some other
mathematical details. Much more being written there. This is content that isn't anywhere else. It's not on theories
of everything. It's not on Patreon. Also, full transcripts will be placed there at some
point in the future.
Several people ask me, hey Curt, you've spoken to so many people in the fields of theoretical
physics, philosophy, and consciousness. What are your thoughts? While I remain impartial in interviews, this Substack is a way to peer into my present
deliberations on these topics. Also, thank you to our partner, The Economist.
Firstly, thank you for watching, thank you for listening. If you haven't
subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more
people like yourself, plus it helps out Curt directly, aka me. I also found out last year
that external links count plenty toward the algorithm, which means that whenever you share
on Twitter, say, or on Facebook, or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube, which in turn greatly aids the distribution on YouTube. Thirdly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio platforms.
All you have to do is type in theories of everything and you'll find it. Personally, I gained from rewatching lectures and podcasts.
I also read in the comments that, hey, TOE listeners also gain from replaying.
So how about instead you re-listen on those platforms like iTunes, Spotify, Google Podcasts,
whichever podcast catcher you use.
And finally, if you'd like to support more conversations like this, more content like
this, then do consider visiting patreon.com
slash Curt Jaimungal and donating with whatever you like. There's also PayPal, there's also
crypto, there's also just joining on YouTube. Again, keep in mind it's support from the
sponsors and you that allow me to work on toe full time. You also get early access to
ad free episodes, whether it's audio or video, it's audio in the case of Patreon, video in
the case of YouTube. For instance, this episode that you're listening to right now was released a few days early. Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much.