Theories of Everything with Curt Jaimungal - Jonathan Gorard: Quantum Gravity & Wolfram Physics Project
Episode Date: March 29, 2024. In today's episode Jonathan Gorard joins Theories of Everything to delve into the foundational principles of the Wolfram Physics Project. Additionally, we explore its connections to category theory, quantum gravity, and the significance of the observer in physics. Please consider signing up for TOEmail at https://www.curtjaimungal.org  Support TOE: - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - TOE Merch: https://tinyurl.com/TOEmerch  Follow TOE: - *NEW* Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org - Instagram: https://www.instagram.com/theoriesofeverythingpod - TikTok: https://www.tiktok.com/@theoriesofeverything_ - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything
Transcript
In a sense, we know that with black holes, or the Big Bang or something, that's probably an abstraction that loses usefulness
and eventually will be superseded by something more foundational.
Our universe seems to be neither maximally simple nor is it kind of maximally complicated.
There's some regularity, but it's not completely logically trivial.
It's not like every little particle follows its own set of laws,
but it's also not like we can just reduce everything to one logical tautology.
Jonathan Gorard is a researcher in mathematical physics at Princeton University and, in my
opinion, is the brains behind the rigor at the Wolfram Physics Project.
Today's conversation is quite detailed as we go into the meticulous technicalities as
if this were a conversation between two friends behind closed doors.
In this discussion we elucidate the core principles and claims of the Wolfram Physics
Project, and we distinguish them from the surrounding hype.
Specifically, we explore potential connections between category theory and quantum gravity,
we also delve into refining truth and representations, the pros and the perils of peer review, and
furthermore, we highlight the differences between Jonathan and Stephen Wolfram particularly in the context of computational and
consciousness related aspects. You should also know that there are three
interviews with Stephen Wolfram on this channel; each is linked in the
description. In them we detail the Wolfram Physics Project with Stephen
Wolfram himself and why he thinks it's a potential candidate for a theory of
everything. My name is Curt Jaimungal. For those of you who are unfamiliar, this is a channel called
Theories of Everything, where we explore theories of everything in the physics sense,
using my background in mathematical physics from the University of Toronto,
as well as explore other large, grand questions. What is consciousness?
Where does it come from? What is reality? What defines truth? What is free will and do we have it?
Of course, increasingly, we've been exploring artificial intelligence and its potential relationship to the fundamental laws.
Also, the string theory video that Jonathan mentions is called the Iceberg of String Theory,
and I recommend you check it out.
It took approximately two months of writing, four months of editing, with four editors, four rewrites, fourteen shoots, and there are seven layers.
It's the most effort that's gone into any single theories of everything video.
It's a rabbit hole of the math of string theory geared toward the graduate level.
There's nothing else like it.
If that sounds interesting to you, then check out the channel or hit subscribe to get notified.
Enjoy this episode with Jonathan Gerard.
So Jonathan, what is the Wolfram Physics Project
and what's your role in it?
That's a really good question, Kurt.
So I guess, I don't know,
there are various people involved
and I think you'll get slightly different answers
or perhaps very different answers depending on who you ask.
I'm someone who, you know,
I think we first
launched the physics project back in April 2020. We kind of leaned hard on this billing
of it's a project to find the fundamental theory of physics. That was not really how
I viewed it at the time and it's become even less how I view it over time.
Interesting.
You know, I'm just saying this as a kind of prelude to clarify that what you're about to hear is my own perspective
on it and it will probably differ quite a lot from the perspective given by some other
members of the project.
So essentially, my view is that the Wolfram Physics Project is an attempt to answer a kind
of counterfactual history question.
Back in the 17th century, Newton, Leibniz, a little bit earlier people like Descartes,
Galileo, they kind of set the stage for modern theoretical mathematical physics and more
broadly for our kind of modern conception of how the exact sciences work.
Essentially the idea was,
rather than just describing phenomena
in these kind of philosophical terms,
you could actually construct kind
of robust quantitative models of what natural systems do.
And that was enabled by a particular piece
of mathematical technology, or a particular piece
of cognitive technology, which was calculus, which
later became mathematical analysis
and the basis of differential geometry and all the kind of machinery of modern mathematical physics.
So Newton, Leibniz, building off earlier work by people like Archimedes and so on, they
built up this formalism of calculus that sort of enabled modern physics.
And arguably, that choice of formalism, that choice to base physical models on essentially analytic, calculus-based mathematical formalisms has had an impact on our physical intuition.
It involves thinking about things in terms of smooth analytic functions, in terms of continuously varying gradients of quantities.
It necessitates us formalizing notions like space and time in terms of, you know, smooth manifolds or real numbers.
It involves, you know, thinking about things like energy and momenta as being continuously
varying quantities.
And those are, of course, extremely good idealizations of what's really happening.
But I think there's always a danger whenever you have a model like that, that you start
to kind of believe in the ontological validity of the model.
And so for a lot of physicists, I feel like, you know, it's kind of seeped in and percolated our intuition
to the extent that we actually think that space is a, you know, smooth Riemannian manifold.
We think that energy is a kind of real valued function rather than these just being idealizations
of some potentially quite different, you know, underlying reality. Okay. Now, fast forward
about 300 years and you have people like Alan Turing
and Alonzo Church and Kurt Gödel in the early 20th century building up the beginnings of
what became theoretical computer science, right, as an offshoot of mathematical logic.
There were people interested in the question of, you know, what is mathematics? What is
mathematical proof? You know, what are mathematical theorems? And that kind of necessitated them
building this really quite different mathematical formalism,
which initially had different manifestations,
it had Turing machines, lambda calculus,
general recursive functions, et cetera,
which then gradually got unified
thanks to things like the Church-Turing thesis.
But so now, so in a way, again,
at least the way I like to think about it is,
the sort of stuff that Newton and Leibniz and people were doing in the 1600s, that gave
you, with analysis, a systematic way of understanding and exploring continuous mathematical structures.
What Turing and Church and Gödel and people did in the early 20th century with computability
theory gave one a systematic way of understanding discrete mathematical structures. You know,
the kinds of things that could be represented by simple computations and simple
programs.
Now, by that point, as I say, you know, calculus, these sort of calculus-based approaches had
had a 300-year head start in terms of the exact sciences.
And it took a little while before people started thinking, hmm, actually, you know, maybe we
could use these formalisms from computability theory to construct models of natural phenomena,
to construct, you know, scientific models and models for things like fundamental physics.
But of course that necessitates a quite radical departure in how we think about physical
laws, right? Suddenly you have to deviate from thinking about space as some smooth,
continuous structure and start thinking about it in terms of some discrete combinatorial
structure like a kind of network or a graph. It necessitates moving away from thinking about dynamics in terms of continuous partial
differential equations and thinking about it in terms of kind of discrete time step
updates like say the kinds that you can represent using network rewriting rules.
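To make that concrete, here is a minimal sketch, in Python, of a single hypergraph rewriting step of the kind being described. The representation and the particular rule (one of the rule shapes Wolfram has used as an example) are illustrative only, not the project's actual code.

```python
# A hypergraph is encoded as a list of hyperedges (tuples of vertex ids).
# The rule {{x,y},{x,z}} -> {{x,z},{x,w},{y,w},{z,w}} rewrites any pair of
# hyperedges sharing their first vertex, introducing a fresh vertex w.

def apply_rule_once(edges):
    """Apply the rule at the first match found; return the new hypergraph."""
    w = max(v for e in edges for v in e) + 1  # fresh vertex id
    for i, (x1, y) in enumerate(edges):
        for j, (x2, z) in enumerate(edges):
            if i != j and x1 == x2:  # the pattern {{x,y},{x,z}} matches
                rest = [e for k, e in enumerate(edges) if k not in (i, j)]
                return rest + [(x1, z), (x1, w), (y, w), (z, w)]
    return edges  # no match: the state is a fixed point

state = [(0, 0), (0, 0)]  # a minimal initial condition
for step in range(4):
    state = apply_rule_once(state)
    print(step, state)
```

Iterating a rule like this grows an ever-larger hypergraph, and it is the large-scale structure of that hypergraph that then gets interpreted as space.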
And so, you know, a lot of physicists who are kind of trained in the traditional mathematical
formalism find this quite counterintuitive because as I say, those ideas from mathematical
analysis have seeped so far into our intuitions that we think that's actually how the universe
works rather than just thinking of it as being a model.
And so the slightly poetic way that I like to think about what the physics project is
doing is we're trying to address this kind of counterfactual history question of what
would have happened if Turing was born 300 years before Newton, not the other way around. In other words, if we had,
if discrete mathematical approaches based on computability theory had a 300 year head start
in the foundations of natural science over continuous mathematical approaches based on
analysis. That's my kind of zoomed out picture of what it is that we're trying to do.
Ah ha.
So, okay, that's, there's a lot more that can be said about that, of course, and I'm sure we'll
discuss more of it later. But that's at least my big picture summary of what I think the physics
project is about. It's about trying to reconstruct the foundations of physics, not in terms of, you know,
Lorentzian manifolds and continuous space times, but in terms of things like graphs, hypergraphs,
hypergraph rewriting, causal networks, and the kinds of
discrete structures that could be represented in a very explicit,
computable way. There are some nice connections there, by the way, to
things like the constructivist foundations of mathematics that arose
in the 20th century as well. And we'll likely talk about that later, too.
In terms of my own role within it: so, there's Stephen Wolfram, who I know has appeared on TOE a number
of times and has been by far the single most energetic evangelist of these ideas for a
very long time.
He wrote back in 2002 this book, A New Kind of Science, in which he first postulated the
beginnings of these ideas about maybe it's
useful to think of fundamental physics in terms of network automata and things like
that.
And had some initial hints towards, okay, here's how we might be able to get general
relativity, beginnings of quantum mechanics, those kinds of things out of those systems.
But then those ideas basically lay dormant for a long time.
NKS had this kind of maelstrom of attention for a couple of years,
and then it was mostly ignored, at least by physicists; at least that's my impression.
Whereas I, as a teenager, read NKS and I, like many people,
found certain aspects of the way the book is written a little bit off-putting,
but I thought that there were many core ideas in it that were really, really quite foundationally important.
And one of them was this idea about fundamental physics. And so for a while I kind of advocated, like, we should be trying to build physics on these kinds of computable models, if only just to see where that leads us. And so I started to do some initial work
in those directions, nothing particularly profound.
But also I would repeatedly badger Stephen
maybe every year or so and say,
we should go and actually try and do
a more serious investigation of these things.
And then finally-
Sorry, just a moment.
You said that you would be working on these
prior to going to the Wolfram School, the summer school.
Yes, yeah, exactly.
So I went to the Wolfram Summer School in 2017
as a consequence of my interest in these models.
So I'd already been doing a little bit of my own work
on this stuff, trying in large part, in a sense,
to rediscover what Stephen had already done.
He had these big claims in NKS about being able to derive, you know, Einstein equations
and things from these, from these graph rewriting models.
But the details were never included in the book.
And I tried to ask Stephen about them and he kind of said,
oh, I can't really remember how I did that now.
And so I spent quite a lot of time
trying to kind of reconstruct that.
And that, you know,
was the thing that resulted in me attending the summer school and then being kind of pulled into
Stephen's orbit.
Uh huh.
And is it your understanding that Stephen actually did have a proof?
He just wasn't able to recall it, like Fermat, or it was too small of a space to publish,
or that he thinks he was able to prove it, but the tools weren't available at the time.
And you think back like maybe he had a sketch, but it wasn't.
Well, it's Leonardo sketch versus the Mona Lisa.
Right, right.
I, um, I think the Leonardo sketch versus the Mona Lisa analogy is
probably the right one.
So my suspicion, based on what I know of the history of that book, and also
based on what I know of Stephen's personality, is that Stephen had proved it to his own
satisfaction, but probably not to the satisfaction of anyone else, right? So, um, I think, you know, many of us are like this, right?
Like if you encounter some problem, or you know, some phenomenon you don't really understand and
you go away and you try and understand how it works or you try and prove some result about it and
eventually you convince yourself that it can be done, or you convince yourself
that there is an explanation, but you don't necessarily tie together all the details to
the point where you could actually publish it and make it understandable to other people.
But kind of to your own intellectual satisfaction, it's like, oh yeah, now I'm at least convinced
that that can work.
My impression is that that's essentially where the kind of physics project
formalism ended
up in 2002.
That Stephen thought about it for a while, had some research assistants look at it,
and eventually they convinced themselves, yes, it would be possible to derive Einstein
equations from these kinds of formalisms.
From what I've seen of the material that was put together and so on, I don't think anyone
actually traced that proof with complete mathematical precision. Eventually in 2019, Stephen, myself, and Max Piskunov,
we decided for various reasons that it was kind of
the right time for us to do this project in a serious way.
Stephen had some new ideas about how we could simplify
the formalism a little bit.
I'd made some recent progress in kind of understanding
the mathematical underpinnings of it.
Max had just finished writing some really quite nicely optimized low-level C++ code
for enumerating these hypergraph systems really efficiently.
And so we decided, okay, if we're not going to do it now, it's never going to happen.
And so that was then the beginnings of the physics project.
And so now I'm less actively involved in the project as a kind of branding entity, but I'm still
kind of actively working on the formalism and still trying to push ahead in various
mathematical directions, trying to kind of concretify the foundations of what we're doing
and make connections to existing areas of mathematical physics.
I see, I see.
So I've also noticed a problem, similar to the one you describe, across society and across history: people entwine a prevalent application with some ontological status.
What I mean by that is you'll have a tool which is ubiquitous in its usefulness. And then
you start to think that there's some reality synonymous with that. So another example would
be an ancient poet who would see the power of poetry and think
that what lies at the fundament is narrative pieces.
Or a mystic who sees consciousness everywhere, almost by definition, and then believes consciousness
must lie at the root of reality.
And some people, Max Tegmark would be an example of this, find that math is so powerful, it
must be what reality is. So it's also not clear to me whether computation is another such
fashionable instance of a tool being so powerful that we mistake its
effectiveness for its substantiveness. And I understand that Stephen may think
differently. I understand that you may think differently. So please explain.
That's a fantastic point, and I suspect, at least from what you've said, that our views may be quite similar
on this. I'm reminded of this meme that circulated on Twitter a little while ago about
people saying, you know, immediately after the invention of kind of writing systems and
narrative structure, everyone goes, ah, yes, the universe, you know, the cosmos must be
a book, right?
And then, you know, immediately after the invention of mathematics, ah, yes, the cosmos
must be made of mathematics.
And then, you know, immediately after the invention of the computer,
ah, yes, the cosmos must be a computer.
Um, so yeah, I think that, you know, it's a folly that we've fallen
into throughout all of human history.
And so yeah, my feeling about this is always that, you know, we build
models using the kind of ambient technology of our time.
And when I say technology, I don't just mean, you know, nuts and bolts technology.
I also mean kind of thinking technology, right?
So you know, there are kind of ambient ideas and processes that we have access to.
And we use those as a kind of raw substrate for making models of the world.
So you know, it's unsurprising that when people like Descartes and Newton built models of the cosmos, you
know, of the solar system and so on, they described them in terms of clockwork by analogies
to clockwork mechanisms, right? And, you know, Descartes even sort of more or less directly
wrote that he thought that, you know, the solar system was a piece of clockwork. Whether
he actually thought that in an ontological sense or whether it was just a kind of poetic
metaphor, I don't completely know. But, you know, it's sort of obvious that that would happen, right?
Because, you know, the 15th century, 16th century, that was sort of the height of clockwork technology in ambient society.
And so, you know, we live right now in arguably the zenith of kind of computational technology.
And so again, it's completely unsurprising that we build models of the cosmos based largely,
or at least partly,
on computational ideas.
Yeah, I agree.
I think it would be a folly.
And I think you're right.
This is maybe one area where perhaps Stephen and I differ slightly in our kind of philosophical
conception.
I personally feel like it's folly to say, therefore, you know, the universe must be
a computer, right? Yeah, my feeling about it is that
the strongest we can say is that, you know,
modeling the universe as a Turing machine
is a useful scientific model
and it's a useful thinking tool by which to reason
through kind of various problems.
I think it's, yeah, I would be uncomfortable
endowing it with any greater ontological significance than that.
That being said, of course, there are also lots of examples where people have made the opposite mistake.
The classic example people cite is Hendrik Lorentz, who basically invented the whole formalism of special relativity,
but he said, oh no, no, this is just a mathematical trick.
He discovered the right form of the time dilation and the length contraction,
but he said that this is just some coordinate change. It doesn't have any physical effect.
It's just a formalism. And then really the contribution of Einstein was to say, no, it's
not just a formalism. This is an actual physical effect. And here's how we might be able to
measure it. And so, yeah, I'm just trying to indicate that you
have to thread a delicate needle there.
So you mentioned Turing, and there's another approach called constructor theory, which
generalizes Turing machines, or universal Turing machines, to universal constructors.
So-called universal constructors.
So I'd like you to explain what those are to the degree that you have studied it, and
then its relationship to what you work on at the Wolfram Physics Project.
And by the way, string theory, loop quantum gravity, they have these succinct names, but
WPP doesn't have a graspable, apprehensible name, at least not to me to be able to echo
that.
So is there one that you all use internally to refer to it? Okay, so on that, yeah, I'm not a fan of the naming of the Wolfram Physics Project or indeed
even the Wolfram model, which is a slightly more succinct version. In a lot of what I've
written I describe it, I use the term hypergraph dynamics or sometimes hypergraph rewriting
dynamics because I think that's a more descriptive title for what it really is.
But no, I agree. I think as a branding exercise, there's still more work that needs to be done.
So for the sake of us speaking more quickly, we'll say the HD model.
So in this HD model, what is its relationship to...
What was the category? No, it wasn't category.
It was...
Constructor theory.
Constructor, right.
Okay, so what is the HD model's relationship to constructor theory?
Although that's an interesting Freudian slip because I think basically the relationship
is category theory, right?
So yeah, okay.
So, I mean, with the proviso that, you know, again, I know that you've had Chiara Marletto
on TOE before, right?
So I'm certainly not an expert on constructor theory.
I've read some of Chiara's and David
Deutsch's papers on these topics. But so as you say, I can give an explanation to the extent that
I understand it. So, as I understand it, the central idea with constructor theory is
rather than describing physical laws in terms of kind of, you know, equations of motion, right? So
in the traditional conception of physics, we would say, you've got some initial state
of a system, you have some equations of motion that describe the dynamics of how it evolves,
and then it evolves down to some final state.
The idea with constructor theory is you say rather than formulating stuff in terms of
equations of motion, you formulate things in terms of what classes of transformations
are and are not permitted. And I think one of the classic examples
that I think Deutsch uses in one of his early papers, and I know that Chiara has done additional
work on, is the second law of thermodynamics, and I think also the first law of thermodynamics,
right? So, thermodynamic laws are not really expressible in terms of equations of motion,
or at least not in a very direct way. They're really saying quite global statements about
what classes of physical transformations are and are not possible, right?
They're saying you cannot build a perpetual motion machine of the first kind or the second kind or whatever, right?
That there is no valid
procedure that takes you from this class of initial states to this class of final states that, you know,
reduce global entropy or that, you know, create free energy or whatever, right?
And that's a really quite different way of conceptualizing the laws of physics.
So Constructor Theory, as I understand it,
is a way of applying that to physics as a whole,
to saying we formalize physical laws
not in terms of initial states and equations of motion,
but in terms of initial substrates, final substrates,
and constructors, which are these general processes
that I guess one can think of as
being like generalizations of catalysts. It's really a grand generalization of the theory
of catalysis in chemistry. You're describing things in terms of this enables this process
to happen, which allows this class of transformations between these classes of substrates or something.
Now you brought up, inadvertently, this question of category theory or this
concept of category theory.
And I have to be a little bit careful with what I say here because I know that the few
people I know who work in constructor theory say that what they're doing is not really
category theory.
But I would argue that, in terms of the philosophical conception of it, it has some quite remarkable
similarities. So, to pivot momentarily to talk about the duality between set theory and category
theory as foundations for mathematics: you know, since the late 19th century,
early 20th century, it's been the kind of vogue to build mathematical foundations based on set theory, based on things like Zermelo-Fraenkel set
theory or von Neumann-Bernays-Gödel set theory and other things, where
your fundamental object is a set, some collection of stuff,
which then you can apply various operations to, and the idea is you build
mathematical structures out of sets. Now, set theory is a model of mathematics
that depends very heavily on internal structure. So for instance, in the standard axioms of
set theory, you have things like the axiom of extensionality that essentially says two
sets are identical if they have the same elements. So it involves
you identifying sets based on looking inside them
and seeing what's inside.
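Written out formally, for reference, the axiom of extensionality is the standard statement:

```latex
\forall A \,\forall B \,\bigl[\, \forall x \,( x \in A \leftrightarrow x \in B ) \;\longrightarrow\; A = B \,\bigr]
```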
But there's another way that you can think
about mathematical structure, which is you say,
I'm not going to allow myself to look inside this object.
I'm just going to treat it as some atomic thing.
And instead I'm going to give it an identity
based on how it relates to all other objects
of the same type.
So what transformations can I...
So, you know, to give a concrete example, right, suppose I've got some topological space.
So the kind of set-theoretic view is, okay, that topological space is a set of points.
It's a collection of points that have a topology defined on them.
The kind of more category theoretic view would be to say actually that topological space is defined as the collection of continuous
transformations that can be applied to it. So that space can be continuously
deformed into some class of other spaces and that class of other spaces that it
can be deformed into is what identifies the space you started from. And so
you can define that without ever having to talk about
points or, you know, what was inside it. In fact, there's this whole
generalization of topology called pointless topology, or locale theory, which is all about
doing topology without an a priori notion of points. So, uh, in a way it necessitates
this conceptual shift from an internal-structure view to a kind of process-theoretic
view.
And so that was a viewpoint that was really advocated by the pioneers of category theory, like Samuel Eilenberg and Saunders Mac Lane, and also some other
people who were working in topology, like Jean-Pierre Serre and Alexander Grothendieck
and others.
It was a kind of radically different way to conceptualize the foundations of
mathematics.
Sorry to interrupt.
Just as a point for the audience, you mentioned the word duality between
sets and categories. Now, do you mean that in a literal sense, or just morally there's
a duality because category theorists make a huge fuss that what they're dealing with
aren't always sets; small categories are sets, or can be thought of as sets, but not
categories as such. Right, right. Okay. Yeah. And I shouldn't have said that. I mean, yes, no.
The short answer is no, I don't mean duality in any formal sense.
And in particular, it's a dangerous word to use around category theorists
because it means something very precise. It means that dual concepts are ones that are equivalent
up to reversal of the direction of morphisms. I certainly don't mean that. I meant duality in the sense that there is a precise sense
in which set theory and category theory are equivalently valid foundations
for mathematics. And that precise sense is, and hopefully, I mean, we can go deep into the
weeds on that if you want,
we'll see where the conversation goes. But the basic
idea is there's a branch of category theory called elementary topos
theory, which is all about using category theory as a foundation for logic and mathematics.
And the idea there is, from a category-theoretic perspective, sets just form one particular category.
There is a category called Set, whose objects are sets and whose transformations,
whose morphisms, are functions between sets. And so then you might say, well, you know,
why is set so important? Like what's so great about set that we build all mathematics on that?
It's just one random category in the space of all possible categories. So elementary topos
theory is all about asking what are the essential properties of set
that make it a quote unquote good place to do mathematics?
And can we abstract those out and figure out
some much more general class of mathematical structures,
some more general class of categories
within which, internal to which
we can build mathematical structures?
And that gives us the idea of an elementary topos. I'm saying elementary because there's a slightly different idea called a Grothendieck
topos that's related but not quite equivalent and whatever. But generally when
logicians say topos, they mean elementary topos. So yeah, there's a particular kind
of category which has these technical conditions that it has all finite limits and it possesses
a sub-object classifier or equivalently a power object. But basically what it means is that it has the minimal algebraic
structure that sets have, that you can do analogs of things like set intersections,
set unions, that you can take power sets, you can do subsets. And it kind of detects
for you a much larger class of mathematical structures, these elementary toposes, which
have those same features. And so then the argument goes, well, therefore you can build
mathematics internal to any of those toposes, and the mathematical structures that you get
out are in some deep sense isomorphic to the ones that you would have got if you built
mathematics based on set. So that's the precise meaning of,
I guess, what I was saying. That in a sense, there are these set theoretic foundations,
there are these category theoretic foundations that come from topos theory, and there is some
deep sense in which it doesn't matter which one you use, right? That somehow the theorems you prove
are equivalent up to some notion of isomorphism in the two cases.
Yes, and now the relationship between constructor theory and HD, which is the hypergraph dynamics
or the Wolfram Physics Project, for people who are just tuning in.
Right, right. So, yes, the excursion to talk about category theory is, in a sense,
my reason for bringing that up is because I think that same conceptual shift that I was describing, where you go from thinking about internal structure to thinking about
kind of process theories, that's been applied to many other areas. It's been applied, say,
in quantum mechanics, right? So where there's in the traditional conception, you'd say quantum
states are fundamental, and you know, you have Hilbert spaces that are spaces of quantum
states, and then you have operators that, you know, transform those Hilbert spaces,
but they're somehow secondary; that's the kind of von Neumann-Dirac picture.
Then there's this rather different formalization of the foundations of quantum mechanics that's
due to Samson Abramsky and Bob Coecke, which is categorical quantum mechanics, where the idea is
you say actually the spaces of states, those are secondary and what really matters are quantum
processes. What matters are the transformations from one space of states to another. And you describe quantum mechanics purely in terms
of the algebra of those processes. And there are many other examples of that. I mean, things
like functional programming versus imperative programming or lambda calculus versus Turing
machines. In a sense that these are all instances of thinking about things in terms of processes and functions rather than in terms of states and sets.
I view constructor theory as being the kind of processes and functions version of physics,
whereas traditional mathematical physics is the kind of sets and structures version of
physics.
In a sense, the hypergraph dynamics view slash Wolfram Model view, however you want to describe
it, is one that nicely synthesizes
both cases. Because in the hypergraph dynamics case, you have the internal structure,
in that you have an actual hypergraph and you can look inside it and you can talk about
vertices and edges and so on. But you also have a kind
of process algebra because you have this multi-way system where
you know, so I apply lots of different transformations to the hypergraph and I don't just get a single
transformation path, I get this whole tree or directed acyclic graph of different transformation
paths.
And so in a sense you can imagine defining an algebra, and we've done
this in other work, where, you know, you have a kind of rule for how you compose different edges in the multi-way
system both sequentially and in parallel. And you get this nice algebraic structure
that happens to have a category theoretic interpretation. And so in a way, the pure
hypergraph view, that's a set theory structural view. The pure multi-way system view, that's
a pure process-theoretic view.
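To make the multiway idea concrete, here is a minimal sketch (again illustrative, not the project's code) that, instead of committing to a single rewrite, applies every possible match of the same simple rule as before and records the resulting branching of states:

```python
# Enumerate a multiway system: from each state, apply the rule
# {{x,y},{x,z}} -> {{x,z},{x,w},{y,w},{z,w}} at *every* match, collecting
# all successor states and the transitions between them.

from itertools import permutations

def successors(edges):
    """All states reachable by one rule application, as canonical tuples."""
    out = []
    w = max(v for e in edges for v in e) + 1  # fresh vertex id
    for i, j in permutations(range(len(edges)), 2):
        (x1, y), (x2, z) = edges[i], edges[j]
        if x1 == x2:
            rest = [e for k, e in enumerate(edges) if k not in (i, j)]
            out.append(tuple(sorted(rest + [(x1, z), (x1, w), (y, w), (z, w)])))
    return out

frontier = {tuple(sorted([(0, 0), (0, 0)]))}
transitions = []  # edges of the multiway graph: (state, successor) pairs
for _ in range(2):  # breadth-first, two generations deep
    nxt = set()
    for state in frontier:
        for s in successors(list(state)):
            transitions.append((state, s))
            nxt.add(s)
    frontier = nxt
print(len(transitions), "multiway transitions found")
```

The directed graph formed by these transitions is the multiway system; different matches that happen to produce the same canonical state correspond to branches that merge.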
And then one of the kind of really interesting ideas
that comes out of thinking about physics in those terms
is that general relativity and quantum mechanics
emerge from those two limiting cases.
Right?
So in a sense, if you neglect all considerations of the multi-way system
and you just care about the internal structure of the hypergraph
and the causal graph, and you define a kind of discrete differential
geometric theory over those, what you get in some appropriate limit is general relativity
for some class of cases. On the other hand, if you neglect all considerations of the internal
structure of the hypergraph and you care only about the process algebra of the multi-way
system, what you get is categorical quantum mechanics. You get a symmetric monoidal category that has the same algebraic structure as the
category of finite-dimensional Hilbert spaces in quantum mechanics. And so in a sense, the
traditional physics, which is very structural, gives you one limit, gives you the general
relativistic limit. The kind of more constructor-theoretic view, which is more process theoretic, more category oriented, gives you another limit, gives you the quantum
mechanics limit.
Yeah.
And do you need a dagger symmetric monoidal category, or is symmetric monoidal enough?
You do need it to be dagger symmetric.
That's a very important point.
So I'm going to assume not all of your followers or listeners are card-carrying category theorists.
So just as a very quick summary of what
Kurt means by dagisymmetric.
So actually, maybe we should say what
we mean by symmetric monoidal.
So if you have a category, if you just think of it
as some collection of simple processes,
like in the multi-way system case,
it's just individual rewrites of a hypergraph,
then you can compose those things together
sequentially. You can apply rewrite one, then rewrite two, and you get some result. And you
can do that in any category. There are also cases where you can apply them
in parallel. You can do rewrite one and rewrite two simultaneously, and in a multi-way system
that's always going to be possible. And then you get what's called a monoidal category,
or actually a symmetric monoidal category, if it doesn't matter which way around you compose them.
And that kind of generalizes the tensor product structure in quantum mechanics.
And then you can also have what's called a dagger structure.
And so the dagger structure is a thing that generalizes the Hermitian adjoint operation
in quantum mechanics, the thing that gives you time reversal.
So in that case, then you have some operation that you can take a rewrite and you can reverse it.
And for hypergraph rewriting rules, there's a guarantee that you can always do that.
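Schematically, in standard category-theoretic notation, the structures just described amount to:

```latex
\begin{aligned}
f : A \to B,\;\; g : B \to C \;&\Longrightarrow\; g \circ f : A \to C
  &&\text{(sequential composition)}\\
f : A \to B,\;\; g : C \to D \;&\Longrightarrow\; f \otimes g : A \otimes C \to B \otimes D
  &&\text{(parallel composition)}\\
A \otimes B \;&\cong\; B \otimes A
  &&\text{(symmetry)}\\
f^{\dagger} : B \to A, \quad (f^{\dagger})^{\dagger} = f, \quad
(g \circ f)^{\dagger} &= f^{\dagger} \circ g^{\dagger}
  &&\text{(dagger)}
\end{aligned}
```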
There's yet another level of structure, which is what's called a compact closed structure,
which means that you can essentially do the analogue of taking duals of spaces.
So for those people who know about quantum mechanics, that's the generalization
of exchanging bras for kets and vice versa. And again, you can do that in the case of
hypergraphs. There's a natural duality operation because you can, for any hypergraph, you can
construct a dual hypergraph whose vertex set is the hyperedge set of the
old hypergraph and whose hyperedge set is the incidence structure of those hyperedges. And that gives you a duality that satisfies the axioms of
compact closure.
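Here is a minimal sketch of that duality construction; the encoding (unordered hyperedges indexed by position) is a simplification for illustration:

```python
# The dual's vertices are the original's hyperedges (named by index here),
# and each original vertex v becomes a dual hyperedge collecting all of
# the original hyperedges incident to v.

def dual_hypergraph(edges):
    """edges: list of tuples of vertex ids. Returns the dual as a list of
    tuples of hyperedge indices, one per original vertex."""
    vertices = {v for e in edges for v in e}
    return [tuple(i for i, e in enumerate(edges) if v in e)
            for v in sorted(vertices)]

h = [(1, 2, 3), (2, 3), (3, 4)]
print(dual_hypergraph(h))  # [(0,), (0, 1), (0, 1, 2), (2,)]
```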
And yeah, in a sense, the key idea behind categorical quantum mechanics is that if you
have one of these dagger structures, you have a compact closed structure, you have a symmetric
monoidal structure, and they're all compatible, then what you've got is, again, by analogy to topos theory, some mathematical structure in which you can build
a theory that is isomorphic to quantum mechanics. And that's what we have in the case of
multiway systems.
So when we spoke approximately three years ago, I believe we had a zoom meeting, it could
have been a phone call. I recall that you were saying that you were working maybe the year prior on something where your operators, your measurements
don't have to be self-adjoint. And the reason was self-adjointness is there because we want real
eigenvalues. And that just means for people who are listening, you want to measure something that's
real, not imaginary. What would an imaginary measurement even mean, when a measurement usually comes
down to ticks or no ticks on the measurement device?
But then I recall you said that you were working on constructing quantum mechanics with observables
that weren't... So self-adjointness is required... Sorry, self-adjointness implies real eigenvalues,
but there were other ways of getting real eigenvalues that aren't self-adjoint. I don't know if I misunderstood what you said or if I'm recapitulating incorrectly,
but please spell out that research if this rings a bell to you.
So your memory is far better than mine. So that sounds like a very accurate summary of
something I would have said, but I actually have no memory of saying it. So, um, but yes.
And to be clear, uh, that's by no means my idea. So, uh, there's a field
called PT-symmetric quantum mechanics, sometimes known as non-Hermitian quantum mechanics,
uh, which has various, uh, developers. Carl Bender is one of them. I think there's a guy
called Jonathan Brody... oh, Dorje Brody.
I can't remember.
Carl Bender, so I just spoke to him
a couple of months ago, coincidentally.
Oh, well, you should have asked him this question.
He's the expert, right?
So, yes, so Bender and Brody and others,
Dorje Brody, and I don't know if there's another person. Maybe Jonathan Keating is involved somehow.
Maybe Jonathan Keating is involved somehow.
Sure.
But anyway, so it's been a little while since I thought about this, as you can tell.
But so yes, there's a generalization of standard, you know, unitary Hermitian quantum mechanics.
So yeah, as Curt mentioned, in the standard mathematical formalism of quantum mechanics,
your measurement operators are assumed to be Hermitian.
So when you take the adjoint of
the operator, you get the same
result. And your evolution operators are assumed to be unitary, so that when you take the adjoint,
you get the time reversal of the result. And in a sense, that's the key difference between
evolution and measurement in the standard formalism. And we know that, yeah, if your Hamiltonian, the thing that appears in the Schrödinger equation,
if that's a Hermitian operator, then the solution to the Schrödinger equation that gives you the evolution operator
is guaranteed to be unitary. And also the eigenvalues of the measurement operator,
which, as Curt said, are in a sense, you know, your measurement outcomes,
those are guaranteed to be real. That's a sufficient
condition, hermiticity, but it's not a necessary one, so you can have non-Hermitian, uh,
measurement operators that still give you real eigenvalues. And where you don't get a unitary
evolution operator, uh, in the algebraic sense, but you get what is often called, I think in these
papers it's referred to as, kind of physical unitarity.
So, unitarity means a bunch of things, right?
So algebraically, as I say, it means that when you apply the adjoint operator, you get
the time reversal. And therefore, if you take a unitary evolution operator and its adjoint,
you get the identity matrix or the identity operator. So as soon as you have non-Hermitian Hamiltonians,
that ceases to be true. And you also end up with problems with probabilities. So in the interpretation
where your quantum amplitudes are really kind of related to probabilities, where you take
the absolute value of the amplitude squared and that gives you the probability, now as
soon as you have non-unitary evolution operators, your probabilities are
not guaranteed to sum to one. So that looks on the surface like it's completely hopeless.
But actually, as I say, you can still get real measurement outcomes. The interpretation
of the norm squares of the amplitudes as being probabilities, that's simply an interpretation.
It's not mandated by the formalism. And what people like Bender and Brody showed was that
you could still have a consistent theory where you have parity time symmetry. So you still
have a time symmetric theory of quantum mechanics. It's still invariant under parity transformations.
And it's still possible, even when you apply one of these non-unitary evolution operators
to some initial state, it's still always possible to reconstruct what the initial state was from the final state. I mean, that's really what time symmetry means. And so it was widely believed,
I think for a long time, that if you didn't have amplitudes whose norm squared sums to one,
then you wouldn't be able to do that. And what Bender and Brody showed was that, no, you can,
you just have to be, you still have restrictions,
but they're just weaker than the restrictions
we thought existed.
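As a small numerical illustration of what is being described, here is Bender's standard 2x2 PT-symmetric example; the parameter values are mine, chosen to sit in the unbroken-PT regime where s^2 > r^2 sin^2(theta). The matrix is not Hermitian, yet its eigenvalues are real, and its time evolution is not algebraically unitary:

```python
# A non-Hermitian but PT-symmetric 2x2 Hamiltonian with real eigenvalues.
import numpy as np
from scipy.linalg import expm

r, s, theta = 1.0, 2.0, 0.7  # unbroken PT phase: s**2 > (r*sin(theta))**2
H = np.array([[r * np.exp(1j * theta), s],
              [s, r * np.exp(-1j * theta)]])

print(np.allclose(H, H.conj().T))        # False: H is not Hermitian
print(np.linalg.eigvals(H))              # both eigenvalues numerically real

U = expm(-1j * H)                        # evolution for unit time
print(np.allclose(U @ U.conj().T, np.eye(2)))  # False: U is not unitary
```

The eigenvalues here are r cos(theta) ± sqrt(s^2 - r^2 sin^2(theta)), which are real precisely in the unbroken phase; cross into s^2 < r^2 sin^2(theta) and they become a complex-conjugate pair.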
So I was probably bringing that up because at the time,
well, okay, two reasons.
One was, it turns out there are these nice connections,
which I was a little bit obsessed with a few years back,
between PT symmetric quantum mechanics
and the Riemann hypothesis.
So a colleague of mine, a former colleague of mine from Wolfram Research, Paul Abbott,
was the person who actually told me about this.
And so the idea there is, there's this thing called the,
OK, let me get this right,
there's a thing called the Hilbert-Pólya conjecture, which I think is reasonably
well known; at least people in our kind of area have often heard
of it. Yeah, and it's the idea that somehow the non-trivial zeros of the Riemann zeta
function should be related to the eigenspectrum of some manifestly self-adjoint
operator.
And so it's somehow a connection between the analytic number theory of
you know, the zeta function, and the kind of operator-theoretic foundations of quantum mechanics.
And then there's the thing called the Berry-Keating Hamiltonian.
So Mike Berry and Jonathan Keating constructed what they conjectured to be a Hilbert-Pólya-type Hamiltonian. So in other words, a Hamiltonian
where if you could prove that it was manifestly self-adjoint, it would be equivalent to proving
the Riemann hypothesis. The problem is that Hamiltonian is actually not self-adjoint.
It's not Hermitian in the traditional sense, but it is Hermitian in this PT-symmetric sense.
So it's not algebraically
Hermitian, it's not equal to its own adjoint, but it's still a valid Hamiltonian for parity
time symmetric quantum mechanics. And so by trying to think about the Riemann hypothesis
in terms of quantum formalism, you end up being inevitably drawn into thinking about
non-Hermitian foundations
and these kind of PT-symmetric formulations. So that's how I first learned about this.
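For reference, the Berry-Keating Hamiltonian is the symmetrized quantization of the classical function xp:

```latex
\hat{H}_{\mathrm{BK}} \;=\; \tfrac{1}{2}\left(\hat{x}\hat{p} + \hat{p}\hat{x}\right)
```

and the Hilbert-Pólya hope is that some suitable self-adjoint realization of an operator like this has eigenvalues E_n such that the non-trivial zeta zeros sit at 1/2 + i E_n.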
I suspect I was talking about it at the time, partly because I was just interested in that
connection. It turns out that the spectra of these operators are related not just to
the Riemann zeta function, but also to what's called the Hurwitz zeta function and several
other objects in analytic number theory.
But also, this has turned out to be false, but at the time I thought that
the version of quantum mechanics that we would end up with from these multi-way systems would
be a PT-symmetric formalism for quantum mechanics, not standard quantum mechanics. As it turns
out, actually there's a way you can do it where you get standard quantum mechanics complete
with proper hermiticity and unitarity, so you don't really need to worry about that but at the time
I was quite nervous that we weren't going to get that but we were going to get some weird non-Hermitian
version of quantum mechanics and we'd have to worry about that. Do you end up getting both or just one?
So there is a construction where you can get, I mean, what I want to stress is that there's
no, you know, there's
no canonical way: if you were just given a multi-way system and told to derive quantum mechanics,
right, there's no canonical way to do that.
Okay.
The approach that we ended up taking was to show that, as I say, that there's this algebraic
structure that has this dagger symmetric, compact closed, monoidal category property. And therefore,
you can get standard quantum mechanics, because standard quantum
mechanics is what's developed kind of internal to that category.
But in order to do that, we had to make a whole bunch of really quite arbitrary choices.
And so I strongly suspect that there are other ways, there are ways that you could define
an algebraic structure that is essentially a non-Hermitian PT-symmetric formulation.
I don't personally know the way to do it.
So just as an aside, a pedagogical aside for the people who aren't mathematicians or physicists,
they hear terms like closed, compact, symmetric, monoidal, dagger, unitary, adjoint, and they're
wondering why are we using these words to describe physical processes?
And the reason is because the mathematicians got there first. So physicists are trying to describe something and then they see
that there are some tools invented by other people, going by other names, and
then they end up applying them in physical situations. But when the
physicist gets there first, they're often intuitive names. Momentum, spin up, spin
down. It actually makes more sense. So just in case people are wondering whether
this terminology is needlessly complex: well, it can be to the outsider, but as you become familiar with the terms, you
just realize historically, if you want to communicate to mathematicians and vice versa,
then just use whatever terms were invented first.
I would say there's the opposite problem as well, right? I mean, there are cases where
physicists discovered concepts first that have basically been subsumed into mathematics
and the physical names don't really make any sense in the mathematical context. There are things
like physicists, because of general relativity, were really the first people to seriously
think about and formalise notions like torsion in differential manifolds and concepts like
metric affine connections. The standard connection that you define on a manifold with torsion
is the spin
connection, so named because it was originally used in these metric affine theories where
you have a spin tensor that describes the spin of particles.
So now there are these ideas in algebraic and differential geometry called spin connections
and spin holonomies that have nothing to do with spin, nothing to do with particle physics,
but it's just a relic of the kind of physical origins of the subject.
There are several cases of that too.
Yeah, I haven't announced this and I'm not sure if I'll end up doing this.
I've been writing a script for myself on words that I dislike in physics and math.
Sometimes they'll say something like, what's it called, the Kullback?
The Kullback-Leibler divergence?
Yeah, the Kullback-Leibler divergence.
Okay, if you just say that, it doesn't mean anything.
You have to know what it's defined as.
So calling something the earth mover's distance
is much more intuitive.
And then I have this whole list of words that I say,
okay, it's so foolish to call it this.
Why don't you just call it by its descriptive name?
And then I suggest some descriptive names.
And there's another class of
foolish names.
Torsion is one of them, but it's not because it's a bad name. It's because it's used in different senses.
On an elliptic curve, there's torsion, but it has nothing to do with the torsion in differential geometry.
And, as far as I can tell, maybe you can tell me the difference here, in cohomology
there's also torsion, where if you compute with integer coefficients and then pass to the reals, and they're not equivalent, then you say it has torsion.
Yes, yes.
But it's not the same as the differential geometric torsion as far as I can tell.
I think that's true.
Yeah.
So I think that confusion has arisen because it's one of these examples of like, you know, independent evolution.
So there was a notion of torsion
that appeared in group theory
that then because of that got subsumed into,
as you say, things like homology theory and cohomology theory.
So in group theory,
a group is defined as being torsion
if it has only generators
of finite order.
So the generators of a group,
the things that you multiply,
you exponentiate to get all elements of the group,
if the group is generated only by generators of finite order,
then you say it's a torsion group.
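For reference, the standard textbook formulation puts the condition on all elements, not just on a generating set:

```latex
G \text{ is a torsion group} \;\iff\; \forall g \in G \;\, \exists\, n \ge 1 : g^{n} = e,
\qquad
T(G) = \{\, g \in G : \operatorname{ord}(g) < \infty \,\}
```

where T(G), the set of torsion elements, forms the torsion subgroup when G is abelian.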
And you can talk about torsion subgroups,
or you could talk about the torsion part of a group. And so yeah, it appears a lot in the theory of elliptic
curves because, you know, there are things like the Mordell-Weil theorem that are talking
about, you know, when you take rational points on elliptic curves, you can
ask about how large is the torsion part, how large is the non-torsion part. And there are
things like the Birch and Swinnerton-Dyer conjecture that are all about relating those ideas. But
then yeah, then quite independently, there was a notion of torsion that appeared
in differential geometry that, as you know,
is, essentially,
a measure of, you know, if I have points
x and y, how different is going
from x to y versus going from y to x.
And the name there
comes from the fact that in the classical kind of
Gaussian theory of geometry of surfaces,
it's
really the concept that gives you the
torsion of a curve, right? You know, how much the curve is twisting. Yeah, as far as I know,
those two names are unrelated. And as you say, there are these awkward areas like homology
theory where it's partly about spaces and it's partly about groups. And so it's kind
of unclear which one you're talking about.
This is a great point to linger on here, particularly about torsion,
because I have a video that is controversially titled 'Gravity is not Curvature.'
For some context, here's the String Theory iceberg video that's being referenced,
where I talk about how gravity is not curvature. The link is in the description.
If you listen to this podcast, you'll hear me say often that it's not so clear gravity is
merely the curvature of spacetime. Yes, you heard that right.
You can formulate the exact predictions of general relativity, but with a model of zero
curvature and non-zero torsion; that's Einstein-Cartan.
You can also assume that there's no curvature and there's no torsion, but there is something
called non-metricity.
That's called symmetric teleparallel gravity.
Something else I like to explore are higher spin gravitons.
That video is controversially titled 'Gravity is not Curvature.' It's just saying that
there are alternative formulations with torsion or non-metricity. For people who don't know,
general relativity is formulated as gravity being the curvature of spacetime. But you can get
equivalent predictions if you don't think in terms of curvature: you can have zero curvature
but the presence of so-called torsion, or zero curvature and zero torsion but the presence of so-called non-metricity.
Okay, these are seen as equivalent formulations, but I'm wondering if the Wolfram Physics Project or the hypergraph dynamics approach actually identifies one of them as being more canonical. So, unfortunately, I think, at least based on stuff that I've done, I think the answer is no.
And also, I think it actually makes the problem worse.
I mean, if you're concerned by the fact that there's this apparent arbitrary freedom of
do you choose to fix the contorsion tensor
or the non-metricity tensor or the curvature tensor or whatever,
Thinking about things in terms of hypergraphs, you actually get yet another free parameter
which is dimension.
In a hypergraph setting, again, I know you've had Stephen on here before and I know that
he's covered a lot of these ideas, so I'll just very briefly summarize.
So, you know, hypergraphs have no a priori notion of dimension.
They have no a priori notion of curvature.
You can calculate those things subject to certain assumptions where you say, I'm going
to look at, I take a node and I look at all nodes adjacent to it and all nodes adjacent
to those nodes and so on.
I grow out some ball and I ask, what is the scaling factor of that ball as a function
of radius? I can take logarithmic differences, that gives me the exponent, that
exponent is like a Hausdorff dimension. Then I can ask, what's the correction? That's giving
me some leading-order term in the expansion; what are the correction terms? Those correction
terms give me projections of the Riemann tensor. That's just using the analogy to classical
differential geometry.
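Here is a minimal sketch of that estimation procedure, run on an ordinary 2D grid graph as a stand-in for a hypergraph, where the estimate should come out near 2. In the continuum analogy, the ball volume expands as Vol(B_r) ~ c * r^d * (1 - R r^2 / (6(d+2)) + ...), which is why curvature shows up in the correction terms:

```python
# Estimate an effective (Hausdorff-like) dimension from ball growth:
# grow balls B_r by breadth-first search, then take logarithmic
# differences of |B_r| ~ c * r**d to read off the exponent d.

from collections import deque
from math import log

def ball_sizes(adjacency, start, r_max):
    """|B_r| for r = 0..r_max: nodes within graph distance r of start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] == r_max:
            continue  # don't expand past the largest radius we need
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [sum(1 for d in dist.values() if d <= r) for r in range(r_max + 1)]

N = 41  # a 2D grid graph, so the estimate should approach 2
adj = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < N and 0 <= j + dj < N]
       for i in range(N) for j in range(N)}
sizes = ball_sizes(adj, (N // 2, N // 2), 15)
for r in range(2, 15):
    d_est = (log(sizes[r + 1]) - log(sizes[r - 1])) / (log(r + 1) - log(r - 1))
    print(r, round(d_est, 2))
```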
But the point is that to get the curvature terms, as we do in, say, the derivation of general relativity, you have to assume that the hypergraph is of uniform dimension. Even to be able to take that Taylor expansion, you have to assume that the dimension is uniform. So then an obvious question is: what happens if you relax that assumption? And the answer is, well, you get a theory that is equivalent to general relativity in the observational sense, but where you can have fixed curvature, fixed contortion, fixed non-metricity, and instead variable dimension. The point is that in the expansion for that volume element, the dimension appears in the exponent of the leading-order term, the Ricci scalar curvature gives you a quadratic correction to that, and then you have higher-order corrections. And because of the very basic mathematical fact that, zoomed in really far, it's very hard to distinguish a curve whose exponent has shifted from one with a quadratic correction, you kind of have to zoom out and see things very globally before you can really tell the difference between the two. In a sense, what that translates to is this: if you're looking only at the microstructure of spacetime, there's no way for you to systematically distinguish between a small change in dimension and a very large change in curvature. So if you had a region of spacetime that, rather than being four-dimensional, was, say, 4.001-dimensional, but we were to measure it as though it were four-dimensional, it would manifest to us as a curvature change. It would be indistinguishable from a curvature change.
So what I would say is that in the hypergraph dynamics view, you again have this arbitrariness: you have to make choices of connection, which fix torsion and non-metricity and so on. But you have this additional problem that you also have to make choices about trade-offs between curvature and dimension.
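For reference, the standard ball-volume expansion from Riemannian geometry makes the trade-off being described here explicit; this is textbook material, quoted as a gloss on the discussion rather than anything specific to the hypergraph formalism:

```latex
% Volume of a small geodesic ball of radius r about a point p in a
% d-dimensional Riemannian manifold: the dimension d sets the leading
% power, while the Ricci scalar R(p) enters only as a quadratic
% correction, so a small shift in d can mimic a large shift in R.
\mathrm{Vol}\bigl(B_r(p)\bigr)
  = C_d\, r^{d}\left(1 - \frac{R(p)}{6(d+2)}\, r^{2} + \mathcal{O}(r^{4})\right)
```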
So let's go back to category theory for just a moment.
When I was speaking to Stephen Wolfram about that, he said that he's not a fan of category theory, because he believes it circumvents computational irreducibility. I said, why? He said, well, because you go from A to B, yes, and then you can go from B to C, but then you also have an arrow that goes directly from A to C. But when I was thinking about it, that's only the case if you think that each mapping takes a time step. When I look at category theory, I don't see any time steps, at least I don't. I see it as just this timeless creation.
So please tell me your thoughts.
Right. Okay. Well, I'm in the fortunate position of having written quite a long paper on exactly this problem.
There's a paper that I wrote back in 2022 called 'A Functorial Perspective on Multicomputational Irreducibility', which is all about exactly this idea. As you say, category theory, as it's ordinarily conceived, is just a kind of algebraic theory; there's nothing computational about it. There's no notion of a time step, and no statement made about the computational complexity of any given morphism. But then an obvious question is, well, okay, is there a version of category theory which does care about those things, a kind of resource-limited version, or some version where individual morphisms are tagged with computational complexity information? And it turns out the answer is yes, and it has some very nice connections not just to categorical quantum mechanics, but also to things like functorial quantum field theory.
But also, I think Stephen is wrong in that statement that it doesn't care about computational irreducibility, because actually it gives you a very clean way of thinking about computational irreducibility. So what I mean by that is: computational irreducibility is this idea that there are some computations that you can't shortcut in some fundamental sense. As far as I know, I was the first person to actually give a formal definition of that, in a paper back in 2018 or something.
Sorry, a formal definition of computational irreducibility?
Of computational irreducibility. Nothing very profound, but essentially you say: I've got some Turing machine that maps me from this state to that state. Does there exist a Turing machine of the same signature that gets me to the same output state with fewer applications of the transition function?
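A hedged rendering of that definition in symbols (the notation here is ours, not the paper's):

```latex
% A computation taking state s to state t in n applications of machine
% T's transition function \delta_T is computationally irreducible iff
% no machine of the same signature reaches t from s in fewer steps:
\delta_T^{\,n}(s) = t \ \text{ is irreducible } \iff
  \neg\, \exists\, T',\; m < n \ \text{ such that } \ \delta_{T'}^{\,m}(s) = t
```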
And so, I mean, I needed that for another result that I was proving; but having looked in the literature, I'm not aware of anyone previously who'd formalized that definition.
Sorry, I don't mean to cut you off, so please just remember where you are.
Because it's my understanding that Wolfram said that rule 30, or something like that (maybe you would recall it more vividly because it's in his book), is computationally irreducible. I've always wondered: how do you prove that? Now, I imagine that he proved it, or maybe it's one of those Wolfram proofs, or proofs to himself. But in order for him to prove it, even to himself, he would have had to have a definition of it.
Right. Okay. So there's an important point here: rule 30 is not proved to be computationally irreducible. And in fact, there's a prize. If you go to, I think it's rule30prize.org, I'm ostensibly on the prize committee. There's a prize that Wolfram put out back in 2018; there are actually three prizes, none of which have been claimed. Each one is $10,000, and one of them is to prove that rule 30 is computationally irreducible. So yeah, it's unproven. And in fact, up to some notion of equivalence, there's really only one of the elementary cellular automata in NKS that's been proven to be computationally irreducible in any realistic sense, and that's rule 110. And that was proved by showing that it's capable of doing universal computation, that it's a Turing-complete rule. And so intuitively you can say, well, if it's Turing complete, then questions about termination are going to be undecidable, and therefore it has to be irreducible. But that's a slightly hand-wavy argument.
But yeah, so in a way, it's an interesting question.
Can you prove that something is computationally irreducible without proving that it's universal?
And of course, as you say, for that you would need a formal definition of irreducibility.
Okay. And now going back to your paper on functoriality and computational irreducibility: you were able to formalize this?
Yes, sorry, yes. So what I was saying was: there was this existing formal definition of computational irreducibility. But I then realized that if you think about it from a category-theoretic standpoint, there's actually a much nicer, much less ad hoc definition.
Which is as follows. Imagine a version of category theory where your morphisms, as I say, are tagged with computational complexity information, so each morphism has a little integer associated to it. You fix some model of computation, say Turing machines, and you say: each morphism I'm going to tag with an integer that tells me how many operations were needed to compute this object from that object. In other words, how many applications of the transition function of the Turing machine do I need to apply?
So now if I compose two of those morphisms together,
I get some composite.
And that composite is also going to have
some computational complexity information.
And that computational complexity information is going to satisfy some version of the triangle inequality, right? If it takes some number of steps to go from X to Y and some number of steps to go from Y to Z, then I can't need more steps to go from X to Z than it would have taken to go from X to Y and then from Y to Z. So it's going to at least satisfy the axioms of something like a metric space; there's some kind of triangle inequality there. But then you could consider the case where the complexities are exactly additive, where to get from X to Z it takes the same number of steps as it takes to go from X to Y, plus the number of steps it takes to go from Y to Z.
And that's precisely the case where the computation is irreducible, right? Because it's saying you can't shortcut the process of going from X to Z. Which then means you could define reducibility, the case of computational reducibility, as being the case where the algebra of complexities is subadditive under the operation of morphism composition. And there's a way you can formulate this: you take your initial category, and you take a second category whose objects are essentially integers and discrete intervals between integers, and then you have a functor that maps each object in one category to an object in the other, and each morphism in one to a morphism of the other, and the composition operation in the second category is just discrete unions of these intervals. And then you can ask whether the cardinality of those intervals is discretely additive or discretely subadditive under morphism composition. And that gives you a way of formalizing computational irreducibility.
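A toy sketch of the tagging idea in plain code, stripped of the categorical machinery; the class names and the scalar reducibility measure are illustrative choices of ours, not constructions from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Morphism:
    """A morphism tagged with complexity information: `steps` counts
    applications of some fixed machine's transition function needed
    to compute `target` from `source`."""
    source: str
    target: str
    steps: int

def compose(f: Morphism, g: Morphism) -> Morphism:
    """Compose X --f--> Y --g--> Z along the route through Y; on this
    route the complexities simply add."""
    assert f.target == g.source, "morphisms must be composable"
    return Morphism(f.source, g.target, f.steps + g.steps)

def reducibility(f: Morphism, g: Morphism, best_direct: Morphism) -> float:
    """0 when the cheapest direct route costs exactly steps(f) + steps(g),
    the additive, maximally irreducible case; approaches 1 as a shortcut
    gets arbitrarily cheap (the subadditive, reducible case)."""
    return 1 - best_direct.steps / (f.steps + g.steps)

f = Morphism("X", "Y", 40)
g = Morphism("Y", "Z", 60)
print(reducibility(f, g, Morphism("X", "Z", 100)))  # 0.0: irreducible
print(reducibility(f, g, Morphism("X", "Z", 25)))   # 0.75: largely reducible
```

The same scalar anticipates the 'degree of irreducibility' point made just below.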
And the really lovely thing about that is that not only can you then measure irreducibility and reducibility in terms of deformations of this functor...
Uh-huh.
But it also generalizes to the case of multi-way systems.
You can formalize notions of multi-computational irreducibility by essentially just equipping
these categories with a monoidal structure, with a tensor product structure.
Uh-huh.
So my understanding of computational irreducibility is that a system either has it or it doesn't, but it sounds like you're able to formulate an index, so that this system is more irreducible than another; you can actually give a degree to it.
Exactly, exactly. So yeah, there's a limit case where the complexities are exactly additive, and that's kind of maximally irreducible; anything less than that is partially reducible, but not necessarily fully reducible.
Now, are there any interesting cases of something
that is completely reducible,
like has zero on the index of computational irreducibility?
Is there anything interesting?
Even trivial is interesting, actually.
Yes, I mean... well, okay, any computation that doesn't change your data structure, that's just a repetition of the identity operation, is going to have that property.
I'm not sure I can necessarily prove this.
I don't think there are any examples other than that.
I think any example other than that must have at least some minimal amount of irreducibility. But yes, I mean, this also gets into a bigger
question that actually relates to some things I'm working on at the moment, which is exactly
how you equivalence objects in this kind of perspective. Because even to say it's a trivial case, where I'm just applying some identity operation and getting the same object, you have to have some way of saying that it is the same object.
And that's actually, I mean, that sounds like a simple thing, but it's actually quite a
philosophically thorny issue, right?
Because, sorry, the first thing to say is that everything we're talking about at the moment is internal to this category, which in the paper I call Comp: the category whose objects are, in a sense, elementary data structures, and whose generating morphisms, the morphisms that freely generate the category, are elementary computations. And so the collection of all morphisms that you get from compositions is essentially the class of all possible programs. So within this category, when two objects are equivalent, and therefore when two programs are
equivalent is a fairly non-trivial thing, right? So you can imagine having a data structure where
nothing substantively changes, but you've just got a time step or something that goes up every time you apply an operation; it just increments one, two, three, four. In that case, you're never going to have equivalences: every time you apply an operation, even if the operation morally does nothing, it's going to be a different object. So even that would show up as being somehow irreducible.
But there are also less trivial cases of that, like with hypergraphs, right? With hypergraphs, to determine equivalence, you have to have some notion of hypergraph isomorphism. And that's a complicated thing even to define, let alone to formalize algorithmically.
And so you quickly realize that you can't really separate these notions of reducibility and irreducibility from these notions of equivalencing, and that it's all somehow deeply related to which data structures you define as being equivalent, or equivalent up to natural isomorphism or whatever. And that's really quite a difficult problem, one that relates to definitions of things like observers in these physical systems. If you have someone who is embedded in one of these data structures, what do they see as equivalent? That might be very different from what a kind of God's-eye perspective sees as equivalent from the outside.
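To illustrate why even the equivalencing step is hard: here is a brute-force canonicalization for tiny hypergraphs, a sketch only. Real systems use far more sophisticated canonical-labelling algorithms, and no polynomial-time algorithm for the general problem is known.

```python
from itertools import permutations

def canonical_form(hyperedges):
    """Brute-force canonical form of a hypergraph, given as a list of
    hyperedges (tuples of vertex labels): try every relabelling of the
    vertices and keep the lexicographically smallest edge list.
    Factorial in the number of vertices; illustration only."""
    vertices = sorted({v for edge in hyperedges for v in edge})
    best = None
    for perm in permutations(range(len(vertices))):
        relabel = dict(zip(vertices, perm))
        candidate = tuple(sorted(tuple(relabel[v] for v in e) for e in hyperedges))
        if best is None or candidate < best:
            best = candidate
    return best

def isomorphic(h1, h2):
    """Two hypergraphs are isomorphic iff their canonical forms agree."""
    return canonical_form(h1) == canonical_form(h2)

assert isomorphic([("a", "b"), ("b", "c", "a")],
                  [("x", "y"), ("y", "z", "x")])
```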
So are there closed timelike curves in the Wolfram Physics Project?
Sorry, in the hypergraph dynamics project?
No, we can say Wolfram Physics Project. That's how it's known, right?
No, so yeah, that's a really good question. Because in a way, it's very easy to say no, since we can just do that trick I described: you tag everything with a time step number, and then, even if the hypergraph is the same, the time step is different, so there's never an equivalence, and in the multiway system or the causal graph you never see a cycle. But that's somehow cheating, right? When people ask about CTCs, what they care about is not this very nerdy criterion of, oh, do you actually get exactly equivalent data structures?
Nerdy criteria seem to define this entire conversation up until this point.
Well, yes, I suppose, you know, you take two people with math backgrounds and get them
to discuss stuff.
Yeah, exactly.
It's going to happen.
Right. But yeah, what people who care about time travel care about is exactly that: time travel and causality violations, things which don't necessarily care about your equivalencing, or which care about it in a slightly different way.
Yeah, I mean, my short answer is: I don't know. My personal feeling is that we are not yet at the level of maturity where we can even pose that question precisely, for the following reason. Even defining a notion of causality is complicated. In most of what we've done in that project, in derivations of things like the Einstein equations and so on, we've used what on the surface appears to be a very natural definition of causality for hypergraph rewriting.
You have two rewrites; each one is going to ingest some number of hyperedges and output some other number of hyperedges, and those hyperedges have some identifier. Then you can ask: did this future event ingest edges that were produced in the output of this past event? If it did, then the future event couldn't have happened unless the past event had previously happened, and so we say that they're causally related. So if the output set of one has a non-trivial intersection with the input set of the other, we say that they're causally related. That seems like a perfectly sensible definition, except that it has exactly the problem we've been discussing: it requires having an identifier for each of the hyperedges. You need to be able to say that this hyperedge which this event ingested is the same as that hyperedge which the other event output. But if they're just hyperedges, just structural data, there's no canonical choice of universal identifier, of UUID. And so what that means is that you can have degenerate, trivial cases where, for instance, you have an event that ingests a hyperedge and changes its UUID but doesn't actually change anything structurally. The graph is still the same; nothing has actually changed, interestingly, but the identifier is different. And now any event in the future that uses that edge is going to register as being causally related to this other event that didn't do anything. So you have a bunch of these spurious causal relations, and it's clear that that definition of causality isn't quite right. What's really needed is some definition of causality that isn't subject to this problem, but it's very unclear what that is.
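Here is that failure mode in miniature; the event representation (dicts of input and output edge identifiers) is a deliberate simplification for illustration, not the project's actual data model:

```python
def causally_related(past_event, future_event):
    """The definition under discussion: a future event is causally
    related to a past event when it ingests at least one hyperedge
    (by identifier) that the past event output."""
    return bool(set(past_event["out"]) & set(future_event["in"]))

# The degenerate case described above: an event that merely relabels an
# edge (new UUID, identical structure) still registers as a causal
# ancestor of everything downstream that touches that edge.
relabel = {"in": ["e1"], "out": ["e1-renamed"]}   # structurally a no-op
downstream = {"in": ["e1-renamed"], "out": ["e2"]}
assert causally_related(relabel, downstream)       # a spurious causal relation
```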
And I've worked a little bit on trying to formalize that by recursively identifying hyperedges based on their complete causal history. So the identifiers are not chosen arbitrarily, as random integers or something; instead, each hyperedge encodes, in a slightly blockchain-like way, a directed acyclic graph representation of its complete causal history. And then two hyperedges are treated as the same if and only if they have the same history of causal relationships in the rewriting system. And that's somewhat better, but again is quite complicated to reason about.
And as I say, it's all deeply related to this question of which data structures you ultimately treat as being equivalent, which is really an observer-dependent thing. It depends on the computational sophistication of the person or entity who is trying to decode what the system is doing. It's not an inherent property of the system itself.
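A toy version of that recursive identification scheme. Hashing is one convenient way to realize 'same complete causal history implies same identity'; the actual proposal is phrased in terms of directed acyclic graphs of causal relationships, so treat this as a sketch under that assumption:

```python
import hashlib

def edge_identity(rule_name, parent_identities):
    """Identify a hyperedge by a digest of its complete causal history:
    the rewrite rule that produced it plus the identities of the edges
    that rewrite ingested, recursively (blockchain-like provenance)."""
    payload = rule_name + "|" + "|".join(sorted(parent_identities))
    return hashlib.sha256(payload.encode()).hexdigest()

# Two edges are 'the same' iff their entire causal histories agree:
seed = edge_identity("init", [])
a = edge_identity("ruleA", [seed])
b = edge_identity("ruleA", [seed])
assert a == b            # identical histories, identical identity
c = edge_identity("ruleB", [seed])
assert a != c            # different history, different edge
```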
So what do you make of observer theory, the recent long blog post by Stephen: a theory, an outlook? What do you make of it?
Yeah, so observer theory really is a rebranding of something that's been a feature of the physics project since before we started it: this idea that you cannot consider a computational system independent of the observer that is interpreting its results, and that somehow both the computational sophistication of the observer and the computational sophistication of the system have to be factored into the description. So in a way, it's a very natural idea, which was really the prelude to the work we did on quantum foundations and other things in the context of the physics project.
I like to think of it as a kind of natural extension of a bunch of stuff that happened in 20th-century physics. Because, of course, this is not how these things were viewed at the time, but both general relativity and quantum mechanics can in some sense be thought of as theories that you arrive at by being more realistic about what the observer is capable of.
A lot of traditional scientific models made the assumption that the observer was kind of infinitely far removed from the system they were observing, that they were these omnipotent entities who didn't have any influence over the systems and weren't constrained by the same laws. But if you then say, okay, maybe the observer has some limitations, maybe they can't travel faster than light, what does that imply? Well, if you follow the right chain of logical deduction, what that implies is general covariance and therefore general relativity: as soon as you have observers who can't travel faster than light, they don't necessarily agree on the ordering of spacelike-separated events, and suddenly you get general relativity. Equivalently, if you have observers who are constrained by the same physical laws as the systems that they're observing, then what that means is
if you try and measure some property of a system, what happens when you measure it?
Well, you have to have some interaction with it, you have to kind of poke it somehow, and the poke that you
receive back is going to be equal in magnitude to the poke that you gave to the system. And
so anytime you try and measure some quantity, there's a minimum amount that you have to
disturb it. And again, if you kind of follow that chain of reasoning to its logical conclusion,
you get at least the kind of Heisenberg picture of quantum mechanics. So in a way, both general
relativity and quantum mechanics are, as I say, ways of becoming
more realistic about what observers are capable of and ways of coming to terms with the fact
that observers are constrained by the same physical laws as the systems that they observe.
So observer theory, which, I mean, I don't think is yet a theory, and I'm not sure I'm hugely fond of the terminology, but as a conceptual idea it's really just the computational instantiation of that. Okay, and you mentioned before this very interesting thing about geometry: that you have this freedom of whether you choose to vary curvature, or torsion, or non-metricity. My feeling is that there's a similar free parameter that exists in our scientific models with regard to the role of the observer.
And this is again maybe a point of philosophical departure between me and Stephen.
So you can imagine these two extreme cases, right?
You can imagine the case where all you care about is the computation that the system is
doing.
So you're just building up some structure from bottom-up rules.
And so the observer, so to speak, is just some trivial object that's seeing the data structure.
And all of the kind of computational burden is being shouldered by the system itself.
And that's the way that the physics project is often presented, right?
You just have a hypergraph and it's doing its thing and we perform analyses on it.
That's one extreme. There's another extreme where you could say, well, maybe the system itself is trivial.
The computation it's doing is essentially trivial.
And all of the sophistication, all the computational burden, is shouldered by the observer. An example of that would be what Stephen refers to as the Ruliad, which is really just what I was describing earlier: this category of all possible elementary data structures and all possible computations.
And in that picture, that's an object that minimizes algorithmic complexity. It minimizes Kolmogorov complexity: the set of all possible computations has the same algorithmic complexity as the set of no computations, purely for information-theoretic reasons. In that case, the actual computation that generates it is trivial. It's trivial to specify, but in order to get a particular computational path, or to restrict down to a particular multiway system, you have to have an observer, some generalized observer, who is making equivalences between different paths, and the sophistication of that observer can be arbitrarily high. And so you have these two extreme cases: one case where the observer is trivial and all
the computation is being done by the system, another case where the system is trivial,
all the computation is being done by the observer. And my argument is these two cases, I mean,
there's no observational way of distinguishing between them. And in fact, there's a whole interstitial space in the middle, where some of the burden is shouldered by the system and some of the burden is shouldered by the observer.
And these are not really things that we can observationally distinguish.
And so in a sense, it's a genuinely free parameter in how we construct our models.
And I would even go so far as to say that, in some sense, the argument that occurred in early modern European philosophy between the empiricists and the rationalists, between people like Locke and Hume on the empiricist side and people like Descartes and Bishop Berkeley and so on on the rationalist side, this is really the modern version of that same argument. The empiricists saying: we need to get the observer out of the picture as much as possible and just describe the systems. The rationalists saying: no, what matters is the internal representation of the world, and external reality is somehow a secondary, emergent phenomenon. That's exactly this case: in a sense, the two extremes of maximal algorithmic complexity of the observer versus maximal algorithmic complexity of the system.
I'm confused as to the difference between observation and perception. Because Stephen would say that, look, because you're an observer of the kind that you are, you then derive general relativity, or have that as a property, or quantum mechanics. But firstly, we all don't perceive the same. And we also don't perceive quantum mechanics or general relativity; in fact, in many ways we perceive the earth as being flat, and we don't perceive any of the colors outside the spectrum of visible light. So it's a painstaking process to then say, well, what are the laws of physics? We have to somehow derive that and test that. And then the question is, well, does a cat perceive the same laws? Well, a cat doesn't perceive the same. This is what I mean: we don't perceive the same, and the cat doesn't perceive the same, but presumably it's on the same field. We're playing on the same field; the cat is playing on the same field of general relativity and quantum mechanics as we are. So sure, our perceptions are different, but then would Wolfram say that our observations are the same? Delineate for me an observation and a perception.
Yeah, that's a really important distinction, and it goes back to some foundational ideas in early philosophy of science, to people like Thomas Kuhn and Karl Popper, who stressed the idea of the theory-ladenness of observation. In the way that you're using those terms, I think it's an important distinction: the perceptions are much closer to just the qualia that we experience, and the observations are some kind of interpretation that we give to them. And the important point, the point that people like Kuhn and Popper were making with theory-ladenness, is that, in a sense, we perceive nothing as it quote-unquote really is. Any time we make a scientific observation, we're not perceiving the phenomenon; it's filtered through many, many layers of observation and interpretation and analysis.
When we say that we have detected this particle in this particle accelerator, what does that actually mean? Well, it means that there was some cluster of photons in this detector that were produced by some Cherenkov radiation, which then stimulated some photovoltaic cells on the scintillator. And there are maybe a hundred layers of models and theories and additional bits of interpretation in between whatever was going on in that particle accelerator and the photosensitive cells that were stimulated in the scientists' eyes as they looked at the screen and saw this thing.
And so if you actually try and trace out how many levels of abstraction there are between the quote-unquote perceptions and the quote-unquote scientific observations, it's huge. And it only takes one of those to be wrong, or tweaked a little bit, and suddenly the model you have of the world, which is still just as consistent with your own perceptions, is completely different. So yeah, I think that's an important thing to bear in mind. It's something which, in a sense, annoys me a little bit with regard to some criticisms of experimental validation, because that's an area where people tend to get confused about that distinction. People say, you know...
It annoys you just a bit?
Only a bit?
Well, maybe I don't have to deal with it as much as you do.
So-
Well, no, I don't deal with it.
I just mean, I'm curious if it annoys you more than that
or if you're just being polite.
Well, I mean, it maybe would annoy me if I was being confronted with it all the time. But when you occasionally see people saying that, oh, the multiverse is fundamentally unobservable, that seems to me to make exactly the mistake that you're characterizing, right?
It's not perceivable,
sure. But then most things that we care about in science aren't perceivable, right? I think
David Deutsch has this nice example that no one has ever seen a dinosaur, no one ever will see
a dinosaur, will never get a dinosaur in a lab, right? If you restrict science to only be about
things that we can directly perceive or test in the laboratory or something, then you can't make
statements about dinosaurs. You can make statements about the composition and distribution
of fossils, but that's not very interesting; or at least, if you only care about the properties of certain rocks, you would be a geologist, not a paleontologist. The point is that
when we look at the composition and distribution of fossils, that perceptual data is consistent with a model of the world that logically implies the existence of dinosaurs.
And that's really what we mean when we say we have evidence of dinosaurs.
So, not that I'm particularly defending the multiverse view or anything like that, but there's a really important distinction between saying that the multiverse is not perceivable, which is true, and saying that it's not possible, on the basis of the perceptions we can have, to validate a model of the world that logically implies the existence of a multiverse, which is a very different, and much stronger, statement. And yet, in the popular discourse about these things, those two often get confused. So yeah, it annoys me when I see it, and maybe it would annoy me more if I saw it more often.
Speaking of points of annoyance: what are your thoughts on the state of publishing? What's your stance on peer review, and where is academic publishing headed in its current state?
Yeah, so I had the slightly depressing experience recently, I'm not sure whether you've done this, of going to Google Scholar and searching, in inverted commas, 'as an AI language model', or some other similar phrase, and just seeing the sheer volume of papers that have passed so-called peer review in so-called prestigious journals that are just obviously not human-written, with no indication of that fact. And there are plenty of other examples, the Sokal affair and so on, where this process, which on the surface sounds like a very reasonable idea (you claim some new result, you get people who know the field to say yes, that's a reasonable result, or no, this is not quite right), that's a perfectly reasonable model. It's just not what peer review actually is in practice.
And it's important to remember as well that, in a sense, the modern system of scientific publishing, and indeed the modern system of academia, was not really designed. No one sat down and said, this is how we should do science. It just kind of happened. This model of scientific journals and peer review and editors and so on, you can trace back as a direct extension of early proto-journals like the Transactions of the Royal Society, which, if you go back and look at them, were very different from modern scientific journals.
It's always kind of entertaining to go and read submissions to the Transactions of the Royal Society that were made by Robert Hooke and Robert Boyle and Isaac Newton and so on, because they basically read like blog posts. They're actually very, very informal. These guys just go in and say: I did this, I did that, I mixed this chemical with that and I saw this thing, and then my cat knocked my experiment over, and whatever. It's very conversational, very discursive. And yes, it was reviewed, but the review process was much less formalized than it is now. I'm not saying that something like that could work today; science is much more industrialized, and you clearly need some more systematic way of processing the volume of scientific literature that's being produced. But still, it's pretty evident that there was never any person who said: this is a good model for scientific research and dissemination; this is how it should be done.
It just naturally evolved from a system that really wasn't set up to accommodate what it's become. Another important thing to remember is that scientific publishing and peer review served a pair of purposes which in the modern world have essentially come apart. It used to be that the journal publishers served two roles: they were there for quality control, because of peer review, and they were there for dissemination, because they actually printed the physical manuscripts that got sent to libraries and things. In the modern era, with things like arXiv and bioRxiv and preprint servers generally, and with people able to host papers on their own websites, dissemination, which was always the expensive part of journal publishing, we don't need that anymore, right? We've got that covered.
So peer review is for quality control.
Exactly. The real role for journals now is quality control, in my opinion. And the issue with that is that quality control is incredibly cheap, because I review papers, as does every other academic, and we do it for free. We do it as a kind of public service, and it's an important thing to do. So we don't get paid, the people writing the papers don't get paid, and the journals shouldn't need to spend lots of money to print physical copies. So really, journal publication should be, not quite free, but basically incredibly cheap. And it's not, right?
And the reason is that you have these journals who are essentially holding on to this very outmoded model, where they're pushing the dissemination part, I would argue, at the expense of the quality-control part. And so that's why I've been a great advocate of these new kinds of journals that are coming out. There's one called Discrete Analysis, and a few others, that are these so-called arXiv overlay journals, which I think are a fantastic idea.
The idea is: the content itself is going to be hosted on the arXiv preprint server, so we don't need to care about dissemination; that's all incredibly cheap, we just literally post a link to an arXiv paper. All we're going to do is worry about the quality control. And once you start to think about that, and once you're not bound to having physical copies that have to go to printers and things, you can actually do peer review in a very different, and I would argue much more productive, way.
You can have things like open post-publication peer review. Rather than the pre-publication model, where the manuscript gets sent to some anonymous reviewers, they spend six months deliberating, you get the result back, and no one ever sees it, you can have something where someone posts a preprint on arXiv, it goes on an open review site, and then anyone in that area, or anyone outside it, can come in and say: I don't understand this, or this doesn't make sense, or this is a great paper, or whatever. And then you can upvote, downvote, you can say, oh yeah, I agree with your criticism, et cetera. And the whole thing can be open and de-anonymized.
And it would have to be anonymized by the person who's posting it up there, because otherwise, if people see that Ed Witten posted something, more eyes will go toward it. But also, if you're in the field, you can sometimes discern who's publishing what.
Yeah, absolutely. And certainly in math and physics and computer science, it's been standard for several decades now that everyone posts their work on arXiv, and they typically post it before, or possibly simultaneously with, submitting it to a journal. Because of that, physics journals like JHEP or Classical and Quantum Gravity don't even try to anonymize their manuscripts, because they know that if they anonymized them, you could just Google the first sentence, find the arXiv paper, and see who posted it. So yes, I think double-blind peer review and so on made sense in a particular era, to eliminate exactly the kinds of biases you're characterizing, and others. But for math and physics, where the workflow is that you put your paper on arXiv and then maybe a couple of weeks later you submit it to a journal, it doesn't make sense at all. And so people don't even try.
So, about the journals' inflated prices: outside of an oligopoly or collusion, what's keeping them high?
I mean, I'm reticent to claim that it's collusion. A lot of it is just that it's tied into the promotion structure in academia. If you want to get a permanent job in academia, if you want to advance up that ladder, there's this general view that you need to get published in the fancy journals. And that means that the journals generally perceived by university administrators as being the fancy ones know that they can charge essentially arbitrarily high prices, and people will pay them, because their livelihoods depend on it.
Yes, yes.
It's a really quite sordid situation when you think about it.
I saw a talk recently by someone who was going into the academic world, saying that on some of the applications for professorships or postdocs, the second question after 'what is your name' is 'how many citations do you have?'. And then people try to game this, because you can publish something that is just barely worthy of publication and do that many times, rather than produce something that you feel is of high quality but will get fewer citations than if you were to split it up. And then you just flood the market.
Yeah, absolutely. And there are these author-level metrics like the h-index and so on, where h-index equals N means that you have N papers that have each been cited at least N times. That gets used quite frequently in hiring committees and tenure committees and things like that. And it's incredibly easy to game. It's the classic Goodhart's law example: as soon as you know that you're being measured on that criterion, you can say, oh, in every future paper I write, I'm going to cite all my previous ones, and then I can very easily get some kind of N-squared dependence in my h-index, and then I can get my friends to cite me too. And, as you say, rather than investing a year to write the one really good, polished, definitive paper on a subject, I'm going to write ten salami-sliced, minimum-publishable-unit things.
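For concreteness, the metric being gamed is simple to state in code; the definition is standard, and the numbers below are invented for illustration:

```python
def h_index(citations):
    """The h-index: the largest h such that the author has h papers
    with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    # With a descending sort, 'citations >= rank' holds on a prefix,
    # so counting the hits gives the prefix length, i.e. h.
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

assert h_index([10, 8, 5, 4, 3]) == 4
assert h_index([90, 4, 2, 1]) == 2  # one blockbuster paper barely moves it
```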
Yeah, yeah, right. That's a great way of putting it.
Right. And all of that happens. And I know I'm guilty of some of that too, not because I want to be, but because I live in the academic system; that's how one has to operate, to a certain extent, if you're competing with other people who are doing the same. It's awful, and I don't want to be in that situation. Obviously, given the choice, if I'm going to invest the time to write a paper on something, I want to write, as much as possible, the definitive paper on that thing, and have it clean and polished and something that I'm proud of. But my impression, at least, is that it's becoming increasingly hard for that to be a viable career strategy.
Yeah
What's fortunate in your case is that you were employed by Wolfram for some time, and so you were able to work on the ideas that were interesting to you. At least from my perspective, you didn't have to concern yourself with incremental publications on unoriginal ideas just to build credit to your name. But maybe I'm incorrect.
Well, I mean, there was certainly an element of that. During the time I was employed at Wolfram, I also, I mean, initially I was a graduate student, or at the very early stages an undergraduate, then a graduate student, and then a kind of junior academic. So I still had some academic position during that time, and for that reason it wasn't something I could completely ignore, because that would have been irresponsible from a career standpoint. But yes, in a way, it did take the pressure off, because it meant that I had a more or less guaranteed funding source for at least part of my research, and I wasn't having to repeatedly beg government funding agencies for more money and show them long lists of papers. It was also useful in a different way, which is that it meant the stuff I was doing got much more exposure than it would have done otherwise. I mean, we wouldn't have met if it hadn't been for Stephen and both the additional cachet and the additional flak that are associated with having his name attached to the project. So in a way, it meant that, for my level in the academic hierarchy, my work ended up being significantly overexposed. And that was good in a way, and bad in another way.
Why would it be bad?
Well, it meant that... okay, so one negative aspect of it, which has not been hugely problematic, is this: Stephen has a certain reputation, and that reputation is positive in many ways and negative in many others. And if you are billed as the other person, or one of the other people, working on the Wolfram Physics Project, there's a sense in which you're elevated by association, and also tainted by association. People assume that many of the negative characteristics associated with, I don't know, not giving appropriate credit to prior sources, or having slightly inflated ego issues, et cetera, get projected onto you, rightly or wrongly, by virtue of association.
Yeah, or that you're supporting that, even if you don't have those qualities yourself. Okay.
Right, right. And it's a difficult thing, because in a way it helped, in that a lot of the criticism of the project got leveled at Stephen and not at the rest of us.
Yes, yes.
So in a way it was useful, but in other senses, yeah, it's a delicate balance.
So how do you see academics' engagement with the ideas from the Wolfram Physics Project?
Yeah, it's been mixed, very mixed. Among the traditional fundamental physics people, it's mostly been ignored. If you look at your average string theorist, many of them will have heard of the project and will say, oh, that's that weird, kooky thing that that guy did, and we don't really know anything about it. At least, that's the general response that I've seen.
They'll say they scrolled through the blog post but didn't find anything readily applicable to their field, and so they're just waiting for it to produce results. That's the general statement.
Right, right, exactly.
And then they'll say the obligatory 'well, I wish him luck', but firstly, I don't think they actually mean that, and secondly, if they do, they only mean it because they're not competing for the same dollars.
Yes. And I've certainly had conversations with people who are not quite so polite. But yeah, so there's that crowd. There are some people in the quantum gravity community who have actually taken some interest, who have cited our work and used it and incorporated it into other things. Causal set theory is one example; that's again a slightly unconventional approach to quantum gravity that's really quite similar formalistically. Causal sets are really just partially ordered sets; they're the same as causal graphs in some sense. And so there's a precise sense in which you can say that the hypergraph rewriting formalism is just giving you a dynamics for causal set theory, which causal set theory does not possess a priori, because it's essentially a kinematic theory. So in those communities it's been somewhat more receptive. And, essentially unsurprisingly, in areas where there is formalistic similarity, like, say, loop quantum gravity, where there's some similarity in the setup of things like spin networks and spin foams, there's been some interest; likewise in these topological quantum field theory models or topological quantum computing models, where again there's interest in the intersection between combinatorial structure, topology, et cetera, and fundamental physics. An area where we've gotten a lot of interest is applied category theory.
I would say that, at least in terms of the stuff that I've worked on, by far our warmest reception has been from people working on categorical quantum mechanics, and particularly on these diagrammatic graph-rewriting approaches to quantum mechanics like the ZX calculus and so on.
We've had some very productive interactions with that crowd, and also with people not directly on the physics side but interested in the formalism for other reasons. There are people like the algebraic graph rewriting crowd, many of whom are in places like Paris and Scotland, who again have been very interested in what we've been doing, not necessarily for physics reasons, but because they're interested in the algebraic structure of how we're setting things up, or in how the formalism can be applied to other things like chemical reaction networks or distributed computing and that kind of stuff.
Uh-huh.
You're currently at Princeton, correct?
Right.
Okay.
So what do you do day to day?
So mostly I work on computational physics: developing algorithms and things for understanding physical phenomena through computational means, which is more or less a direct extension of the stuff that I was doing at Wolfram Research. But having been associated with the physics project and with Wolfram Research for some time, I now consider part of my role to be trying to get some of those ideas more deeply embedded in traditional scientific and academic circles, and not so much tied to, as you were putting it earlier, Stephen's own personal research dollars and that kind of thing.
How do you feel when the popular press almost invariably ascribes all, or at least the majority, of the credit for the Wolfram Physics Project to Wolfram himself?
Yeah, it's difficult. As I say, in a way there is a positive aspect to that, which is that it means that...
You're shielded from direct criticism.
Right, right. Less likely to be blamed. But no, it's emotionally difficult. Maybe not for everyone, but certainly for me: I find it quite psychologically tough if there's an idea that I've had that I'm reasonably proud of, or a result that I've proved that I'm reasonably proud of, et cetera, and I see headlines and Twitter threads and whatever where it's all being ascribed to one person. And in my small way, I try to push back against that, but... oh, sorry, go on.
I love Wolfram, I love Stephen, and this goes without saying, but he doesn't do himself many favors in that regard. When someone gives him the accolades, it's rare that I'll see him say: oh, and by the way, that result was from Jonathan Gorard.
Right, right.
And again, I guess we're all guilty of that to a certain extent. I'm acutely aware that in the course of this conversation I haven't mentioned, for instance, Manojna Namuduri, who is the person I did a lot of this work on categorical quantum mechanics with, and who deserves a reasonable fraction of the credit for that insight. So I'm guilty of this too, and I guess everyone is to an extent. Stephen maybe more than many people, but it's a feature of his personality that I can't claim to have been ignorant of.
Sure, sure. So he has another claim, which is that he solved the second law of thermodynamics. And from my reading of it, I wasn't able to see what the problem with the second law was and how it was solved, other than saying you derive it from statistical mechanics, which was there before. I must be missing something, because I don't imagine Stephen would make that claim without there being something more to it. So please enlighten me.
Yeah. Okay. So I think, as with many of these things, that series of three blog posts about the second law, just like with NKS, right, had a lot of interesting stuff in it that got figured out. It wasn't quite as grandiose as I think Stephen made it out to be; but then, that's a tendency of any scientist, right, to slightly inflate the significance of what they're doing. So my reading of it is as follows. There's a kind of standard, textbook, popular-science way that entropy increase gets explained, right?
You say: if you define entropy as the number of microstates consistent with a given macrostate, or the logarithm of that, which is Boltzmann's equation, then the fact that entropy has to increase is kind of obvious in some sense, because the number of microstates consistent with an ordered macrostate is always going to be smaller than the number of microstates consistent with a disordered macrostate. And so if you're just ergodically sampling your space of states, you're going to tend towards states which are less ordered, not towards ones that are more ordered.
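The equation being referenced, written out (S is the entropy of a macrostate, W the number of microstates consistent with it):

```latex
% Boltzmann's entropy formula, with k_B Boltzmann's constant:
S = k_{B} \ln W
```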
And that explanation seems convincing for a few seconds, until you really start to think about it and realize that it can't possibly make sense. One very foundational reason why it can't possibly make sense is that the explanation is time-symmetric. If it's the case that you're ergodically sampling your space of possible states, and yes, the less ordered ones are always more numerous than the more ordered ones, then yes, it's true that evolving forwards in time you're going to tend towards the less ordered ones. But it's also true that evolving backwards in time you would tend towards the less ordered ones. And of course, that's not what we observe in thermodynamic systems. So that explanation can't be right, or at the very least can't be the complete answer. And so I think the conceptual
problem is a real one. I think it is true that we really don't fully understand the second law of thermodynamics from a statistical mechanical point of view, and as soon as you start trying to apply it to more general kinds of systems, the problems become worse. There's a famous example, brought up by Penrose, of what happens when you try and apply the second law of thermodynamics to the early universe. You seem to get two contradictory answers. As the universe evolves forwards, if we believe the second law, then as we get further and further away from the initial singularity, entropy should be getting higher and higher. And yet, when you look back close to the initial singularity, at the cosmic microwave background and so on, it looks very, very smooth. It looks basically Maxwellian, like a Boltzmann distribution; it looks more or less like a maximum-entropy state. So we have this bizarre situation where as you move away from the Big Bang, entropy gets higher, but as you go towards the Big Bang, entropy also seems to get higher. So something must be wrong. Penrose has these arguments about conformal cyclic cosmology, and about how the role of gravitational fields is essentially to decrease global entropy, and all that kind of stuff. But that's all fairly speculative, and I would say that at some deep level it's still a story we don't really understand. So that, I think, is the problem that's being solved.
And that series of blog posts proposes, and again, this is not entirely new; even in NKS there were indications of this idea, but the basic idea is that you can explain the time asymmetry in terms of computational irreducibility.
Explain.
You say: okay, even if you have a system whose dynamics are exactly reversible, in practice, because of computational irreducibility effects, the system can become pragmatically, arbitrarily hard to reverse. And you can think about it essentially as a kind of cryptanalysis problem. In a sense, the dynamics of a computationally irreducible system are progressively encrypting certain microscopic details of the initial condition, so that in practice, even if reversal is possible in principle from a computability standpoint, its computational complexity is equivalent to solving some arbitrarily difficult cryptanalysis problem: working out, okay, where exactly was that gas molecule at time t equals zero?
And that goes some way towards explaining
this time asymmetry problem.
I don't think it's a complete explanation.
I think there's a yet deeper mystery there,
but I do think it's an interesting collection of ideas.
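A toy illustration of 'cheap forwards, expensive backwards': the mixing map below is invertible in principle, but an observer with only black-box access to it is reduced to brute-force search over initial conditions. The constants and bit width are arbitrary choices for the sketch, with no physical content.

```python
def step(x, n_bits=24):
    """One cheap update step: an invertible multiply-add followed by a
    bit rotation (a toy 'mixing' dynamics, nothing physical)."""
    mask = (1 << n_bits) - 1
    x = (x * 0x9E3779B1 + 0x85EBCA6B) & mask  # odd multiplier -> invertible mod 2^n
    return ((x << 13) | (x >> (n_bits - 13))) & mask

def forward(x0, t):
    """Running forwards costs t cheap steps."""
    x = x0
    for _ in range(t):
        x = step(x)
    return x

def reverse_black_box(xt, t, n_bits=24):
    """Without exploiting the map's structure, recovering the initial
    condition means searching all 2**n_bits of them: the 'decryption'
    cost, which is where the effective time asymmetry comes from."""
    return next(x0 for x0 in range(1 << n_bits) if forward(x0, t) == xt)
```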
Yeah, so that's observer-dependent. So it would be difficult for... sorry, not difficult for anyone, but difficult for an observer. But for the system itself...
Yes.
Would there still be that issue of having to decrypt, for the system itself?
Well, no, I would argue not because, yeah, it's a very important point, right?
These notions are all observer dependent because in a sense, the Boltzmann equation requires
the existence of a macro state, right?
And the macro state is an observer, it's a synthetic kind of observer theoretic idea,
right?
It's like, you've got a bunch of molecules bouncing around in a box.
And so they have some micro-state details, but then you want to describe that box in
terms of gas kinematics.
You want to describe it in terms of a density and a pressure and a temperature and whatever.
So those give you your macro-states.
But the choice to aggregate this particular collection of microstates
and say these are all consistent with an ideal gas with this temperature
and this adiabatic index, whatever, that's an observer dependent thing. And so, yeah, and that's
another point that, again, I don't think it's completely original, but I think has not been
adequately stressed until these blog posts, which is that different definitions of an observer will
yield different definitions of entropy, different choices of coarse-graining yield different
definitions of entropy. And therefore, in that sense, it's kind of unsurprising that, as von Neumann and
Claude Shannon and people pointed out, the term entropy is so poorly understood and that
there are so many different definitions of it. There's entropy in quantum mechanics, there's
entropy in thermodynamics, there's entropy in stat mech, there's entropy in information theory,
and they're all similar vibes, but
they're formally different. And you can have situations where one entropy measure is increasing,
one entropy measure is decreasing. And that becomes much easier to understand when
you realize that they are all measures of entropy relative to different formalizations
of what it means to be an observer. And yeah, so with regards to the decryption thing, yes, I would say there's an aspect
of it that is fundamental, that is purely a feature of the system.
Even if you don't have any model of the observer and you're just looking directly at the data
structures, you can have the situation where the forward computation is much easier
or much more difficult than the reverse computation.
And obviously those kinds of one-way functions get used in things like cryptography, right?
So, you know, the existence of those is quite well studied in cryptanalysis. So those certainly
exist, and those can give you some form of time asymmetry. But arguably the version of time
asymmetry that's relevant for physics is the observer dependent one. It's the one where you say:
actually, for this particular aggregation of microstates, and this particular
interpretation of that aggregation as this macrostate, this is the computational complexity
of the reversal operation. And that is an observer dependent thing.
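A minimal sketch of that coarse-graining dependence (a toy with made-up microstates, just to illustrate the point): the same collection of microstates is assigned different entropies by observers who aggregate them differently.

```python
import math
from collections import Counter

def entropy_bits(labels):
    """Shannon entropy (in bits) of the empirical macrostate distribution."""
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# The same microstates (say, cell indices of gas molecules on a line)...
microstates = [0, 1, 2, 3, 4, 5, 10, 12]

# ...coarse-grained by two different observers:
left_right = [x // 10 for x in microstates]  # observer A: which half of the box
parity     = [x % 2 for x in microstates]    # observer B: even or odd cell

print(entropy_bits(left_right))  # ~0.811 bits
print(entropy_bits(parity))      # ~0.954 bits: a different "entropy"
```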
You mentioned Penrose, and I want to get to some of your arguments. I don't know if you
still hold them, but I recall from a few years ago you mentioned that you have some issues
with Penrose's non-computational mind argument. So I want to get to that, but first I want
to say something in defense of Stephen: people don't realize what it's like,
when you're not in academia, to, one, get your ideas taken seriously by academia,
and then also what it's like in terms of funding.
So people will say, yeah, sure, Stephen is prone to rodomontade or self-triumphant, but you have to be that to the
public, because that's your funding source. Whereas the academics are that to the grant agencies, to the people
they're asking for money. You have to big yourself up. It's just that you don't get to see that.
Yeah, I know. I absolutely agree.
Great, great. Now for Penrose, please outline what are your issues with, I think it's the Penrose-Lucas
argument, although I don't know if Penrose and Lucas, I know Lucas had an argument and
it's called the Penrose-Lucas argument.
I don't know their historical relationship.
Right, right.
And yeah, there's an original argument that's purely using kind of mathematical logic and
Turing machines and things.
And then there's the Penrose-Hameroff mechanism, right? Which is the proposed biochemical mechanism by which there exists
this non-computability in the brain. Yeah, I mean, so, okay, there's an...
Okay, how to phrase this. There's an element of this, which I'm quite sympathetic to,
which goes back actually to one of the very first things we discussed, right? Which is the
distinction between what is model versus what is reality. Turing machines are a model.
Yes.
And so if you say, well, the mind is not a Turing machine; I mean, if that's your only statement,
then I agree, right? But then nothing is, you know; the universe isn't a Turing machine in that
sense either, right? And the question is, is it useful to model the mind as a Turing machine,
or is it useful to model the universe as a Turing machine?
And there I think the answer is emphatically yes.
And you know, okay, are you going to be able to model everything?
Well, not necessarily.
So again, to that extent, I do have some sympathy with the Penrose-Lucas argument that I'm open
to the possibility that there may be aspects of cognition that are not amenable to analysis in terms
of Turing machines and lambda calculus and that kind of thing. I just don't think that
the particular examples that Penrose gives, for instance, in his book Emperor's New Mind,
are especially convincing examples. I mean, so he has this argument that mathematics,
the process of apprehending mathematical truth must be a non-computable process because
we know from Gödel's theorems that for any given formal system, if it's consistent, then
there must be statements that are independent of that system, where both the statement and
its negation are consistent with the underlying axioms. So Gödel's original argument proved that for Peano arithmetic, the standard axiom
system for arithmetic, and later on it was discovered that it works for any axiom
system that's at least as strong as Peano arithmetic. And so Penrose's argument, I mean,
I'm caricaturing a bit and it's a little unfair, but you know, the basic argument is,
well, we can obviously see that arithmetic
is consistent. So when we construct this Gödel sentence that says "this statement is unprovable,"
we can see that it has to be true. And yet, within the formal axioms of arithmetic, as
they are computable, it cannot be decided in finite time that that statement is true. And okay, so most of that is correct.
But the part where you say, well, we as human observers can clearly see that that statement
is true, well, that presupposes that we are able to know the consistency of integer arithmetic,
which we have strong reason to believe is consistent.
But Gödel's second incompleteness theorem says that, well, we can't know that formally either.
So in a sense, he's presupposing the conclusion.
He's already presupposing that we can know the truth value of an independent proposition,
namely the consistency of Peano arithmetic, in order to prove that we can know the truth value of another independent proposition,
namely this Gödel sentence.
And so for me, it just feels extremely circular.
So it doesn't...
Sorry, can he not use, like, what if he didn't say that it's irrefutable, but rather that, probably,
so far it seems like Peano arithmetic is consistent? And if it were to explode, it'd
be so odd that it hasn't exploded already, given we've explored it quite extensively. Every
day we increase our credence in the consistency of it.
Can he not use an argument like that?
He absolutely could and that would be correct.
But then the problem with that is there's nothing in that argument that a computer could not replicate, right?
A machine could also make that same argument.
You could also write a computer program that says,
okay, I'm going to test loads of propositions in Peano arithmetic and see whether I find an inconsistency. And the more propositions I test, the less likely it is that Peano
arithmetic is inconsistent. And so I can construct, this is the machine speaking here, I can construct
some kind of Bayesian argument that says, I'm this level of confident that this proposition
is true. So yes, human beings can do that kind of Bayesian reasoning, but then so can
a machine. And so the crux of the Penrose argument, or the Penrose-Lucas argument, is that
there is this additional non-computable step where the human somehow knows, not assumes,
but just knows that Peano arithmetic is consistent, and from that deduces that the Gödel sentence has to be true.
And I don't see how you can justify that without essentially presupposing the conclusion.
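The machine version of that Bayesian argument might look like the following sketch (a toy of my own construction; `contradiction_found` stands in for a bounded search for a proof of a contradiction, and the prior and miss rate are made-up numbers):

```python
# Toy: a program doing the same Bayesian reasoning about consistency that
# the human in the argument does. `contradiction_found` stands in for a
# resource-bounded proof search; in this toy it never succeeds.
def contradiction_found(proposition_id):
    return False

p_inconsistent = 0.5  # prior credence that the axioms are inconsistent
miss_rate = 0.999     # assumed P(one search misses a contradiction | inconsistent)

for k in range(10_000):
    if contradiction_found(k):
        p_inconsistent = 1.0
        break
    # Bayes update after one more failed search for a contradiction:
    p_inconsistent = (p_inconsistent * miss_rate) / (
        p_inconsistent * miss_rate + (1 - p_inconsistent)
    )

print("machine's credence that the axioms are consistent:", 1 - p_inconsistent)
```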
So what's the difference between intuitionist logic and constructivist logic?
Ah, okay, that's a fantastic question.
So and, you know, cycles back to the stuff we were talking about at the beginning with
regards to like constructivist foundations for physics, right?
So I would say, so constructivism is really a kind of broad, okay, the simple answer is
intuitionistic logic is a special case of constructivist logic.
So constructivism is a broad philosophical movement where the idea is, so okay, for the
people who don't know the history of this, so in the aftermath of Gödel's incompleteness
theorems and Tarski's undefinability theorem and Turing's proof of the undecidability of
the halting problem, and all these limitative results in mathematical logic that happened in the early 20th century,
people started saying, okay, well, how can we trust that anything is true in mathematics?
So if we always have to make some unprovable assumption about the consistency of our axiom
system, how can we ever be confident of anything beyond just the kind of heuristic Bayesian
argument that we made before?
And so then various people, especially a guy called Brouwer,
and later David Hilbert in his later years,
cottoned on to the idea that, okay, what you could do
is you could say, well, if we strengthen our criterion for mathematical proof,
if we say that when you reason about a mathematical object,
it's not enough just to reason about it abstractly. You actually have to give an algorithm, a finite deterministic
procedure that constructs that object before your statements can even make sense. That's
a much stronger condition and it immediately rules out certain forms of mathematical proof.
So for instance, a proof by contradiction would not be allowed in such a paradigm.
So suppose I want to convince you that this equation
has a solution.
So one way I could convince you is to make a proof by contradiction.
I could say, assume it doesn't have a solution and then derive some piece of nonsense.
Yes.
My assumption had to be wrong.
Yes.
You can prove existence without construction.
Right, right.
But that only works if I assume that the axiom system
I was using to prove that is consistent
and that the inference rules I was using
to derive that contradiction were actually sound.
If they weren't, if it was an inconsistent axiom system
or the inference rules were not sound,
then I could derive a contradiction
even from a statement that was true.
And so it would be invalid.
And of course we know from Gödel's theorems and from Turing's work that we cannot, for any non-trivial formal
system know conclusively that the system is consistent or that the inference rules are
sound. Whereas instead if I try and convince you by saying look here's a program, here's
an actual algorithm that constructs a solution for you and you can just go and check whether it solves the equation.
Somehow that's much more convincing.
You don't have to assume anything, except maybe the validity of the model of computation,
but you can check that too, right?
So there's no, you're placing a much lower kind of epistemological burden on the underlying
axioms of mathematics.
You can use those to guide you in how you search for things, but ultimately the
ultimate criterion, the ultimate test for truth is,
can you define a deterministic algorithm that actually witnesses the structure that you're talking about?
And so this was intended to be almost a kind of get-out clause
from these limitative results, to say this is a way that we can
bypass many of these issues, not all of them, of
course, but many of them. Now, it's a very,
very significant limitation because it immediately
means that there are very large classes of
mathematical structures that you just can't talk
about at all. You know, the structures where you
can't avoid undecidability and independence. But
but rather astonishingly, there are large parts of
mathematics, including areas like analysis, which you maybe wouldn't have thought would be amenable to constructivism, where many of the most interesting results, you know, the Heine-Borel theorem or whatever, right, you can actually prove using purely constructivist means.
So that's really what constructivism is about. Then there's intuitionism, which is a particular flavor of constructivism that's due to Brouwer.
So once you've decided that you want to work in constructivist mathematical foundations,
then you still have the problem of, okay, well, what am I, what are my underlying rules
going to be?
How do I actually impose those constraints in a systematic way?
And so intuitionism is just one approach to doing that, where you say, okay, I want to
outlaw
non-constructive proofs like proof by contradiction. How do I do that? Well, one
thing that should be outlawed is any use of double negation elimination: the axiom
that not-not-X is equivalent to X. I shouldn't be able to use that, because
that allows me to do non-constructive proofs. And it turns out that if you're going to outlaw
that, you also need to outlaw what's called
the law of excluded middle, the statement that A or not-A is true for any proposition
A.
Sorry, you need to outlaw it or is it equivalent to outlawing that?
It's equivalent.
So one necessitates the other.
And then, you know, so in the kind of logical foundations, that's what you need to do.
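For concreteness, here's how that split looks in a proof assistant: a minimal sketch in Lean 4, where the constructively acceptable direction goes through unaided, while double negation elimination and excluded middle have to be imported as classical axioms.

```lean
-- A → ¬¬A is constructively fine: no classical axioms needed.
example (A : Prop) (a : A) : ¬¬A := fun h => h a

-- Double negation elimination, ¬¬A → A, is not intuitionistically
-- provable; in Lean it comes from the classical axioms.
example (A : Prop) (h : ¬¬A) : A := Classical.byContradiction h

-- Likewise, the law of excluded middle is an axiom, not a theorem.
example (A : Prop) : A ∨ ¬A := Classical.em A
```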
And then that has implications for certain things, like, say, the axiom of choice in set theory,
right? The statement that if you have some collection of non-empty
sets and you assemble a new set by choosing one element from each element
of that collection, then that set is necessarily non-empty; something which is intuitively
obvious for finite and countable collections, but not intuitively obvious for uncountable collections of sets.
Is that the root of the word intuitionism? Like is it actually meant to say that this is more intuitively the case?
It's more...
That's not... so my understanding is that, in a way, yes: these were meant to be kind of the minimum conditions
that matched human mathematical intuition.
Yeah, I don't know.
I know there's a whole history of, like I mentioned, I want to do a whole video on my
gripes with names.
So it could be something philosophical about Kant and intuition.
I have no clue. But do intuitionists not have a concept of
infinity? Because you mentioned Heine-Borel, and so aren't
infinitesimals embedded in that? Right, right. If you say you can do analysis, I don't
understand how that can be done. Yeah, okay, this is a really important point. So
I mentioned that intuitionism is just one
flavor of constructivism and there are many others and there are ones that are more or
less strict. So there's a stricter version of constructivism called finitism, which
is exactly that, where you say, not only am I going to be constructivist, but my algorithms have to terminate in finite time.
So if you're an intuitionist and you don't subscribe to the kind of finitism idea,
you might say, well, I can write down an algorithm that solves this.
There is a deterministic procedure, but it may not necessarily terminate in finite time.
So an example of that would be the integers, right? With the integers, I can write down an algorithm which provably constructs the
complete set of integers.
That algorithm doesn't terminate;
if I were to run it on a finite machine, it wouldn't terminate.
But any given integer can eventually be derived by just repeatedly applying that
procedure.
So, subject to this kind of weaker version of intuitionism, there is actually a way that you can reason about infinite mathematical structures.
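That integer procedure can be written down directly; a minimal sketch of a deterministic algorithm that never terminates but witnesses any given integer in finite time:

```python
# A non-terminating but exhaustive enumeration of the integers:
# 0, 1, -1, 2, -2, 3, -3, ...
def integers():
    yield 0
    n = 1
    while True:      # runs forever on an idealized machine...
        yield n
        yield -n     # ...yet every integer shows up after finitely many steps
        n += 1

# Any particular integer is constructed in finite time:
for k in integers():
    if k == -7:
        print("constructed -7")
        break
```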
But if you then say, oh no, I'm not going to allow myself to do that.
I want all the deterministic procedures that I write down to be constrained so that they always terminate in finite time,
then you become a finitist. And then there's variants of that, like ultrafinitism, which I think is quite fun, where one effectively believes that there is a
largest number, and that that number is decreasing over time because of essentially physical constraints.
Yeah, I like it. I don't believe in it, but I like it.
Well, again, it's this question of what do you mean by belief, right?
I mean, if mathematics is intended to be a kind of tool set for modeling certain processes
of thought, then, you know, there are certain kinds of problems where I think it's useful
to take a finitist or ultrafinitist mindset.
Yeah, I agree.
If you're a mathematical Platonist, which I'm not, then you might say, okay, well,
I believe that the mathematical universe is much larger than, you know, in some ontological
sense than the universe that's conceived by ultrafinitists. But, you know, at least for
me, I at least am a pragmatist. And I say, well, you know, I'm going to use whatever
version of mathematics I think makes sense for this particular problem.
Uh-huh. So what do you believe to be the primary issue, well, the primary
difficulty, in combining general relativity
and quantum mechanics? Right. So that's been formulated in many ways. I mean, I'm going to,
having just sort of slightly slated Penrose for his consciousness views, let me try and
right that wrong a little bit by saying I think Penrose has a really, really nice argument
for why, even just at a conceptual level, quantum mechanics and general relativity are
incompatible, which is the following. Take the two most foundational principles that, in a sense, delineate
how quantum mechanics is different from classical mechanics
and how general relativity is different from classical mechanics.
Those would be the superposition principle in quantum mechanics,
the principle that if you have a system that can be in this eigenstate or this eigenstate,
it can also be in some complex linear combination of them. And on the side of the Einstein equations of general relativity, it's the principle of
equivalence, right? It's the principle that accelerating reference frames and gravitational
reference frames are really the same, or to translate that into slightly more mathematical
terms, that anything that appears on the left-hand side of the field equations in the Einstein tensor
you can move as a negative contribution to the right-hand side in the stress energy tensor.
So Penrose has this really nice argument for why those two principles are logically inconsistent, and the argument goes like this.
Suppose you've got something like a Schrödinger's-cat-type experiment, where you've got, I don't know, a robotic arm that contains a mass at the end that's
producing a gravitational field,
and it's connected up to a radioactive nucleus that has some probability of decaying.
So that arm can be in one of two positions: position A or position B.
And the position that it's in depends on the quantum state of that nucleus.
So now, just naively, what you appear to have done is created a superposition of two different gravitational field configurations. Okay, so if you do that, you can write down a wave equation,
you can write down the wave function that corresponds to that superposition and everything looks just fine.
So so far there's no problem.
But then, if you believe the equivalence principle,
then you should get the same wave function if you do the same calculation in an accelerating frame.
So if you take that whole desktop apparatus, and rather than doing it here on the Earth,
you do it in a spaceship that's accelerating at 9.81 meters per second squared,
and you have exactly the same experimental setup
with the same robotic arm,
you should get the same wave function.
But if you calculate it,
which again is just a standard calculation
in relativistic quantum mechanics,
you get almost the same answer.
The two wave functions differ by a phase factor, which normally wouldn't be too much of a problem;
normally, if two wave functions differ by a phase factor, you say that they're somehow the same quantum system.
But here the phase factor depends on time to the power four. And because of some slightly
technical reasons that have to do with the fact that quadratics have two solutions,
a phase factor that depends on time to the power four is telling you that the wave function
you've written down corresponds to a superposition of two different vacuum states.
And one of the core axioms of quantum mechanics is that you can't superpose two different vacuum states for the very simple reason that
the vacuum state is the kind of zero point from which you measure
energies, you know, using your Hamiltonian. So if you have a superposition of two different vacuum states,
there's no longer a uniquely defined Hamiltonian. There's no longer a uniquely defined energy because there's no rule for how you superpose those vacua.
So it is inherently illegal in quantum mechanics to produce those superpositions.
So somehow by just assuming that you could superpose gravitational fields,
you've been able to use the equivalence principle to violate the superposition principle or
equivalently vice versa. There's a more mathematical way of seeing the same thing, which is to say that, okay, so at a very basic level, all right, quantum mechanics is linear
and has to be linear by the Schrodinger equation. The Schrodinger equation has to be linear
because of the superposition principle. So if I have two solutions to the Schrodinger
equation, then a complex linear combination of those states with appropriate normalization has to also be a valid solution
to the Schrodinger equation. General relativity is nonlinear and has to be nonlinear because
in a sense, if you take the Einstein field equations and you linearize them, you linearize
the gravitational interaction, then what you get is a version of general relativity that doesn't possess gravitational self-energy. So in other words, the reason why general
relativity is a nonlinear theory is because in Newtonian gravity, if I have a mass, that
mass produces a gravitational potential. But the gravitational potential doesn't produce
a gravitational potential. But in general relativity, because of the mass energy equivalence, I have a mass that
produces a gravitational potential, but that gravitational potential has some energy associated
to it.
So it also produces a gravitational field and that produces another gravitational field
and so on.
So there's actually a whole infinite set of these smaller gravitational fields that are
being produced.
So this is often summarized by the slogan that gravity gravitates.
And that appears as a nonlinear contribution
to the Einstein field equations,
as these off diagonal terms that appear
in the Einstein tensor.
And so it has to be nonlinear
because if you were to take two solutions
to the Einstein equations, two metrics,
and just try and add them together,
you quite clearly wouldn't get a third solution
to the Einstein equations in general
because what you've done is you've added
the gravitational potentials,
which is really what the metric tensors are indicating,
but you haven't incorporated all these additional nonlinear contributions induced by the sum
of the gravitational potentials themselves. So the basic problem is that you can't superpose
gravitational fields, right? And that's really what Penrose's argument is kind of indicating,
that if I try and take two metric tensors and just add them in a
way that's consistent with the Schrödinger equation, I'll violate the Einstein field
equations. And if I try and take two solutions to the Einstein field equations and combine
them in a nonlinear way that's compatible with general relativity, I'll violate the
linearity of the Schrödinger equation. And at some level, that's the basic problem, right?
The problem is that the linearity of Schrödinger versus the non-linearity of Einstein means that superpositions of gravitational fields
cannot be described without violating at least one of those two formalisms.
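To put that clash in symbols (standard textbook facts, restated here for reference):

```latex
% Schrodinger evolution is linear: if \psi_1 and \psi_2 are solutions,
% so is any (suitably normalized) complex linear combination:
i\hbar\,\partial_t \psi_k = \hat{H}\psi_k \;\;(k = 1, 2)
\quad\Longrightarrow\quad
i\hbar\,\partial_t (a\psi_1 + b\psi_2) = \hat{H}(a\psi_1 + b\psi_2).
% The Einstein tensor is nonlinear in the metric, so sums of solutions of
%   G_{\mu\nu}[g] = 8\pi G\, T_{\mu\nu}
% are generally not solutions:
G_{\mu\nu}\!\left[g^{(1)} + g^{(2)}\right]
\neq
G_{\mu\nu}\!\left[g^{(1)}\right] + G_{\mu\nu}\!\left[g^{(2)}\right].
```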
Does the conceptual difficulty still persist in quantizing linearized general relativity?
So my understanding is that you can certainly get further with quantizing linearized gravity.
If you just linearize your gravitational interaction, you can not only evolve quantum fields on
top of a curved spacetime described in terms of linearized gravity, which you can also do for
full Einstein gravity, but you can also describe the back-reaction of the quantum fields onto the metric tensor.
I actually don't know how much further than that you can go. What I do know is that
it's definitely a lot easier; you can make much more rapid progress with quantizing gravity if you
assume linearization than if you don't. I think there are still some problems that persist, but
I think they're nowhere near as difficult. So how is it that higher category theory overcomes this?
That's a great question. I mean, the basic answer is I don't know,
but I know there's a very tempting kind of hypothesis, right? So I
mentioned towards the beginning that there are these category-theoretic
models for quantum mechanics, and I think I even mentioned
briefly that there are these models for quantum field theory as well. And the way that works is, so we talked at the start about
these dagger-symmetric compact closed monoidal categories, which are the kind of
basic mathematical setup for categorical quantum mechanics. The problem
with that though is that every time you apply one of these morphisms, every time you apply one of
these time evolution operators, you are essentially picking out a preferred direction of time, right?
If you imagine each of your spaces of states
as a space of states on a particular spacelike hypersurface, then once you construct
a unitary evolution operator that's a solution to the Schrödinger equation, you are selecting
a preferred direction of time, which is of course not relativistic; that's not covariant. So to go from the non-relativistic version of quantum
mechanics to a version that's compatible at least with Lorentz symmetry, you need
to have some systematic way of transforming one time direction to
another. Well, if you think about it
through the category-theoretic lens, there's a systematic way to do that,
which is through higher categories. So if you consider categories, which have objects and morphisms, you can
also consider 2-categories, which have 2-morphisms between those morphisms that allow
you to transform morphisms into each other, not just objects into each other. And so if
you take the 2-category version of the 1-category picture of categorical quantum mechanics, you can
allow the 2-morphisms to correspond to gauge transformations between your evolution
operators. So you're transforming the direction of time in a way that's consistent with, say,
the generators of the Lorentz group. And so what you get in some appropriate special
case is what's called a functorial quantum field theory. So Baez and Dolan constructed this axiomatization
of functorial, and particularly topological,
quantum field theories, based on what's called
the Atiyah-Segal axiomatization, that used these 2-categories
and indeed even higher categories as a way of formalizing
this notion of gauge transformations,
of being able to transform between time directions.
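Schematically, the dictionary being described (my gloss on the discussion, not a rigorous definition) is:

```latex
% objects:      state spaces on spacelike hypersurfaces, \mathcal{H}_\Sigma
% 1-morphisms:  unitary evolution operators,
%               U : \mathcal{H}_{\Sigma_1} \to \mathcal{H}_{\Sigma_2}
% 2-morphisms:  gauge transformations between evolution operators
%               (e.g. re-slicings of time compatible with Lorentz
%               symmetry), \alpha : U \Rightarrow U'
```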
Okay, so that's a nice piece of mathematics and in my opinion is
one of the more promising avenues towards constructing a kind of
mathematically rigorous foundation for quantum field theory. What does it have
to do with quantum gravity? Well, this is where it necessarily becomes very
speculative. But there's an idea that goes back to Alexander Grothendieck, who
I mentioned, this amazing
algebraic geometer from the twentieth century who really developed a whole bunch of these
ideas in higher category theory while he was living as basically
a hermit in the Pyrenees, I think.
So Grothendieck made this hypothesis that's now called Grothendieck's hypothesis, or the
homotopy hypothesis, which goes as follows.
Oh, okay, let me motivate it like this.
So if I have a topological space, you know, it has some collection of points and it has
paths that connect those points.
But I can also have paths that connect the paths and those are called homotopies, right?
So I can continuously deform one path into another and I can use that information to
tell me stuff
about the topology of the space.
So you can use the homotopy information
to tell you about the homology, right?
You can find that there's a, if you're in a donut,
you can see that there's a hole there,
because if you have a loop,
a path that loops around that hole,
you can't continuously contract it to a point
without encountering some discontinuity.
So those homotopies you can formalize as kind of higher-order paths between paths. So in the language of category theory,
you could say my initial topological space is a 1-category that has points and
paths as its objects and morphisms. The first homotopy type is the 2-category
I construct from that, where the 2-morphisms are homotopies between those paths. But then
I can also consider homotopies between homotopies and so on. So I can construct this
whole hierarchy of higher categories and higher homotopy types. And then that terminates at this
infinity-category level, where the hierarchy has some natural endpoint. And somehow we know, from various results in higher category theory,
that all the information that you care about up to weak homotopy equivalence about not
just the space you started from, but all of the intermediate spaces that were in our hierarchy,
all of that information is somehow contained in the algebraic structure of that infinity
category. So the infinity category determines, up to weak homotopy equivalence, everything that comes in the hierarchy below
it. And that's why infinity category theory is so different to even just normal
finite higher category theory; infinity categories somehow contain far more information. There's
actually a specific type of infinity category called an infinity groupoid, because
the paths are invertible.
And Grothendieck was really one of the first people who encouraged topologists to stop
thinking about fundamental groups and start thinking about fundamental groupoids, without
needing to define distinguished base points and things like that.
But the homotopy hypothesis is this really deep statement that kind of goes in the other
direction.
We know that starting from a space and doing this hierarchical
construction, you build up to this infinity category that tells you,
up to weak homotopy equivalence, all the topological information about that space
and all of its homotopy types. Grothendieck then said, well, maybe that's
really the definition of a topological space: that infinity categories are just
spaces, infinity groupoids are spaces, or at least they define the structure of a space and all
of its homotopy types up to weak homotopy equivalence. So it's kind of the converse direction
of that statement. And that's the homotopy hypothesis. It's not proven, it's not even
precisely formulated, but it's a very interesting idea that I think is largely believed to be
correct, right? It aligns well with our intuitions for how algebraic topology should work.
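Schematically, the tower and the hypothesis read as follows (informal notation):

```latex
% The tower over a space X:
%   objects:      points of X
%   1-morphisms:  paths between points
%   2-morphisms:  homotopies between paths
%   n-morphisms:  higher homotopies, for all n,
% assembling into the fundamental \infty-groupoid \Pi_\infty(X).
% Homotopy hypothesis (informal): the assignment
X \;\longmapsto\; \Pi_\infty(X)
% identifies spaces, up to weak homotopy equivalence, with \infty-groupoids.
```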
So, attempting speculation about the relationship between that and physics: going back to the quantum field theory picture for a moment,
suppose you don't just stop at 2-categories, or indeed 3-categories; you keep going, right?
You keep adding these higher gauge transformations. So not just gauge transformations
that deform time direction to time direction,
but higher gauge transformations
that deform gauge transformation to gauge transformation.
You build up a higher homotopy type that way.
What happens when you get to the infinity category limit?
Well, what you end up with is something
that has the structure of a topological space.
So starting from something that's completely non-spatial, you've ended up with a topological
space.
And so in the spirit of these kind of emergent space-time views, you know, like ER equals
EPR and so on, one hypothesis that's quite tempting to make is maybe that infinity category
defines the structure of our space-time, right?
The topology and geometry of space-time emerges in that infinity category limit that I take
by just adding higher and higher gauge transformations starting from categorical quantum mechanics.
And so if that's true, which again, to be clear, we have no idea whether that's true
or not, right?
But if that were true, then the coherence conditions, the conditions that define how
the infinity category relates to all of the lower categories in that hierarchy,
those coherence conditions would essentially be an algebraic parameterization for possible quantum gravity models.
And so, if that ended up being correct, that would be a really nice way to conceptualize and formalize
the essential problem of quantum gravity: that we're really trying to nail down the coherence conditions that relate that infinity category to all the
lower categories in that hierarchy.
Now, what would it be like to study the topology? So there's something called Stone duality, I'm sure you're aware,
which relates topology to syntax.
I've never heard of someone studying Stone duality at the infinity-categorical level, at the topology
that's induced from that category. What does that look like?
Yeah, that's a really interesting question. So yes, the way that Stone duality works is,
I mean, as with many of these things, there's a nice
categorical interpretation in terms of Boolean toposes and things. But the basic idea is that if you have a Boolean algebra, a kind of minimal
algebraic axiomatization for logic, there's a way that you can formalize that in terms
of this mathematical structure of a lattice, right? And specifically an orthomodular lattice,
I think; I may be getting that wrong. But one in which
essentially every point in that lattice is a proposition and then you
have these meet operations and these join operations that become equivalent to your
"and" and "or" operations in logic.
And the reason that's significant is because that same class of lattices also appears in
topology, because there are specific spaces called Stone spaces that are essentially the...
So okay, sorry, let me say that less confusingly.
So if you take a topological space and you look at...
Oh, geez.
It doesn't like topological spaces.
No.
Okay, let's try that again.
Okay.
Sorry about that.
That's being kept in.
We'll keep that part in.
So wait, wait.
Is it angry at you?
No, it was angry at someone. There's a gate just outside, which sometimes opens and closes.
And this is my fiance's Dachshund, who is very, very territorial.
And he was up until now sleeping very soundly and has just woken up.
And so we may get some interruptions.
Well, congratulations on the engagement.
Thank you. Thank you.
Yes, anyway, so what was I saying?
Yes, okay, so if you take a topological space, then you can look at its open set structure.
So if you take the collection of all open sets, you can look at, in particular, you
can look at the open set containment structure.
You can look at which open sets are included in which others.
And when you do that, you again get the structure
of an orthomodular lattice, because the lattice operations
are essentially defined by the inclusion relations
between the open sets.
And so there's this duality between topological spaces
and this class of lattices.
So you could ask, what are the particular topological spaces
that you get if you look for topological spaces
whose open set lattices are the lattices that you get
from looking at Boolean algebras,
and those are the stone spaces.
So they have the kind of topological spatial interpretation
of logic in some sense.
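In compact form, the duality being described (the standard statement, loosely rendered):

```latex
% Stone duality: the category of Boolean algebras is dual to the category
% of Stone spaces (compact, Hausdorff, totally disconnected spaces):
\mathbf{Bool}^{\mathrm{op}} \;\simeq\; \mathbf{Stone}.
% A Boolean algebra is sent to its space of ultrafilters; a Stone space
% is sent to its Boolean algebra of clopen subsets.
```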
And in a way you could say topos theory
is really about trying to generalize that idea, right?
That's another way to think about it.
So every elementary topos has an internal
logic and also every elementary topos has some kind of spatial interpretation because
the axioms of elementary topos theory, the finite-limit axiom and the existence of power
objects or subobject classifiers, are really some generalization
of the axioms of point-set topology, right? Because, you know, that's the topos-theoretic analogue of saying that your open sets have
to be closed, you know, the collection of open sets has to be closed under arbitrary
unions and finite intersections and so on. So toposes have spatial interpretations, and
they also have an internal logic. There's a particular kind of topos called a Boolean
topos, whose internal logic is Boolean algebra and whose spatial interpretation is therefore a Stone
space. But actually you can do the same construction for any elementary topos that you like.
And so then really what you're asking is, okay, when you go to higher topos theory,
the higher category, which turns out to be that infinity category that you
get from the Grothendieck construction, admits a topos structure.
So then you could ask what the internal logic of that is, and what its
relationship to its spatiality is. And what you end up with is the spatial structure
of an infinity homotopy type in homotopy type theory. So homotopy type theory
is another kind of logical interpretation of higher categories,
where, my apologies.
Sorry.
She's crying somewhat.
Hang on.
Wait.
Um, okay.
Yeah.
Yes, yes.
I'm slightly more restricted in my motions now. But if you imagine taking a proof system,
and you say, okay, now I'm going to interpret every proposition in that proof
system as being a point in some space and every proof as being a path, right?
So a proof just connects two propositions together. Then I can prove one proposition
from another; I could prove that two propositions are equivalent. I can also prove that two
proofs are equivalent, right? I can take two paths and I can continuously deform them.
But that proof exists in the next homotopy type, right? Because that's interpreted topologically
as a homotopy between those paths. And so you can do exactly the same construction.
And so in the infinity category limit, what you get is a logic which allows not just for proofs
of propositions, but proofs of equivalence between proofs and proofs of equivalence between those
proofs and so on, right? So that's the internal logic of one of those higher toposes.
It's a logic that allows for proofs of equivalence
between proofs up to arbitrarily high order.
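In the usual homotopy type theory notation, that hierarchy reads (a standard rendering):

```latex
%   a proposition:                  a type A
%   a proof of A:                   a term p : A
%   a proof that x equals y:        p : \mathrm{Id}_A(x, y)
%   a proof that two proofs agree:  h : \mathrm{Id}_{\mathrm{Id}_A(x, y)}(p, q)
% and so on, through identity types of arbitrarily high order.
```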
Interesting.
So in theories of truth,
there's one called Tarski's theory of truth,
where your truth can only speak about the level
that's beneath it.
And then, and this is one of the ways
of getting around the liar's paradox,
is that you say, well, it's truth level one, and then you're speaking about a truth level
two or falsity level two, etc. And then the criticism is, well, what happens Tarski when
you go all the way up to infinity? And I don't think he had an answer, but it's sounding
like there can be a metaphor here for some answer.
Yes, I mean, potentially, it's not something I've thought about a huge amount, but it's certainly the case that in these kind of higher order logic constructions, there are things that
happen at the infinity level that don't happen at any finite level. And it's conceivable that, yes,
you might be able to do a kind of Tarski thing of evading the liar,
or you may be able to do some kind of... Right.
Quine's paradox. I mean, I think the same thing happens with Quine's paradox,
right? Where you try and construct liar-paradox-type scenarios
without direct self-reference, where you say, you know, "the next sentence is false," "the previous sentence
is true," or something. But then the logical structure of those things changes as soon as you go from having a finite cycle of those things to having
an infinite cycle. And I think the same is true of things like the
Tarski theory of truth. And yeah, it may be that there's some nice interpretation of that in terms
of what happens as you build up to these progressively higher-order
toposes in homotopy type theory. I don't know. I mean, but it's an interesting speculation.
What would be your preferred interpretation of truth?
So from a logic standpoint, I'm quite taken with the definition of semantic
truth that exists in things like Tarski's undefinability theorem, which is the idea
that you say a proposition is true if you can incorporate it into your formal system without changing
its consistency properties, right? So if you have formal system S
and you have proposition T, then T is true if and only if
Con(S + T) is the same as Con(S). And that's a fairly neat idea that, I mean, it's
used a lot in logic and it's quite useful for formalizing certain concepts of mathematical
truth and particularly for distinguishing these kind of concepts of like completeness
versus soundness versus decidability, which often get confused. Those become a lot easier
to understand in my experience if you start to think of truth in those terms.
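Written out, the definition he's describing is (one standard way to render it):

```latex
% T is semantically true over the formal system S when adjoining it
% leaves the consistency properties unchanged:
T \text{ is true over } S
\quad\Longleftrightarrow\quad
\mathrm{Con}(S + T) = \mathrm{Con}(S).
```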
Yeah, great. John, that's a formal definition of truth that works for formal statements, but what
about colloquial informal ones?
No, no, no.
I agree.
It's extremely formal.
But I was actually I was about to say that I think it also aligns quite well with some
basic intuition we have for how truth works when we reason about things informally, right?
So if we have some model of the world, right, and that's like our formal system or some informal system, right?
And if you take on board some new piece of information,
generally speaking, the way that humans seem to work
is if we can incorporate that new piece of information
without fundamentally changing the consistency properties
of our model of the world,
we are much more likely to believe that statement is true
than if it necessitates some radical re-imagining
of the consistency properties of our internal representation. And so I think informally,
there's a version of that same definition of truth that has a bit of slack, right? That you say, okay, a proposition could be provisionally true, but how likely I am to
accept it as true depends on how radically I have
to reformulate my foundations of reality in order to incorporate it in a consistent way.
I see. Well, John, I don't know what subject we haven't touched on. This is a fascinating
conversation. Thank you, man.
No, this was fantastic. As you say, it's been a long time coming, but I'm really glad we had this opportunity to chat and, uh, and yeah, I really look forward to
staying in touch.
I have to confess, when you first reached out, I hadn't heard of you. But in
part because you reached out, and in part because of the explosion
of your channel, I've been following a lot of what you've been doing subsequently.
And I think, no, I think TOE is a really fantastic resource, and your particular niche is one that
desperately needs to be filled. And I think you're doing a fantastic job of filling
it.
What would you say that niche is? And I asked just because it's always interesting for me
to hear, well, I have an idea as to what TOE is or what TOE is doing, what theories of
everything the project is. It doesn't always correspond with what other people think of it.
Right. So the reason I really like your channel and the reason I like witnessing these conversations
and to some limited extent participating in them as well is the following reason, right?
It feels to me like you've got these two extremes out there, right? There are these really quite vacuous kind of science popularization
or philosophy popularization, YouTube channels and documentary series and things where you
often have a host who goes very far to kind of play up the fact that they're ignorant
of what's being discussed and they don't really have any strong opinions and it's just, they
go and ask some brainboxes what they think, and it all gets assembled in some nice documentary package. That's kind of one
extreme. Then you have the other extreme of you know you take some physicist, some philosopher
who's been working on their own pet theory for 30 years, and they go make some
long YouTube video about it, just advocating that and shouting down all the competition and being
very kind of bigoted and dogmatic or whatever.
And it feels like what you are managing to do, because you are an extremely intelligent
and well-read person with background in math and physics and who has very wide interests
outside of that, and who more so than any other YouTuber I've encountered, actually
makes an effort to really understand, you know, the
stuff that they're talking about and the stuff that their guests are talking about.
You know, that's even just in itself, that would be incredibly valuable.
But then what I think that allows you to do is to do something that's somehow a really
nice synthesis of the best aspects of those two approaches, whilst avoiding their more unpleasant aspects: which is to be the kind of interested, educated,
motivated interlocutor who is not completely inert, like in the
popular science documentary case, but also not dogmatically pushing
and saying, ah, you know, you're completely wrong, you need to be thinking about loop quantum gravity or something, but just saying,
oh, but how does this connect to that? Or is it possible you could think of
things in this way? Being that kind of Socratic dialogue partner is a role that I
think you are almost uniquely placed, because of your skillset and your personality,
to play in that space. I've never really seen that work in any context outside of your channel. And I
think that's something really quite special.
Well, man, that's the hugest compliment and I appreciate that. Thank you so much. I think
you've captured, well, I don't know if I'm the bigot in that, but I'll interpret that
as me not being a bigot just to sleep at night.
No, no, no, exactly. I mean, I think you handle the balance really well as someone who clearly
has ideas and has opinions and has views, as you have every right to, as someone
who's thought about this as much as anyone else, right? But you're not trying to shout
down opposition, you're not trying to force some view down someone's throat. As far as I can tell,
you are actually, in completely good faith, just trying to explore the space of ideas with genuine intellectual curiosity, and present new perspectives,
and point in directions that people may not have previously thought of, in a way that I think
a lot of people say they're trying to do, but I've very rarely seen anyone actually do. People
might be able to simulate that for a while, but after a while
the mask kind of slips and you see, oh, really they're kind of pushing this viewpoint
or whatever.
And, you know, so part of that is that I don't have that incentive structure of having to
produce and get citations in order for me to live.
Because if I was, then I would have to specialize much earlier and I wouldn't be able to survey
as much before I specialize.
So currently I'm still in the surveying mode, like again it's before I go down and eat.
So I'm lucky in that regard.
And, man, like, holy moly, super cool.
So I have many questions from the audience, by the way.
I mean, just just informally on the following up on that.
I mean, I think the, in many ways,
I think the string theory landscape video
is the perfect embodiment of that sort of side of you, right?
It's the fact that I don't know any other person really
who could have done something like that
because it requires both; you come across as quite critical of string
theory, right? So no string theorist would have made that video, but also no one whose
paycheck depends on them investigating loop quantum gravity would have invested the time
to understand string theory at the level that you had to understand it in order to make
the video. And so it's like, I don't know who else would have filled that niche, right? Yeah, that was a fun project.
I find it's so terribly in vogue to say "I dislike string theory"
but then simultaneously to feel like you're voicing a controversial opinion. And
I wanted to understand string theory first. And, by the way, I love string theory,
right? I think it may be describing elements of reality correctly, and that may be
why, I misspoke, by the way, when I said in the video that it has no predictions; it had
mathematical predictions, maybe still does. And this is something Richard Borcherds emailed
me about, because he said that's something I would correct in the video: it has mathematical
predictions, it doesn't have physical ones. But anyhow, I think that's why it may prove
so fruitful mathematically.
Mm-hmm.
And also, I mean, parts of it have physical predictions; they just
happen to not strictly depend on the string-theoretic interpretation, right?
So there are condensed matter predictions of AdS/CFT that have been quite experimentally
validated, right?
It's just that AdS/CFT came from string theory,
but it doesn't strictly depend on string theory.
Oh, right, exactly, exactly.
Okay, so one of the questions from the audience is,
has John ever done psychedelics?
Yes, so I have tried psychedelics,
and actually I consider it,
I don't want to come across as too much
of a kind of drug pusher,
but I consider it one of the most important things I've ever done. I don't do it regularly because I'm, you know,
afraid of the effect that it has on the brain and things like that. So, you know, I had a list of
things I wanted to try and I tried each of them once. I'm very glad that I did. And the main
takeaway was, you know, the stuff we were talking about before about the computation that a
system is doing, and there's the computation that the observer is doing.
Really what you've got is that you've got these two computations and you've got a third
computation that is the encoding function, the thing that maps a concrete state of the
system to an abstract state in the internal representation of the observer.
And really, all three of those things are kind of free parameters.
And I've been thinking about that kind of stuff since I, not in those terms precisely,
but in some form for a long time, from when I was a teenager onwards.
And in this very nerdy intellectual way, thinking about, oh yes, you know, surely my
model of reality, if my model of reality changes even slightly, then, you know, the interpretations
of the perceptions and qualia that I experience are going to be radically different. But it
doesn't matter how much you intellectualize that idea. It's very, very different if you
just like subjectively experience it, right? And that's in a sense, driving home the fact that if you make what is in the grand scheme of things, an absolutely trivial modification
to your brain chemistry, your modes of decomposing and understanding the world completely just
dissolve as happens with things like LSD. Actually, experiencing that from a first hand
perspective is really, really important.
It kind of convinced me.
I don't want to seem too, okay, it would be too strong to say it ultimately convinced me
of the validity of that way of thinking about things. But it definitely is something that
occurs to me when I'm worried that I'm overplaying this observer-dependence-of-phenomena line.
I kind of think, well, no, actually, if you modify even just very slightly the
neurotransmitter balances in the brain, the internal perception of reality
changes really quite radically. Yes. Okay, well, here's a
physics question. What would happen if an object wider than a wormhole throat
flies into the wormhole?
Does the wormhole widen?
Does the object cork the wormhole?
Does it deform the object?
If it deforms, how?
What about if the object flies at an even faster speed, so 0.9 the speed of light?
Okay, interesting question.
So I mean, wormholes obviously are not known to be physical.
They are valid solutions to the Einstein equations; you know, Einstein-Rosen bridges and extended
Schwarzschild solutions are valid solutions. But the Einstein equations are incredibly
permissive and they permit many, many more solutions than things that we believe to be
physical. So, if you just take the Einstein field equations at face value: okay, one
thing to remember is that when an object is falling into the wormhole, it's not like it has to fit
into the throat, so to speak, right? Because if you imagine the topology of what's going on,
you've got these two sheets, sort of a hyperboloid almost, right? And the wormhole throat connecting them.
But any object you throw in is localized to one of the sheets.
So it's traveling on that sheet and follows the world lines on that sheet. It's not like it's some
plug that's trying to go through the throat, through the space in the middle. It may
well be that the object gets deformed; this will happen due to tidal effects, so that
the object will be stretched in the radial direction and compressed in the angular directions as
it gets pulled in, just due to gravitational tidal effects.
But the fact that the object is quote unquote bigger than the wormhole throat doesn't matter.
From its perception, its world lines are traveling on some smooth region of space.
It never encounters any kind of discontinuity, anything that has to sort of fit through, so to speak.
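For reference, the tidal deformation he describes is the standard geodesic-deviation result for radial infall in a Schwarzschild geometry (of which the Einstein–Rosen bridge is the maximal extension); in units where c = 1:

```latex
\[
  \frac{d^2 \xi^{\hat{r}}}{d\tau^2} = +\frac{2GM}{r^3}\,\xi^{\hat{r}},
  \qquad
  \frac{d^2 \xi^{\hat{\theta}}}{d\tau^2} = -\frac{GM}{r^3}\,\xi^{\hat{\theta}},
  \qquad
  \frac{d^2 \xi^{\hat{\varphi}}}{d\tau^2} = -\frac{GM}{r^3}\,\xi^{\hat{\varphi}}
\]
```

The positive radial component stretches the object; the negative angular components compress it, which is exactly the pattern described above.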
Okay.
Would you kindly ask him,
how would he tie science and spirituality together?
So I think one always has to be a bit careful with that,
right?
I mean, so I'm certainly not,
in the sense that I don't want to take either
of the two extreme positions of saying, oh, you know, science validates the existence of an immortal
soul or something, which I don't believe, but nor do I want to say, oh, science invalidates whatever,
the, you know, the numinous dimension. I think it's, you know, they're largely agnostic to one
another.
I do think there are some things, okay, so actually it comes back to the stuff
we were talking about at the beginning in a way about
the kind of the language that we use
and the models that we use for constructing reality, right?
Like do you actually believe that the universe is a computer?
Do you actually believe that the solar system
is made of clockwork or something?
And again, the answer is no, right? Like, my view is that these are just models we use based on the
ambient technology of our time. And I kind of have a similar feeling about a lot of theology and a
lot of spirituality, right? So if you go and read writings by people like, I don't know, John Duns Scotus or, you know, the medieval scholastic theologians, the questions they're grappling with are really the same
questions that I'm interested in.
You know, so like, okay, for, take a concrete example, right?
So I realized I'm talking about religion here, not necessarily spirituality, but I'll tie
it together in a sec.
But so you could ask the question.
So our universe, right, seems to be neither completely trivial, neither kind of maximally simple nor kind of maximally complicated, right? So there's some regularity, but it's not completely logically trivial. You know, it's not like every little particle follows its own set of laws, but it's also not like we can just reduce everything to one logical tautology.
So as far as I can tell, the first people to really discuss that question in a systematic way, at least within European theology and philosophy (I'm more ignorant of other traditions), were the scholastic theologians, people like Duns Scotus, who asked, you know, why did God create a world which is effectively neither maximally simple nor maximally complex? And Duns Scotus's answer is a perfectly reasonable answer, right? Which is that God created the world that way because that world is the most interesting. And if I were to formulate that question in modern terminology, I would formulate it in
terms of Kolmogorov complexity, right?
I would say why is the algorithmic complexity of the universe neither zero nor infinity?
Why is it some finite value?
And the answer, as far as I can tell, is essentially because of information theory: because we learned from Shannon that the most interesting signal, the one with the highest useful information density, is one that is neither complete noise (maximum information) nor completely simple, but somewhere in the middle. So really, Duns Scotus hit upon a really foundational idea in, you know, modern algorithmic information theory. He didn't formulate it in those terms because he didn't know what Kolmogorov complexity was; that ambient thinking technology didn't exist.
So he formulated the answer in terms of the ambient
thinking technology of the time, which was God
and the Bible and all that kind of stuff.
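As a rough illustration of the "neither zero nor infinite complexity" point: compressed size is a crude, computable upper-bound proxy for Kolmogorov complexity (which itself is uncomputable). This toy script is my own sketch, not anything from the conversation:

```python
# Compressed size as a crude proxy for algorithmic complexity:
# a constant string is maximally simple, random bytes are maximally
# complex, and structured-but-nontrivial data sits in between.
import os
import zlib

samples = {
    "maximally simple": b"a" * 10_000,
    "structured": b"the quick brown fox jumps over the lazy dog " * 250,
    "maximally random": os.urandom(10_000),
}
for name, data in samples.items():
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:18s} compression ratio = {ratio:.3f}")
```

The "structured" case lands between the two extremes, mirroring the Shannon point about the most interesting signals sitting in the middle.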
And so I don't want to be someone who sits here and says,
oh, look at those people.
They were talking about God and whatever,
and weren't they so ignorant?
Because I don't want people to look at, not that I think I'm wrong, but I don't want people to look at
my work in a thousand years and say, oh, look, he thought the universe was a computer, how
silly he was, right? I don't think the universe is a computer. I think it's a useful model
just as they thought God was a useful model. And which it was, and maybe to an extent still
is. So that's kind of my general view about sort of theology and spirituality
is that I think there are some classes of questions where it's useful to think about
things in terms of Turing machines or you know, fiber bundles or whatever it is. And
there are some classes of questions where it is useful to couch them in terms of the
soul or you know, an immortal spirit or God or whatever. And you can do those things without
believing in the ontological reality of any of them, as indeed I don't.
But that doesn't make them not useful.
Now can you actually distinguish those two if you're a pragmatist?
Because it's my understanding, if you're like William James, the utility of it is tied to
the truth of it.
Yeah, I mean, that's, it's a tricky one.
That's something, okay, being completely honest, I don't know.
It's something I've gone back and forth on over the years, right?
Because in a way, so yes, you might say, okay, do I believe in God or do I believe in the
soul in some ontological sense?
And the answer is no.
But if that's your definition of exist or that's your definition of belief, then I also
don't believe in electrons, right?
I don't believe in space time.
You know, I think all of these things are just models, right?
Like do I think that, you know, space time is a useful mathematical
abstraction, but in a sense we know that, you know, in black holes or in the big bang
or something, that's probably an abstraction that loses usefulness and eventually will
be superseded by something more foundational. So do I believe in space time in an ontological
sense? No. Do I believe in particles in an ontological sense?
No.
Interesting.
Whereas you might say, okay, well, therefore that means probably that my definition of
the word exist is not very useful, right?
I should loosen that definition a bit and be a bit more permissive.
So then you might take the William James view of, okay, well, you could say, I believe that
space-time exists in as much as I think it's a useful model for a large class of natural phenomena.
Again, it's a bit like the dinosaur thing we were talking about earlier.
You could say, well, space-time may not exist in an ontological sense, but it's consistent with a model of reality that has good experimental validation or observational validation. But then if that's your criterion,
then I kind of have to admit that, okay, well, in that sense, maybe I do believe in a soul,
right? Because there are, you know, so for instance, you know, I don't believe that there's
any hard-line distinction between, you know, the computations that are going on inside the brain and the computations that are going on inside lumps of rock or something.
And really the distinction is,
it comes back to the point you were making earlier about,
what laws of physics would a cat formulate?
So in a sense, okay, yes,
maybe they exist in the same objective reality,
whatever that means,
but whatever their internal model of the world is,
it's gonna be quite different from mine because cats have not just different brain structure, but they have a different kind
of social order, their culture is different, et cetera. Just like my internal representation of the world will be different from that of a human who was raised in a completely different environment with a different education system, et cetera. And so it's not like some abrupt
discontinuity. There's a kind of smooth gradient of how culturally similar are these two objects or these two entities,
and therefore how much overlap is there in their internal representation of the world.
So, you know, I have more overlap with you than I do with a cat,
but I have more overlap with a cat than I do with a rock, and so on, right?
But there's no hard-line distinction between any of those things, at least in my view, right? So in a way you could say, well, therefore I'm some kind of panpsychist or
I believe that there's, or I'm an animist, right? I believe that there's kind of mind
or spirit in everything. And again, that's not personally how I choose to formulate it. I choose to formulate it in terms of computation theory,
but it's not a completely ridiculous way of translating that view. And, you know, these kind of druidic animistic religions,
you know, a lot of what they're saying, if interpreted in those terms, is perfectly reasonable.
So, yeah, it's just a very verbose way of saying, no, I don't have any particularly
good way of distinguishing between the two. And so in a sense, I have to choose either
between being ultra pragmatist and basically saying I don't believe in anything or being ultra permissive
and saying, yeah, I basically believe in everything, which seem like equally useless filters.
Well, another commonality between us is the way that you characterized the scholastics and their ideas of God, being inspired by them and realizing that they're similar, not the same, of course, but similar to ideas of computation now, or at least to how they were describing it.
And that's one of the reasons why on this channel I interview such a wide range of people.
It's because I work extremely diligently to understand the theories and to be rigorous,
but I also feel like many of the innovations will come from the fringes and then be verified by the center. In other words, the fringes are more creative, but they're not as strict. The center is much more stringent, but then it has too fine of a sieve.
Right, right. It's like those simulated annealing
algorithms that you get in combinatorial optimization,
right? Where you're trying to find some minimum of a function. So you set the temperature parameter really high initially, so you're kind of exploring all over the place, but being very, very erratic. And then gradually over time you lower the temperature parameter. And
I think there's something in that as a model of creativity. That at the beginning you have
to be kind of crazy and irrational and whatever,
and then gradually you have to drop that temperature and kind of become a bit more strict and precise
and slowly start to nail things down.
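For the curious, here is a minimal simulated-annealing sketch matching that metaphor: a high initial temperature allows wild exploratory moves, and a gradual cooling schedule makes the search increasingly strict. The objective function and parameter values are arbitrary illustrations:

```python
import math
import random

def objective(x: float) -> float:
    # A bumpy 1-D landscape with many local minima.
    return x * x + 10.0 * math.sin(3.0 * x)

x = random.uniform(-10.0, 10.0)
temperature = 10.0
while temperature > 1e-3:
    candidate = x + random.gauss(0.0, 1.0)
    delta = objective(candidate) - objective(x)
    # Always accept improvements; accept worsenings with a probability
    # that shrinks as the temperature drops (the "crazy early, strict
    # late" behavior described above).
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999  # cooling schedule

print(f"found x = {x:.3f}, objective = {objective(x):.3f}")
```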
Now the Santa Fe Institute has an interesting, I don't know if it's a slogan, but it's the way that they operate, which is: you have to be solitary and even loopy, and then go back to people to be verified and actually have some wall to push against, because otherwise you're just floating in the air.
Sorry, just to continue being complimentary to you and the channel: I mean, that's another thing which I think is very rare and which you do extremely well, which is to actually take ideas seriously.
You know, I think it's again, it's something which I think a lot of people say that they
want to do or would like to think that they want to do.
But a lot of people seem to be, I know I'm bad at this too, right?
I try, but I think I fail.
Where, if you're presented with some really crazy, very speculative idea and it's hard to make head or tail of what the person is talking about, you know, for a lot of people the instinctive reaction is to say it's complete nonsense. Like, don't waste my time.
And you know, certainly a lot of the mainstream physics community has that opinion, and to an extent has to have that view, because one of the things you learn if you start writing papers about fundamental physics is that you get a huge amount of unsolicited correspondence from people trying to tell you their theory of the universe, right?
But, you know, it's also important to be mindful of stories like the story of Ramanujan, right? Writing to G.H. Hardy and people, you know, he must have seemed like an absolute nutcase, but he actually was this kind of era-defining genius.
And you know, so again, you have to be careful not to set the filter too strict.
And yeah, I think, you know, one thing that I think you do extremely well is really to, I think the expression in the post-rationalist community is, you know, to steel-man these kinds of arguments, right? To say, if you're presented with some idea that seems on the surface completely nuts, let's try and adopt the most charitable possible interpretation of what's happening.
Like how might we be able to make this make sense?
And yeah, it's something I try to do with ideas in physics and theology and other things,
but I think you certainly do it far better than anyone else I've encountered.
Is this related to why you follow the Pope on Twitter?
No, it's not.
That is a completely, yes.
Okay.
Well spotted.
No, that's because... So, all right.
The backstory to that is...
So that Twitter account was made when I was like 15 years old.
And I didn't use it.
I think I sent sort of two or three weird tweets as a teenager
and then let it die.
Okay.
I didn't even realize it was still around.
And then when the physics project got announced,
which was really the first bit of serious
media attention I ever received, right?
And I was having interviews and magazines and other things.
And I got a message from the director of public relations at Wolfram Research saying, they
found your Twitter account and it's got like some, you know, it's got 2000 followers.
I can't remember what it was.
People started following this Twitter account.
I was like, I don't have a Twitter account. And then I, and then I figured out, oh, they
found this Twitter account that I made when I was 15 and, and never deleted and forgot
existed. Now, when I was 15 years old, for some reason, I thought it was funny. So this
is some betrayal of my sense of humor. So I tweeted kind of weird nerdy math stuff
and whatever. And in my teenage sense of humor, I thought it'd be funny if I only followed
two people, the Pope and this person called Fern Britton, who is a sort of daytime television
star in the UK. And I don't know why I thought that was so humorous, but I thought it was
entertaining. And then I think Fern Britton left Twitter or something.
And so when I went back to this Twitter account, the only account I followed was the Pope.
And then I thought, oh, okay, well forget it.
I'll just leave it.
And then I since then have followed a few other people,
but he's still there somehow.
Okay. So it's just a relic you can't bear to get rid of? Like some people can't bear to delete some deceased person from their phone. Like it's for posterity. What's the reason? Why do you still have it?
Yeah, it's partly posterity, and it's partly because there is still a part of me that, for whatever reason, thinks it's kind of funny that I follow basically a bunch of scientists and science popularizers, Christopher Hitchens, and then the Pope.
Yeah. And then the Pope. Yeah.
Okay. So speaking about other people's theories, this question is, does Jonathan see any connections
between the Rulliad, Eric Weinstein's geometric unity, and Chris Langan's CTMU, which is also
known as the cognitive theoretic model of the universe?
So on a very surface level, I guess I see some connections.
I have to confess, so I'm not, I don't know really anything
about either geometric unity or CTMU. I've encountered both. People have told me things
about both. I've been able to find very little formal material about CTMU at all. And the
little I know says, okay, yeah, it probably does have some similarity with, you know,
this general thing we were talking about earlier of, you know, having a model of reality that
places mind at the center
and that kind of takes seriously the role that the observer's model of the universe plays in, you know, constructing an internal representation.
I think that's certainly a commonality, but I'm nervous to comment beyond that because I really don't understand it well enough. With geometric unity, yeah, I don't really know. I mean, even if I were to understand it technically, which I don't, my issue would still be a kind of conceptual one, which is that I think it's kind of insufficiently radical, right?
I mean, really the idea is, you know, use the existing methods from gauge theory to figure out, if we have a Lorentzian manifold with a chosen orientation and a chosen spin structure, here is the kind of canonical gauge theory that we get defined over that structure. And the claim is that that gauge theory unifies gravitation and the three other gauge forces.
Like I say, I certainly wasn't convinced that that's formally true just by reading the paper,
which even if it were true, I would find it a little bit disappointing if it turned out
that the key thing that was needed for radical advance in physics just turned out to be a
bigger gauge group.
That would be a little bit anticlimactic.
Now we've talked about the pros of computational models and you even rebutted, at least from
your point of view, Penrose's refutation of computations.
But this question is about what are the limitations or drawbacks for using computational models? Minus complexity and irreducibility, like that's just a practical issue.
Right, sure, but even conceptually there may be issues, right? So,
I, and again, this is kind of what I mean when I say I'm not dogmatically trying to assert that
the universe is a Turing machine or something. There may be physical phenomena that are
fundamentally non-computable, as
Penrose and other people believe. But I don't think we know that yet. And certainly the
parts of physics that we kind of know to be true, we know are computable. And so computation
is therefore, again, going back to the pragmatist point, computation is therefore at least a
very useful model for thinking about a lot of physics. Whether it's useful for thinking about everything, who knows? Probably not, right? But yeah,
there are open questions. So for instance, it might be the case that, so we know, we
have known since Turing's first paper on computable numbers, that most real numbers
are non-computable. So if you have, you know, if the universe turned out
to be fundamentally based on continuous mathematical
structures and based on real numbers, then, you know,
at its foundational level,
it would be a non-computable structure.
But then you'd still have this open question of, well,
you've still got this issue of the observer.
You could imagine the situation where you have a
continuous universe that's based on non-computable mathematics,
but all experiments that you can in principle perform within that universe would yield computable values of the observables.
And in that case, and in fact, you know, again, there are papers by people like David Deutsch who've argued similar things, right?
That, you know, within, for instance, within quantum mechanics, you have, you know, arbitrary
complex numbers appearing in the amplitudes.
And so, you know, most of those are going to be non-computable.
But eventually you project those onto a discrete collection of eigenstates, and those are computable.
So in the end, it doesn't matter that the underlying model was based on non-computable
mathematics because the part of it that you can actually interface with as an observer still has computable outcomes, which means
that there is still going to be an effective model that's consistent with observation that
is nevertheless computable.
So in a sense, I don't think we know that yet.
I don't think we know whether it's even possible to set up, if the universe were non-computable,
would it be possible to set up experiments that are effectively able to excavate or exploit that non-computability to do, you know, practical
hypercomputation or something?
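A toy numerical version of the point being attributed to Deutsch here (my own sketch, under the stated finite-dimensional assumption): the amplitudes can be arbitrary complex numbers, but a projective measurement only ever returns one of a discrete, computable set of eigenstate labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Arbitrary" complex amplitudes for a single qubit; generic real-valued
# amplitudes would be non-computable, and floats merely stand in for them.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)  # normalize the state

probs = np.abs(psi) ** 2          # Born rule probabilities
outcome = rng.choice(2, p=probs)  # discrete, computable outcome label
print("amplitudes:", psi)
print("measured eigenstate:", outcome)
```

The observer only ever interfaces with the discrete outcome, so an effective computable model of the measurements exists even if the underlying amplitudes are non-computable.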
Wait, sorry, is David Deutsch suggesting that quantum mechanics only has point spectrums
and that there are no continuous spectrums?
Oh, sorry, let me not malign him. That was specifically in the context of, you know, quantum information theory and finite-dimensional Hilbert spaces, right? So, you know, even if you have only a finite eigenbasis, so all your measurements are computable and the eigenstates form discrete sets, the amplitudes are still non-computable in general.
Uh-huh. Okay. I have a niggling point that I want to bring up, something I hear mathematicians and physicists say, but I don't think it's quite true.
So when they're talking about discrete models, they'll say discrete versus continuous.
But it should technically be discrete versus continuum.
Because you can have two graphs which are discrete, and you can have continuous maps
between them.
Because you just need the pre-image to be open.
It's not a continuum, but it's continuous.
Right.
I hear that all the time and I'm like, why does no one say that?
But I just want to know, am I understanding something incorrectly?
No, I think you're not understanding something incorrectly.
I think you're thinking about this more deeply than most mathematicians do,
which is perhaps a positive sign.
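For the record, Curt's point can be stated in one line of standard point-set topology (a minimal formalization, assuming the usual definitions):

```latex
% If (X, \tau_X) is discrete, i.e. \tau_X = \mathcal{P}(X), then for any
% space (Y, \tau_Y) and ANY map f : X \to Y, every preimage f^{-1}(U) is a
% subset of X and hence open, so f is continuous:
\[
  \tau_X = \mathcal{P}(X)
  \;\Longrightarrow\;
  \forall\, U \in \tau_Y :\; f^{-1}(U) \in \tau_X .
\]
% Continuity ("preimages of open sets are open") thus lives happily on
% discrete spaces; being a continuum (uncountable, like \mathbb{R}) is a
% separate property.
```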
I mean, so yes, the distinction between what is discrete and what is a continuum is actually not very well-defined.
So let me give you a concrete example.
So, this is actually something that comes from a method of proof in logic called forcing. It was developed by Paul Cohen, for which he won the Fields Medal.
So, and one of the key ideas in forcing
is this idea called a forcing P-name,
which is a slightly technical idea,
but basically what it allows you to do is to talk about the cardinality of a set from
a different set theoretic universe, from a different domain of discourse.
The significance of that is, so, okay, what do we mean when we say that something is discrete?
Well, what we mean is that it can be bijected with the natural numbers, right?
That it's countable.
You could say it consists of a countable collection of data.
And when we say that something is continuous,
I mean, modulo considerations of the continuum hypothesis and so on,
basically what we mean is that it's uncountable,
that you can't biject it with the natural numbers.
But you know, what is a bijection?
Well, a bijection is a function.
And what is a function?
Well, set theoretically a function is just a set, right?
It's a set of ordered pairs that map
inputs to outputs. So if you have control over your set theoretic universe, you can
control not just what sets you can build, but also what functions you could build. So
you can have the situation where you have a set that is countable from one larger set-theoretic universe, in the sense that the function that bijects that set with the naturals exists in that universe. But if you restrict to a smaller one, that function can no longer be constructed. So internal to that universe, that set is no longer countable. It's effectively gone from being discrete to being continuous. The set itself is the same; it's just that you've made the function that bijected it with the naturals non-constructive.
So, if you like, to a generalized mathematical observer internal to that universe, it looks like it's continuous.
And again, there are versions of this idea that occur throughout topos theory.
P.T. Johnstone, one of the pioneers of topos theory, did a lot of work on these topos-theoretic models of the continuum, where you can have a very similar phenomenon: you can have some mathematical structure that looks discrete from a larger super-topos, but if you take some appropriate sub-topos, you make non-constructible the functions that essentially witness it as being discrete. And so internal to that, it becomes a continuous structure.
And so you can actually do things like locale theory and pointless topology
in a manner that is fundamentally agnostic as to whether the spaces you're dealing with are actually discrete or continuous.
So, yeah, even the question of whether something is discrete or continuous is in a sense observer dependent.
It's dependent on, you know, what functions you can and cannot construct or compute within
your particular model of mathematics.
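Compressed into symbols, the observer-dependence he describes is that countability is a property relative to the ambient model, not of the set alone (a standard set-theoretic point, stated here in my own notation):

```latex
% "A is countable" quantifies over functions available in the model M:
\[
  M \models \mathrm{Countable}(A)
  \iff
  \exists f \in M \;\; f : \mathbb{N} \xrightarrow{\;\sim\;} A .
\]
% A submodel N \subseteq M that omits every such bijection f satisfies
% N \models \neg\mathrm{Countable}(A) for the very same set A; conversely,
% forcing can adjoin a new bijection, collapsing a cardinal.
```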
So what I was saying is that continuity and continuousness are the same to me, but what is continuous is not the same as a continuum. A continuum, I would say, is a spectrum like the real numbers; but continuous is just a property that a function has, and that can be there even when there are discrete phenomena.
Yes, exactly. And in fact, that's related to the fact that you can have a countable space that's not discrete, right? I mean, discreteness in topology means that every singleton, every individual point, is itself an open set. So in a sense it's the finest possible topology, the dual of the trivial (indiscrete) topology. But you can perfectly well have countable topological spaces that are not discrete, and you can have discrete topological spaces that are not countable. So yeah, it's again this problem of...
Sorry, is this further complicated by the Löwenheim–Skolem theorem?
Which in one direction says that if you have something that's countable, you have a model where it's uncountable, and of every cardinality, and vice versa.
Right, right.
Yes.
It's certainly right.
I mean, so the downward Löwenheim–Skolem theorem is used in the forcing construction that I mentioned earlier.
I see.
Okay.
All of which is to illustrate exactly the point that I think you're making, which is: there's the notion of continuity, that pre-images of open sets are open, which comes from analysis and topology. But that's not the same as the notion of a continuum, in the sense that the thing is not countable. And even that notion is dependent on what functions are constructible. One of them is essentially an analytic property: you can have continuous maps between countable spaces, but you can't biject a countable space with a continuum, and so on.
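Standard counterexamples witnessing that last pair of claims (assuming the usual topologies):

```latex
\begin{itemize}
  \item $\mathbb{Q}$ with the subspace topology inherited from $\mathbb{R}$
        is \emph{countable but not discrete}: no singleton $\{q\}$ is open,
        since every open interval around $q$ contains other rationals.
  \item $\mathbb{R}$ equipped with the discrete topology is
        \emph{discrete but uncountable}.
\end{itemize}
```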
Again, John, I don't know what subject we haven't touched on.
This was fascinating.
Yeah, this was fantastic.
No, this was really fun.
I'm really glad we finally got the chance to do this.
And yeah, I hope I didn't become too incoherent towards the end.
But it's...
Same.
No, no, you're totally fine.
I'm glad you enjoyed this episode with Jonathan Gorard. If you'd like more episodes that are similar to it: again, Wolfram himself was interviewed three times on Theories of Everything. It's on screen right now. I recommend you check it out.
Also, the string theory video that Jonathan mentions is called the iceberg of string theory.
And I recommend you check it out.
It took approximately two months of writing, four months of editing with four editors, four rewrites, 14 shoots, and there are seven layers.
It's the most effort that's gone into any single Theories of Everything video. It's a rabbit hole of the math of string theory geared toward the graduate listener. There's now a website, curtjaimungal.org, and that has a mailing list.
The reason being that large platforms like YouTube, like Patreon, they can disable you
for whatever reason, whenever they like.
That's just part of the terms of service.
Now a direct mailing list ensures that I have an untrammeled communication with you.
Plus soon I'll be releasing a one-page PDF of my top 10 TOEs. It's not as Quentin Tarantino as it sounds. Secondly, if you haven't subscribed or
clicked that like button, now is the time to do so. Why? Because each subscribe, each
like helps YouTube push this content to more people like yourself, plus it helps
out Kurt directly, aka me. I also found out last year that external links count plenty toward the algorithm,
which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc,
it shows YouTube, hey, people are talking about this content outside of YouTube,
which in turn greatly aids the distribution on YouTube.
Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, they disagree respectfully about theories, and build as a community our own TOE. Links to both are in the description. Fourthly,
you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio platforms.
All you have to do is type in theories of everything and you'll find it. Personally,
I gained from rewatching lectures and podcasts. I also read in the comments that, hey, TOE listeners also gain from replaying. So how about instead you re-listen on those platforms: iTunes, Spotify, Google Podcasts, whichever
podcast catcher you use. And finally, if you'd like to support more conversations
like this, more content like this, then do consider visiting patreon.com/curtjaimungal and donating with whatever you like. There's also PayPal,
there's also crypto, there's also just joining on YouTube. Again, keep in mind it's support
from the sponsors and you that allows me to work on TOE full time. You also get early
access to ad free episodes, whether it's audio or video, it's audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening
to right now was released a few days earlier. Every dollar helps far more than you think.
Either way, your viewership is generosity enough. Thank you so much.