Theories of Everything with Curt Jaimungal - Stephen Wolfram: Computation, Physics, Going Beyond "Evolution"
Episode Date: September 16, 2025

Get 50% off Claude Pro, including access to Claude Code, at http://claude.ai/theoriesofeverything

As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer!... Visit https://www.economist.com/toe

In this episode, I speak with Stephen Wolfram—creator of Mathematica and Wolfram Language—about a “new kind of science” that treats the universe as computation. We explore computational irreducibility, discrete space, multi-way systems, and how the observer shapes the laws we perceive—from the second law of thermodynamics to quantum mechanics. Wolfram reframes Feynman diagrams as causal structures, connects evolution and modern AI through coarse fitness and assembled “lumps” of computation, and sketches a nascent theory of biology as bulk orchestration. We also discuss what makes science good: new tools, ruthless visualization, respect for history, and a field he calls “ruliology”—the study of simple rules, where anyone can still make real contributions. This is basically a documentary akin to The Life and Times of Stephen Wolfram. I hope you enjoy it.

Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Listen on Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e

SUPPORT:
- Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join
- Support me on Patreon: https://patreon.com/curtjaimungal
- Support me on Crypto: https://commerce.coinbase.com/checkout/de803625-87d3-4300-ab6d-85d4258834a9
- Support me on PayPal: https://www.paypal.com/donate?hosted_button_id=XUBHNMFXUX5S4

SOCIALS:
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs

Guests do not pay to appear. Theories of Everything receives revenue solely from viewer donations, platform ads, and clearly labelled sponsors; no guest or associated entity has ever given compensation, directly or through intermediaries.
Transcript
I'm shocked. Oh my gosh, this is something that completely violates the intuition that I've always had.
It's something that I said couldn't happen.
From discrete space to Darwinian evolution to entropy and the second law, Stephen Wolfram's computational view of the universe makes claims about all of these in a unified fashion.
Today's episode is a treat.
If you're a fan of this channel, Theories of Everything, then you're likely someone who enjoys surveying large swaths of lessons from disparate fields, attempting to see
how they all relate and integrate. Same with me, Curt Jaimungal. Now today, Stephen Wolfram outlines
how this polymathic disposition has helped him solve, to his satisfaction, some of the major
outstanding problems in fields as diverse as computer science, fundamental physics, and biology.
This is a journey through his life in science where I tease out the lessons he's learned
throughout his career and how you can apply them yourself if you also want to make contributions.
I was honored to have been invited to the Augmentation Lab Summit,
a weekend of events at MIT last month,
hosted by MIT researcher Dunya Baradari.
The summit featured talks on the future of biological and artificial intelligence,
brain-computer interfaces,
and included speakers such as the aforementioned Stephen Wolfram
and Andrés Gómez Emilsson.
Subscribe to the channel to see the upcoming talks.
Stephen, welcome.
Thank you.
It's a pleasure.
How does one do good stuff?
It's an interesting question. I mean, I've been lucky enough to have done some science that I think
is fairly interesting over the course of the years. And I wonder, how does this happen? And I look at
kind of other people doing science and I say, how could they do better science? You know, I think
the first thing to understand is when does good science get done? And the typical pattern is some
new tools, some new methodology get developed, maybe some new paradigm, some new way of thinking
about things. And then there's a period when there's low-hanging fruit to be picked. That lasts maybe five
years, maybe 10 years, maybe a few decades. And then some field of science gets established,
it gets a name. And then there's a long grind for the next hundred years or something that
people are doing, sort of making incremental progress in that area. And then maybe some new
methodology gets invented, things liven up again. And one has the opportunity to do things in
that period. I have been lucky in my life because I've kind of alternated between developing
technology and doing science, maybe about five times in my life. And that cycle has been very
healthy. It wasn't intentional, but it's worked out really well because I've spent a bunch
of time developing tools that I've then been able to use to do science. The science shows me things
about how to develop more tools, and the cycle goes on, so to speak. So I've kind of had the
opportunity to sort of have first dibs on a whole bunch of new tools because I made them,
so to speak. And that's let me do a bunch of things in science that have been exciting and fun to do.
I mean, I think a bunch of science I've done, I was realizing recently that it's also a consequence
of sort of a paradigmatic change, this idea of taking computation seriously. And by
computation, the fundamental thing I mean is you're specifying rules for something and then you're
letting those rules run, rather than saying, I'm going to understand the whole thing at the
beginning. It's kind of a more starting-from-the-foundations point of view. Well, what I realized
actually very recently, and it's always surprising how long it takes one to realize these sort of
somewhat obvious features of history of science, even one's own history, is that, you know,
I've been working on a bunch of things in fundamental physics and foundations of mathematics,
foundations of biology, a bunch of other areas where I'm looking at the foundations of things,
using a bunch of the same kinds of ideas, the same kinds of paradigms.
And I'm realizing that a bunch of what I'm doing
is kind of following on from what people did about 100 years ago,
maybe sometimes a little bit more than 100 years ago.
And I was wondering, why is it that, you know,
a bunch of things I'm interested in,
I'm going back and looking at what people did 100 years ago,
I'm saying they got stuck.
I think we can now make progress.
What happened?
I think what happened is that in the 1800s,
there was this kind of push towards abstraction.
There was this idea that you could make formal versions of things. That happened most notably in mathematics,
where kind of the idea emerged that sort of mathematics was just a formal game
where you're defining axioms and then seeing their consequences.
It wasn't a thing about actual triangles or whatever else.
It was an abstract exercise.
And once people had ground things down to that level of abstraction,
the same kind of thing happened with atoms and so on in the structure of physics.
Once people had ground things down to that sort of deconstructed level,
they had something where they didn't know what to do next.
Because what turns out to be the case is that that's sort of the setup for computation,
is you've ground things down to these kind of elementary primitives,
and then computation takes over,
and that's the thing that uses those primitives to do whatever is going to happen.
And so I think a lot of what got stuck then was closely related to this phenomenon
of computational irreducibility that I've studied for the last 40 years or so. That has to do with the fact that,
even though you know the rules, it will not necessarily be the case that you can kind of jump ahead
and say what will happen. You may just have to follow those rules step by step and see what happens.
You may not be able to make the big theory that sort of encompasses and describes everything that
happens. And I think what happened in a bunch of these fields is people kind of ground things down
to the primitives, then they effectively discovered computational irreducibility, implicitly discovered
it, by the fact that they couldn't make progress. And I think that, and things like Gödel's theorem,
which are reflections of computational irreducibility, were kind of the other signs,
more direct signs of that phenomenon. But the end result was people got to these primitives
and then they couldn't get any further. And now that we actually understand something about
the kind of computational paradigm, we can see to what extent you can get further and the kinds of
things that you can now say. So it's kind of interesting to me to see that. I mean, one particular
area where I've only learned the history recently, and I'm kind of shocked that I didn't know the
history earlier, is about the discreteness of space. So, you know, back in antiquity, people were
arguing back and forth, you know, is the universe continuous or discrete? You know, are there atoms? Does
everything flow? And nobody knew. At the end of the 19th century,
finally one had evidence that, yes, matter is made of molecules and atoms and so on, matter is
discrete. Then in the first decade of the 20th century it became clear you could think of light as
discrete. That had been another debate for a long time. And at that time, in the first few decades
of the 20th century, most physicists were sure this whole discreteness thing was going to go all the
way, that everything was going to turn out to be discrete, including space. I didn't know that
because they published very little about this.
And the reason was they couldn't make it work.
After relativity came in, it was like, how do we make something that is like space,
but is capable of working the way relativity says space should work?
And it was also somewhat confused by the idea of space time
and the similarities between time and space, which were more mathematical than physical, really.
That kind of confused the story.
But I think then the thing that became clear was, I think, 1930 was the time when particularly Werner Heisenberg was one of the ones who was really saying, you know, space is discrete. He had some, I think, I need to go look at his archives and things, but I think he had some kind of discrete cellular model of space, and he couldn't really make it work. And then eventually he said, forget about all of this. I'm just going to think about processes in physics as being: these particles come in,
something happens in the middle, but we're not going to talk about that.
We're just going to say it's an S matrix, and then things go out.
And so that's when he started just saying, I'm going to look at the S matrix,
this thing that just says how what goes in is related to what comes out.
I'm not going to talk about what's in the middle,
because I got really stuck thinking about sort of the ontology of what's in the middle.
So then after that, you know, the sort of quantum field theory and quantum mechanics and so on
started working pretty well.
People forgot about this idea of discrete space. And in fact, the methods that they had would not
have allowed them to say much interesting at that time. And finally, through a sort of
series of events that are kind of mildly interesting in my own life, I kind of came to realize
how you could think about that in computational terms and how you could actually make all of that
work. One of the cautionary tales for me is this question about whether matter is even discrete or
continuous. People had argued about it. Someone like Ludwig Boltzmann, you know,
had sort of said towards the end of the 19th century that he believed very much in
the atomic theory of matter. He was like, nobody else believes this. He said, you know, I'm going
to write down what I have to say about this. He says, one man can't turn back the tide of
history. I'm going to write down what I know about this so that when eventually this is
rediscovered, when eventually people realize this is the right idea, they won't have to rediscover
everything. Well, in fact, it's kind of a shame because in 1827, I think, a chap called Robert Brown,
who was a botanist, had observed these little pollen grains being kicked around discretely when they were
on water or something, and it was realized eventually that Brownian motion is direct evidence
for the existence of molecules. So if Boltzmann had known the botany literature, he would have
known that, in fact, there was evidence for molecules that existed.
Just those connections were not made.
And so for me, that's a kind of cautionary tale, because in modern times, you know,
I think about what does it take to kind of find experimental implications of the kinds of
theories that I've worked on?
And, you know, some of those things are difficult.
And it's like, well, you know, it might be 200 years until we can do this kind of investigation,
or it might cost, you know, $10 billion to set up this giant space-based thing.
But it might also be the case that, in fact, you know, somebody in 1972 observed exactly a phenomenon
that is the thing that would be what I'm looking for.
And, you know, in fact, one of the things that's kind of ironic, and I've seen this a bunch of times
in my career in science, is that when people don't kind of
have a theory that says how an experiment should come out, and they do the experiment,
and the experiment comes out differently from the way they expected, they say, I'm not going to
publish this. This must be wrong. And so a lot of things which later on, when you have a different
theory, one might realize, gosh, you know, that experiment
should have come out a different way, got hidden in the literature, so to speak, which is,
you know, this is a feature of the kind of institutional structure of science, but it's something
where, you know, if one's lucky, somebody would have said, sort of done the honest experiment
and just said, this is what we found. And, you know, even though this doesn't agree with the theory
that so far we understand, so to speak. And so I've actually been using LLMs quite a bit
to try and do this kind of thematic searching of the scientific literature to try and figure out
whether you could save the $10 billion, not do the experiment now, but just use the thing that
already got figured out. But, you know, I think in terms of, I don't know, my efforts to do science,
as I was saying, I think one of the things that is sort of a critical feature is methodology
and when new methodologies open up. And, you know, sort of I've been kind of lucky to be
alive at a time when kind of computation and computers first made it possible to do kind of
experiments with computers, so to speak. And, you know, to do that, I built lots of tooling
to be able to do that.
But I think the thing
that's always interesting
about doing computer experiments
and particularly this is a consequence
in the end of this whole computational
irreducibility idea
is almost every computer experiment
I ever do comes out
in a way that I didn't expect.
In other words, I'll do something,
and I've been doing some things even in the last week or so
in a particular domain,
where I've got some theory
about how it will come out
and if I didn't have any theory
about how it would come out,
I wouldn't do the experiment. First point is the experiment has to be easy enough for me to do
that on a whim I can do the experiment. It can't be the case that I've got to spend a month
figuring out how I'd do the experiment, because then I've got to be really sure about why it's worth
doing. You know, it's got to be something where the tooling is such that I can do the experiment
easily. I happen to have spent the last 40 years building that tooling and, you know,
it's available to everybody else in the world too. But, you know, I'm, I'm, as far as I'm concerned,
And I'm the number one user of, you know, Wolfram Language and so on, as far as I'm concerned.
I doubt I'm the person who uses it the most of everybody out there, but I'm the user I care about the most, so to speak.
And, you know, it's a very nice thing to be able to take an idea that I have and be able to quickly translate that into something where I can, you know, make it real and do an experiment on it.
So I have to have some idea about what's going to happen, otherwise I wouldn't do the experiment.
But then I do it, and the thing I typically say to people who work with me is that we have to understand the computational animals are always smarter than we are.
So, you know, you do the experiment, and you find things that you never expected to find.
Why don't you give an example, a specific one?
Yeah, I mean, so let's see. I was looking at almost any kind of simple program,
and the question is, what kind of thing can it do?
And so, for instance, let's say, looking at simple programs that rewrite networks,
okay?
You run the network rewrite thing, what's it going to do?
Well, sometimes it builds this elaborate geometrical structure.
I had no idea it was going to do that.
I thought it was going to make this messy kind of sort of random looking thing.
But no. Actually, it builds this very sort of organized, elegant structure in some particular cases. Or, for example, I've been looking most recently, as it happens, at lambda calculus, a very old model of computation that I happen to have never studied before, but have a particular reason to be interested in right now. And it's a question of, okay, I'll give you an example that happened to me just a few days ago. So there's a very simple specification, simple program. The program will run for a while.
There are different versions of this program.
You can enumerate lots of different possibilities.
It will typically run for a while and, well, usually it will stop.
Sometimes it won't stop.
It will just keep going.
Sometimes it will go on repetitively and so on.
And so I was looking at a bunch of these things and I was wondering, you know, what's the maximum lifetime of one that eventually stops?
So I studied a bunch of different cases and there was, you know, I thought, okay, I found it.
It's a lifetime of, I don't know, a few hundred or something,
and I could find that with some simple experiments.
But then there was one where I started looking at it
and I started looking at pictures of it
and I'm kind of a seasoned hunter at this point
for these kinds of things.
And there was something about it that didn't seem quite right.
There was something about one that I thought was going to just go on forever,
but it seemed like it was doing things
that might not allow it to go on forever.
So I pushed it a bit harder, pushed it harder.
Sorry, what do you mean by you pushed it harder?
Pushed the program harder?
I let it run for longer.
I let it run overnight on a network of computers.
I mean, that's in practice what I did.
Okay.
And, you know, I then come back in the morning and, oops, it ran for, I don't know,
some tens of thousands of steps and then stopped.
Never expected that.
And then, having seen that particular phenomenon,
there was another one where I kind of suspected this one is going to stop,
and, well, let's see, that one I kind of found a method for figuring out how it will develop.
And that one, I think, stops after a few billion steps.
So these are things you just don't expect.
And, you know, this is very typical of what happens in this computational universe.
It's a bit similar to the physical universe.
There are things that happen in the physical universe that you don't expect.
But it's particularly sort of in your face here. You know, in the physical universe,
you don't necessarily know what the underlying rules for something are.
So you could always be wondering, you know, do I just not know enough about how snowflakes form or something?
But in this case, you know the rules.
You know exactly what went in.
And yet you'll sort of have this forced humility of realizing you're not going to be able to figure out what happens.
And sometimes you'll be quite wrong in your guess about what's going to happen.
So, you know, one of the principles in doing that kind of science is, you know,
you'll do the experiments. They'll often come out in ways you don't expect. You kind of just have
to let the chips fall where they fall, which is something in doing science that can be psychologically
very difficult. You know, you had to have had some kind of theory that caused you to start doing
the experiment. Otherwise, you wouldn't have done the experiment. And so there's a, there's something
of a psychological pressure to say, look, I had this theory. This theory has to be correct. You know,
something went wrong with my experiment. You know, let me tweak my experiments.
let me, you know, ignore that part of the experiment or something, because I'm sure this
theory must be correct. One of the things that, you know, is an important thing that I kind of
learned long, long, long ago now is just let the damn chips fall where they'll fall.
And it turns out, one of the things that's happened to me is that
sometimes these chips fall in places that very much violate various kinds of prejudices
that I have. And it's just like, I'm more interested in where the chips actually fall
than in supporting some prejudice that I have.
And as it's turned out, in the end, what's happened is sometimes several years later,
I realize actually the way those chips fell was more consistent with my prejudice than I could ever have imagined.
So I'll give you an example.
So, you know, a lot of things I've done have been sort of deeply deconstructive of the way the universe works.
That is, they're very non-human interpretations of what goes on in
the universe and so on. They're very, you know, the universe is some giant hypergraph of things.
It's a very kind of humanly meaningless object. It doesn't have sort of a resonance with kind of
our human sensibilities and so on. It's deeply abstract, sort of deeply deconstructed in a
sense. And yet, as a person, I'm quite a people enthusiast. You know, I like people. I find
people interesting. I work with people. You know, I run a company full of people and so on. And so
For me, it was always something of a conflict that, on the one hand, I'm interested in people.
On the other hand, the things I'm doing in science are deeply deconstructive of anything sort of human about what's going on in science.
And that was the situation in my world for a couple of decades.
And then I realized, more recently, that the nature of the observer is actually critical in the end.
In the end, sort of the ideas about the Ruliad and so on, the kind of entangled limit of all possible computations,
that's sort of the ultimately deconstructed, dehumanized thing. But what you then realize, what I realized eventually, is how our perception of the laws of physics depends critically on our nature as observers within the Ruliad. In other words, going from a completely dehumanized view of science, that is this totally abstracted Ruliad, it turns out the humans are actually really important in giving us the science that we have.
So I was sort of unhappy, as a matter of some sort of psychological prejudice,
with the idea that everything is deeply dehumanized, because I kind of like humans.
In the end, a couple of decades later, I realized actually the humans are kind of much more
at the center of things than I had ever expected.
So that was kind of an interesting realization.
Another one like that that I resisted for a long time is these things I call multi-way systems,
which I had invented back in the early 90s.
and which I had thought, by the late 90s,
was a possible view of how quantum mechanics might work,
that there are these kind of many paths of history
that are being followed by the universe and so on.
I really resisted that idea because I felt,
sort of egotistically, that I didn't want it to be the case
that there were all these different possible paths of history,
and the one I was experiencing was just one of those paths,
or something like that.
That was the assumption that I had made about what the
Spell that out.
The implications of this idea of multi-way systems would be.
What I realized when we did the physics project
in 2019 was that, in fact, that isn't the right picture. The idea of multi-way systems and the
idea that there are these many paths of history, that's the right story. But the thing to realize is
we are embedded as observers in this universe that is branching all the time. And the critical
point then is that we are branching as well. So from this idea, from this first sort of naive idea
that when you have something where you have many branches of history,
that our experience must just go down one branch.
That's really not the right picture.
Actually, there are a couple of issues.
One is that the branches can merge,
and the other is that our experience can span many branches.
We are extended objects.
Our minds are extended objects in this branchial space,
the space of these possible branches.
I didn't realize that until 2019.
And that means that my sort of ignoring multi-way systems for, you know, close to 30 years was a piece of sort of incorrect prejudice.
And I was kind of lucky enough that eventually I kind of started thinking about it, like, might as well try and take this seriously and see what its consequences actually are.
Actually, Jonathan Gorard was one of the people who was like, you should take these more seriously.
I'm not sure he saw what the outcome would be, but it was a, you know, why are you resisting
this so much? So it's always an interesting thing:
you have to have a belief about how things are going to work,
otherwise you don't even look there, but then you have to, you know, just believe the experiments
you're doing or the things you figure out. I mean, for me,
back in the day, long ago, when I was sort of first doing
physics, I worked a bunch with Dick Feynman, a physicist one of whose great
strengths was he was a really good human calculator. And I can't do that. I'm a good computer
calculator, but not a good human calculator. I built these computer tools because I wasn't a
very good human calculator. But Dick Feynman was really good at doing these calculations and
getting to the right answer. And then he would go back and say, he didn't think anybody would be
impressed by the fact that he got to the right answer by doing this complicated calculation.
He thought people would only be impressed if he could come up with this really simple kind of
intuitive explanation of what was going on, which he often managed to come up with. Then he would
throw away the calculation. Never tell anybody about the calculation. And everybody would be like,
how did you possibly figure out this kind of intuitive thing? And they'd all think, oh, it must be
simple to come up with this intuitive thing. It wasn't simple. It was the result of some long
calculation, which he didn't think anybody would be impressed with because he found it easy to do
those things. You know, for me, the only kind of thing where I know I know what
I'm talking about is I do a computer experiment. It comes out in a certain way. The computer does
what the computer does. There's no kind of sort of I might have made a mistake somewhere type
situation. I mean, I think if I look at the kinds of things that I've tended to do in science,
they sort of mix what one might think of as kind of philosophy and what one might think
of as this kind of very detailed, kind of solid computational experiments and so on.
I mean, that turns out to be, for me, has been sort of a powerful methodology for dealing
with things, to go from on the one side sort of a general, almost philosophical understanding
of how things might work. And then sort of the challenges to be able to sort of think computationally
fluently enough that you can go from that sort of philosophical understanding to say,
the program I should run that is a manifestation of that philosophical understanding, and then let's
see what it actually does. And then I don't have to worry, am I getting it wrong, because the
program just does what the program does. And it's kind of, you know, I found it kind of charming when
my A New Kind of Science book came out in 2002, people saying, but it's wrong. It's like, what does
that mean? What about it was wrong? I don't know. But I mean, people kind of assume that you could
sort of have got the wrong answer by doing the wrong calculation or something. But this is the
nature of computer experiments. You just, you know, you specify the rule, you run the program,
the program does what the program does. There's no, you know, no humans are involved. No possibility
of error exists, so to speak. You can be wrong in the interpretation of what's happening.
You can be wrong in the belief that what's happening in the computer experiment is relevant to
something else. But the actual experiment itself, it just is what it is. Now, you can be confused,
I will say. And the number one source of confusion is when people don't look at everything that
happens in the experiment. There's a certain tendency in science; people have had this
idea that, you know, being scientific is about generating numbers. And so one quite common type of
mistake is to say, well, there's a lot of detailed stuff going on underneath, but I'm just
going to plot this one curve as the result. And that means that you don't really get to see
sort of the detail of what's happening. You're just seeing this one sort of summary, and sometimes that
one summary can be utterly confusing. It can just lead you into kind of thinking the wrong
thing. And so for me, you know, the highest-bandwidth thing that I think
we have to kind of understand what's going on is our visual system, and being able to sort of
visualize what's happening in as much detail as possible is something I've always found very important.
And often when I do projects, in fact, I just got bitten with this in the very latest project
that I was doing. I always try to make sure that I front-load the effort to make the best possible
visualizations.
Interesting.
Because if, you know, the thing that one does that's a mistake is to do the project with kind
of crummy visualizations and then say, and now I'm going to present it and I'm going to make a really good
visualization. Then you do the really good visualization and then you're like, oh gosh, there's
something I can now see that I didn't see while I was doing the project. So I was just a little bit
bitten with that because there was a particularly complicated kind of visualization that I didn't
go to the effort to make quite early enough in the project that I'm currently doing. And so I'm
just having to redo a bunch of things because I realize that I can understand them much more
clearly using this sort of more sophisticated visualization technique. But that's, you know,
just one of these things: when you kind of start
sort of thinking computationally about things, this idea that you should see as deep into the computation
as possible is important, rather than saying all I care about here is this thing of plotting
this one curve, because that's what scientists have done for the last, you know, couple of hundred years.
You know, I think one of the things I realized only very recently about my own personal sort of
scientific journey, is back in the early 80s, I started doing a bunch of computer experiments,
visualizing sort of the computations that were going on, figuring things out from that.
And for me, doing that in 1981 or something like that was completely obvious.
It was like, how could you ever not think about doing something like that?
But the question was, why was that obvious to me?
And, you know, it turns out what I had been doing for several years previously was building my first
big computer system, which was a system for doing algebraic computation, symbolic algebraic computation.
And I had gotten into doing that because I was doing particle physics. In particle physics,
one of the things you get to spend a lot of time doing is computing Feynman diagrams.
Feynman diagrams are this way of working out, well, actually, this S-matrix thing that I'd
mentioned earlier; they're a particular way of doing that, sort of the best way we know to do it.
I have to say, as a sort of footnote to this, Dick Feynman always used to say,
Feynman diagrams, they're kind of the stupidest way to do this kind of calculation. There's
got to be a better way, he said. I remember one time telling him: there's sort of
a series that you generate in Feynman diagrams, and I think at the k-th order in these Feynman
diagrams, I'd worked out that the computational complexity of doing these Feynman diagrams went
like k factorial to the fifth power. So as you go to higher orders, it gets unbelievably much
more difficult to work things out. And so I was telling this to Feynman, and he's like, yeah,
this is a stupid way to work things out. There's got to be a better way to do it. We haven't
known what that better way is.
I'm sort of excited right now because I finally think I understand, kind of in a bigger, more foundational picture, what Feynman diagrams really are and how one
can think about them in a way that does allow one to sort of go underneath that formalism and
potentially work things out in a much more effective way. I mean, this is sort of
a spoiler for some things that I'm still working on, but essentially in Feynman
diagrams, you're drawing these diagrams that sort of say an electron goes here and then it interacts
with a photon, and then the photon interacts with another electron and so on. It's a diagram
of sort of interactions. And what I realized is that really those diagrams are diagrams about
causality. They're diagrams that show the lines that represent here's an electron. Really,
the electron is basically a carrier of causality. There's an event that happens,
an electron interacts with a photon, and that event has a causal effect on some other event,
and that causal connection is represented by this electron line, so to speak, in this Feynman
diagram. That way of understanding things allows one to connect sort of Feynman diagrams to a bunch
of things that have come up in our physics project, to do with things we call multiway causal
graphs, and there's a whole rather lovely theory that's starting to emerge about these things,
but I haven't figured it all out yet. In any case, that's a somewhat
irrelevant sideshow. But back in the late 70s, I was trying to get computers to do these very
ugly, nasty calculations of Feynman diagrams because that was the only way we knew to work out the
consequences of quantum field theory in those days, particularly QCD, which was a young field in those
days. And I, as the teenage me, had the fun of being able to work out a bunch of calculations
about QCD for the first time.
And now they're well-known, sort of classic kinds of things.
But then they were fresh and new because it was a new field.
But one of the things that had happened was I had built a bunch of capability
to do symbolic computation, algebraic computation.
And one of the features of doing algebraic computation is that you don't just get a number
as the answer.
If the answer to your calculation, if the computer spits out 17.4,
there's not a lot you can do with 17.4.
There's not a lot of intuition you can get from 17.4 on its own.
But when the computer spits out this big, long, algebraic expression, it has a lot of structure.
And one of the things that I had kind of learned to do was to get intuition from that structure.
And so, for example, if you're doing, I don't know, you're doing integrals, let's say.
I've never been good at doing integrals by hand, but what I learned from doing thousands of integrals by computer
was kind of an intuition about the structure of what happens in integrals,
and that allows one to sort of jump ahead and say: this complicated integral,
it's going to be, I think, intuitively, roughly of this structure, and that's a big clue in actually being able to solve the thing on a computer.
So the thing that I realized only very recently is that my kind of experience in doing algebraic computation had gotten me used to the idea that what a computer produces will be a thing that has structure from which you can get intuition.
And so when I started thinking about sort of actual simple programs and what they do, it was sort of obvious to me, really was obvious, that I should just make some sort of visualization of what all the steps that were going on were, because I expected to get intuition from kind of the innards of the computation, so to speak.
However, one of the things that I will say is that I started studying, in that case, cellular automata back in 1981, and I, you know, found out a bunch of things about cellular automata. I thought they were pretty interesting. And back in 1981, I generated a picture of this thing called Rule 30, which is a particular cellular automaton that's been my all-time favorite. It has the feature that it has a very, very simple rule, but you start it off from just this one black cell.
It makes this complicated pattern.
Many aspects of that pattern look for all practical purposes random.
It's something that my intuition back in 1981 said couldn't happen.
It said, you know, if the rule was simple enough, there would be a trace of that simplicity
in the behavior that's generated.
And so when I, you know, I generated a picture of Rule 30,
I even put it into a paper I published, but I didn't really pay attention to it
because my intuition said so strongly, nothing like that can happen.
Then, actually, and it's sort of methodologically amusing, in, I think, June of 1984, I happened to get a high-resolution laser printer.
They were a new thing.
They were big, clunky objects at that time.
And I thought I was going to go on some plane flight, and I thought I'll make some cool pictures for my new laser printer.
So I printed out Rule 30 at high resolution and took it with me, and I'm starting to look at it.
And it's like, hmm, what on earth is going on here?
I finally sort of really started looking at it. Well, actually, some other things had happened.
I had also studied other aspects of cellular automata and how they related to theory of computation and so on.
And that kind of primed me for really looking more seriously at this picture and realizing, oh my gosh, this is something that completely violates the intuition that I've always had, that to get something complicated, you need complicated rules in some sense, or you need a complicated initial condition.
this is something new and different.
And it is kind of amusing that, at that point, I was primed enough
from the other things I'd studied, particularly about computation theory, that it only took
a couple of days before I was sort of telling people about it. You know, I was going to
some conference.
And actually, I found recently there was a transcript of a Q&A session at that conference
where I'm kind of talking as if I'd known it forever about, you know, how Rule 30 works
and so on. But actually, I'd only known it for two days. But an important point about that
was, first of all, it's something I had kind of, quote, discovered, but I had not understood. I'd
not internalized it. But to be able to internalize it required me to build up a bunch of other
context from studying, well, a bunch of things about cellular automata, a bunch of things
about computation theory. Given that priming, I was able to actually understand this point about
what Rule 30 is and what it means.
Now, for example, this phenomenon of computational irreducibility,
I'm now at sort of 40 years and counting since I came up with that idea,
and I'm still understanding what its implications are,
whether it's for, you know, AI ethics or for, you know,
proof of work for blockchains or a whole bunch of different areas.
I'm only now understanding, in fact,
the thing that I've understood most recently, I would say,
is that I think a lot of the story of physics
is a story of the interplay
between computational irreducibility
and our kind of limitations
as observers of the world.
That it's something like, for example,
the simplest case is the second law of thermodynamics
where kind of the idea is
you've got a bunch of molecules bouncing around
and the second law says
most of the time it will seem like
those molecules get sort of more random
in the configurations that they take on.
And that's something people have wondered about
since the mid-1800s.
And I had wondered about it too. One of the first things I got really interested in, in physics, back when I was 12 years old or so, was this phenomenon of how randomization happens in the second law.
And what I finally have understood is that it happens because of the sort of computational irreducibility of the underlying dynamics of these molecules bouncing around; there's an interplay between that and the sort of computational limitations of us as observers.
Because we as observers aren't capable of sort of decrypting the sort of computation that happened in these underlying collisions, we just have to say, oh, it looks random to me, so to speak.
And so this phenomenon of computational irreducibility, sort of at 40 years and counting, I'm still understanding sort of the implications of this particular idea.
And I mean, another thing to say about sort of the progress of science, which I can see in my own life, and I can also see from history of science,
is it can take a long time. Once you've had some sort of paradigmatic idea,
it can take a long time for one to understand the implications of that idea. And I know for
things I've done, I fully recognize the fact that it's taken me sometimes 30 years
or more to understand what it was that I actually really discovered. It's kind of like,
if other people don't figure it out for 50 or 100 years, that's kind of par for the course.
Because it took me 30 years to figure out what, you know, what the significance of this or that thing was.
I mean, I see that also in technology development.
And, you know, in building Wolfram Language, we built certain paradigmatic ideas.
We have certain ideas about the structure of what can be in the language.
And then it can sometimes take a decade before we really realize, given that structure, here's what you can build.
You kind of have to get used to the ideas.
You have to grind around the ideas for a long time before you kind of get to the next step
in kind of seeing what's possible from there. I kind of see it as sort of this tower that one's building
of ideas and technology in that case. And the higher you are on the tower, the further you've built up
on the tower, the further you can see into the distance, so to speak, about what other things
might be possible. I think sometimes when people sort of hear about history of science, discoveries
that are being made and so on, there's a certain tendency for people to think that science works
by somebody waking up one day
and just figuring out some big thing.
Right.
From my own efforts in studying history of science and my own experience in my life doing science,
that simply never happens.
There's basically many years, usually a decade, of buildup to whatever it is one is going to
potentially discover. I mentioned what happened with me and Rule 30: once I was adequately primed,
it was kind of like it all happened very quickly. But that priming took many years. And that's the thing:
usually what's reported in the storybook, so to speak, is only that final moment, after that
priming, when you realize, well, actually, this fits together in that way and so on,
and you can then sort of describe what happened. One of the things I learned about Einstein
recently was that in 1904, he'd written several papers about a very different subject. He'd written
a bunch of papers about thermodynamics, particularly about the second law of thermodynamics. I think he was
much influenced by Boltzmann, who was a very philosophically oriented physicist, a person
who sort of believed you could figure out things about physics just by thinking about them
in the style of natural philosophy, so to speak, rather than sort of being driven by, you know,
experiment-to-experiment type thing. And Boltzmann figured out a bunch of things about atoms.
Boltzmann basically had the core ideas of quantum mechanics, the idea of discreteness and so on.
That's what Planck picked up when he studied the black-body radiation problem.
In any case, you know, at the time, 1904, the question was still very much: can you prove the second law of thermodynamics from some underlying principles, and not just sort of introduce the second law as a law of physics?
Could you prove it from some underlying mechanical principles?
And many people had been involved in that. Planck was trying to do that, actually.
Planck was trying to do that when he discovered quantum mechanics, the way of sort of understanding black-body radiation in terms of discrete
photons. I mean, that's a weird story, because people had wondered, why do things get
more random? And they kept on saying, to get randomness, you have to have some magic source
of randomness. So Planck's idea was that infrared radiation, radiative heat, would be sort of the
magic source of randomness that would sort of produce heat in everything and lead to that randomness.
And so he was actually studying that question when experiments came out
about black-body radiation, and he then noticed that these calculations that Boltzmann had done
a lot earlier, where Boltzmann, just as a matter of sort of mathematical convenience, had said,
let's assume energy is discrete, and had then worked out what the consequences of that were.
Planck said, well, actually, if you say that energy really is discrete, you fit this data a lot better
than otherwise. It took Planck another decade to actually believe that this was more than just
a mathematical trick. Einstein was the one who really sort of said photons might be real,
years later. But in any case, the interesting thing there was that Einstein was using kind of this
natural philosophy, philosophical approach to science as a way to think about how things might work.
And he tried applying that to thermodynamics in 1904, and he didn't get it right. You know, he didn't
figure out the second law. He, I mean, as we now know, sort of the paradigmatic ideas that you need
to figure out the second law come from ideas about computation and so on, which were another
close to 100 years in the future, so to speak.
But it's sort of interesting that, you know,
he was applying those kinds of philosophical thinking ideas,
and it was a misfire in thermodynamics.
It was a hit in relativity,
in the photoelectric effect,
and the existence of photons,
and also in Brownian motion.
But it's kind of,
it's an interesting sort of footnote
to the history of science.
And I think, you know,
another point that one realizes there
is that for some things there is an ambient level of understanding
that will allow one to make progress in that area,
and for some things there isn't.
And in fact, Einstein himself, in, I think, 1916,
wrote to somebody, you know,
in the end, space will turn out to be discrete,
but right now we don't have the tools necessary to see how this works,
which was very smart of him.
I mean, that was correct.
You know, it took another hundred years to have those tools.
But it's a thing where, when you think about science, another issue is: are you at the right time in history, so to speak? Is the ambient sort of understanding of what's going on sufficient to let you make the progress you want to make? So one area I've been interested in recently is biology, where biology has been a field which really hasn't had a theory. You know, the closest one gets to a theory in biology is natural selection, from 1859. But that's a very,
very... you know, if we say, well, why is biology the way it is? When we
look at a biology textbook, what's the theoretical foundation of a biology textbook? We haven't
known that. And the question is, can there be a theory of biology? Most biologists don't really
imagine there is a theory of biology. They just say, we're collecting this data, we do these
experiments, we work out these probabilities of things if you're doing medicine. That's the typical
approach. And, you know, we don't imagine that there is an underlying theory. It's not like
physics, where we imagine there might be some underlying sort of primitive theory, so to
speak.
Sorry, why wouldn't computational irreducibility come into play in the biological case for
you to say, or for a biologist to say, that that's the reason why there is no TOE for biology?
Yep.
Well, that's, there's some truth to that.
That's...
Am I jumping ahead?
No, no, that's good, good inference.
I mean, that's some...
I think the reason that biology looks as complicated as it does is precisely because of
computational irreducibility. And the thing that surprised me, this is another sort of story of my
life in science. I'd worked on cellular automata back in the early 80s. A kind of funny fact
is that this was days before the internet, or at least before the web, so you couldn't look things up as easily.
And, you know, people had seen that I was working on cellular automata. So I got invited
to all the theoretical biology conferences, because they figured this must be about, you know,
biological kinds of things
and it's kind of funny
because, looking back now,
the period in the 1980s
was a time when there was
sort of a burst of
interest in theoretical biology.
there'd been another one
in the 1940s
and it kept on sort of dying off
but in 1980s there was something
of a burst of interest
and I realized that,
in fact, somebody who's working with me
now, who's a biologist, sort of
went back and looked at some of these
conferences and things
and I keep on saying
I don't really know much about biology
and he kept on saying
but you were there at all these key conferences.
Wait, just a moment.
It's one thing for you to be invited
because of cellular automata.
It's another thing for you to accept it.
Why did you go?
If you thought, hey, this is irrelevant.
Oh, no, I didn't think it was irrelevant.
I thought it was interesting.
I just, you know, had a certain...
I see.
And, you know, when I talked about cellular automata there
and talked about the way that cellular automata
are relevant to things like the growth of organisms and so on,
I thought it was interesting,
and other people thought it was interesting.
And it turned into a whole sort of subfield
of people studying things.
But I didn't think, when it comes to sort of the foundations of biology
and things like, you know, why does natural selection work,
I didn't think it had anything much to say.
Well, I wasn't sure whether it had anything to say about that.
Back in the mid-80s, I tried to see if it had something to say about that.
I tried to see if you could take cellular automata,
which have these definite underlying rules.
Those little underlying rules are a little bit like genomic sequences, so to speak,
where the genome is specifying the rules for building an organism.
So with a cellular automaton, you can think about it the same way.
The underlying rules specify the rules for building these patterns that you make in cellular automata.
So the obvious question was, could you evolve those rules like natural selection does?
Could you make mutations and, you know, selection and so on on those underlying rules?
And could you see that produce some sort of interesting patterns of growth?
So I tried that in 1985.
I didn't find anything interesting.
Do you mind explaining how you tried that?
Because look, the way that it works with cellular automata, like Rule 30, is you have some grid.
How long is the grid?
Maybe 30 or 40.
No, no, you can.
It's an infinite grid.
Okay.
So you have some grid.
And then you have some rules.
But at no point do you change the rules?
Correct.
Okay.
So even if you were to change the rules, are you changing the rules for only finite parts of the grid?
No, no.
This is saying, instead of looking... and it's not going to work so well with something as simple as Rule 30, because if you go from rule
30 to rule 31, rule 30 and rule 31 behave very, very differently. By the time you're dealing
with sort of rule a billion, rule a billion and one potentially doesn't behave really that
differently from rule a billion. Because many of the cases in the rule, the rule is saying,
if you see right near where you are, if you see, you know, cells red, green, white, then make
this. And, you know, by the time you've got enough colors and so on, you can imagine that, you know,
some of those particular rules won't even be used when a typical pattern is produced.
It'll be something where you can make some small changes to those rules and expect that maybe it won't
make a huge change in what comes out. At least in the short run.
Yes, yes. I mean, well, that's a complicated issue, because it depends on what subpart of the rules you end up selecting as you make this pattern. I mean, it's usually the case that when you produce some pattern from a cellular automaton, for example, every little sub-patch is typically using only some small subset of the rules, and it's using them in some particular way, and it maybe makes some periodic little sub-patch, and then something comes along that's using some different part of the rule, sort of crashes into that, and destroys the simplicity that existed in that region. So in the case of Rule 30, there are only eight cases in its rule, so any change you make to that is a big change, so to speak. By the time you have something with, say, 27 cases in its rule, you can kind of imagine that making a little change in that is less significant to the behavior that will occur.
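To make "cases in the rule" concrete, here is a minimal sketch in Python of how the standard numbering for elementary cellular automata unpacks a rule number into its eight cases; the variable names are just illustrative:

```python
# Decode an elementary cellular-automaton rule number into its 8 cases.
# Each neighborhood (left, center, right) of 0/1 cells maps to one output bit.
rule_number = 30
cases = {(a, b, c): (rule_number >> (4 * a + 2 * b + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}
print(cases)
# Rule 30: 111->0, 110->0, 101->0, 100->1, 011->1, 010->1, 001->1, 000->0
# A 3-color rule has 3**3 = 27 such cases, so one changed case matters less.
```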
But, okay, back in the mid-1980s, I tried this. I tried looking at slightly bigger rules and
making small changes. I didn't find anything interesting.
Regarding biology? Yes. Okay. Yes. No, I was interested in kind of a model for natural
selection. It was a time when artificial life was first being talked about and worked on,
and, you know, Chris Langton had done a bunch of nice stuff with my cellular automata
and thinking about artificial life and so on, and it seemed like sort of the obvious thing to do. But it didn't work. And so for years, I was like, it's just not going to work. And then,
last year, actually, I was interested in the foundations of machine learning. And the big lesson
of machine learning, that we particularly learned in 2011, was: with a neural net, if you bash it hard enough,
it will learn stuff.
And, you know, that wasn't obvious.
Nobody knew that you could get sort of a deep learning, you know,
a deep neural net to recognize images of cats and dogs and things.
And sort of by accident, it was discovered that if you just leave the thing learning long enough,
if you leave it training long enough, it comes out and it's actually succeeded in learning something.
So that's a new piece of intuition, that you can have a system like a neural net.
A neural net is much more complicated than one of these cellular automata.
You can have a system like that.
You just keep bashing it, you know, a quadrillion times or something, and eventually it will successfully achieve some fitness function; it will learn something, it will do the thing you want it to do.
So that was a piece of new intuition.
I was actually writing something about the foundations of machine learning, and I thought, let me just try this experiment that, you know, I thought I'd tried in the 1980s and it hadn't worked.
But now we know something different from our studies of neural nets.
let me try running it a bit longer. And by golly, it worked. And I felt a bit silly for not having discovered this at any time in the intervening, you know, 40 years or something. But nevertheless,
it was really cool that it worked. And that meant that one could actually see, for the first time, in a more detailed way, how natural selection operates. And what's really going on in the end is that computational irreducibility lets one go from these quite simple rules to these very elaborate kinds of behavior. The fitness functions, the things that sort of are determining whether you survive or not, those are fairly coarse in biology. But let's imagine you have a coarse fitness function, like: what's the overall lifetime of the pattern before it dies out, let's say, or how wide does the pattern get? You're not saying how it gets wide, you're not saying that particularly, but just: how wide does it get?
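For concreteness, here is a minimal sketch in Python of the kind of experiment being described: point-mutating the rule table of a 3-color cellular automaton and keeping mutations under a coarse fitness function such as pattern lifetime. The grid size, step cap, and simple hill-climbing loop are illustrative assumptions, not Wolfram's actual code:

```python
import random

K = 3            # colors (assumption: 3-color, nearest-neighbor rules)
WIDTH = 101      # finite window standing in for the infinite grid (assumption)
MAX_STEPS = 200  # cap on how long each candidate pattern is run

def lifetime(rule):
    """Run the CA from a single non-white cell; return how many steps the
    pattern survives before dying out (all cells white), capped at MAX_STEPS."""
    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1
    for t in range(MAX_STEPS):
        if not any(cells):
            return t
        cells = [rule[cells[i - 1] * K * K + cells[i] * K + cells[(i + 1) % WIDTH]]
                 for i in range(WIDTH)]
    return MAX_STEPS

def fitness(rule):
    # Coarse fitness: lifetime of the pattern; patterns that never die out
    # within the cap score zero (one possible choice among many).
    t = lifetime(rule)
    return t if t < MAX_STEPS else 0

# Single-parent adaptive evolution: mutate one of the K**3 rule cases at a
# time, keeping the child if fitness does not decrease (neutral moves allowed).
rule = [random.randrange(K) for _ in range(K ** 3)]
best = fitness(rule)
for _ in range(2000):  # real experiments "bash it" far longer
    child = rule[:]
    child[random.randrange(K ** 3)] = random.randrange(K)
    f = fitness(child)
    if f >= best:
        rule, best = child, f
print("longest lifetime found:", best)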
Sure. Turns out with those kinds of coarse fitness functions, you can successfully achieve high fitness, but you achieve it in this very complicated way. You achieve it by sort of putting together these pieces of irreducible computation. And that means that, in the end, the answer, I think, to why does biological evolution work is that it is the same story as what happens in physics and mathematics, actually. It is an interplay between underlying computational irreducibility and the computational boundedness of, quote, observers of that computation.
So in the case of physics, the observers are us, you know, doing experiments in the physical
world.
In the case of mathematics, it's mathematicians looking at the structure of mathematics.
In the case of biology, the observer is kind of the environment.
It's the fitness function.
The fitness function is kind of the analog of the observer.
And the fitness function is saying: you're a success, you achieved this kind of coarse objective. And the reason that biological evolution works
is that kind of there's so much power in the underlying irreducible computation that you're able to achieve many of these kinds of coarse fitness functions. So, you know, if you imagine that
the only way, I don't know, an organism could survive is if it breaks out of its egg
and immediately it, you know, computes a thousand primes and does all kinds of other
weird things, right? It's not going to survive. I did that. Right. No, I did that when I was born. As one does. But, you know, one isn't going to be able to hit that particular
very complicated target. What actually happens is the fitness functions are much coarser than that,
and that's why biological evolution has been able to work. But if you say, well, what's actually
going on inside. What's going on inside is pieces of computational irreducibility being stuck
together in a way that happens to achieve this coarse fitness function. By the way, this is the same thing
as what's going on in machine learning, I think. In machine learning, it's the same story that the fitness
function in that case is you're trying to achieve some training objective. And you do that by sort of
fitting together these lumps of irreducible computation. The analogy I've been using is it's kind of like
building a stone wall. If you're doing kind of precise engineering, you might build a wall by making
precise bricks and putting them together in a very precise way. But the alternative is you can make a
stone wall where you're just picking up random rocks off the ground and noticing, well, this one
more or less fits in here, let me stick that in that way, and so on. That's what's going on in
machine learning. You're sort of building this stone wall. If you then say, well, why does this particular
feature of this machine learning system work the way it does? It's, well, it's because we happen to find
that particular rock lying around on the ground, or because we happen to go down this particular
branch in the random numbers that we chose for the training. And so it's kind of an assembly of these
random lumps of irreducible computation. That's what we are too. I mean, in biology, in the history of life on Earth, you know, the dice have been rolled in particular ways. We have stuck together these sort of lumps of irreducible computation, and we make the organism
that we are today.
There are many other possible paths
we could have taken
which would have also achieved
a bunch of fitness objectives,
but it was just a particular
historical path that was taken.
One of the things that's kind of
a sort of slightly shocking thing to do
is to take one of these
evolved cellular automata
that looks very elaborate,
has all these mechanisms in it,
has all these patches
that do particular things
and that fit together
in these interesting ways and so on.
And you ask an LLM,
say, write a description
of this pattern
in the style of a biology textbook.
Hmm.
And it's kind of shocking, because it sounds just like biology, because it's successfully describing it; sometimes it even makes up names for things.
You know, there's a distal triangle of this and so on, interacting with this and this, and you open a biology textbook and it reads kind of just the same.
You know, biology, the detail in biology, is a description of this particular sequence of pieces of computational irreducibility that got put together by the history of life on Earth and that makes us as we are today.
Now, you know, since we care about us, it's very worthwhile to study that detailed history, that detailed, you know, lump of computational irreducibility that is us.
But if you want to make a more general theory of biology, it had better generalize beyond the details of us and the details of our particular history.
And the thing that I've been doing most recently is, I think, the beginnings of finding a way to talk about sort of any system that was adaptively evolved. So if we look at all possible rules that could be going on in biology, many of those rules wouldn't be ones that would have been found by adaptive evolution with coarse fitness functions. So, a way to think about this: we look at, what is biology doing? Biology is, if nothing else, a sort of story of bulk orchestration of molecular processes.
One might have thought at some time in the past that biology is sort of just chemistry.
And by that I mean, in chemistry, we sort of imagine we've got liquids and so on.
And they just have molecules randomly bumping into each other.
But that's not what biology mostly seems to be doing.
Biology is mostly a story of detailed assemblies of molecules that orchestrate, you know, this molecule hooks into this one and then does this and et cetera, et cetera, et cetera.
And sort of discoveries in molecular biology keep on being about how orchestrated things are, not about how this molecule randomly bumps into other molecules.
Right. It's sophisticated.
Yes. But it also has this sort of mechanism to it. It's not just random collisions.
It's this molecule is guided into doing this with this molecule and so on.
It's a big tower of things, a bit like in these evolved cellular automata,
which also do what they do through this big tower of detailed sort of applications of rules and so on.
But so what I'm interested in is to have a theory of bulk orchestration.
That's something that can tell one about
what happens in any system that is sort of bulk orchestrated, which can include things like a
microprocessor, let's say, which has, you know, its own sort of complicated set of things that
it does. A microprocessor is not well described by the random motion of electrons. It's something
different from that. But what is it? And does the theory that you make of the microprocessor depend on the details of the engineers who designed it, or are there necessary features of any system that has been built to achieve certain coarse-grained purposes?
So I'm sort of going down this path. We're not there yet. I mean, in physics, the idea of statistical mechanics is an idea a bit like this. Because the idea of statistical mechanics is: once you have enough molecules in there, you can start making conclusions just by looking at sort of averages, based on what all possible configurations of molecules are like, without having to have any details about the particulars of these collisions and those collisions and so on. So it's a case where the statistics win out; the fact that there's lots of stuff there is more important than the details of what each of the pieces of stuff does. In the case of biology, there's one additional
thing you seem to know, which is that the stuff you have has rules that were subject to some
kind of adaptive evolution. And even though we don't know what the purpose was, you know,
when people use sort of natural selection as a theory in biology, they look at some weird creature
and they say, it is this way because.
And sometimes that because is less convincing than other times.
But that's been the model of how one makes the theory in biology.
And so I think what I'm interested in is,
is there a theory that's independent of what the because is? Just that there is a because, that there is some coarse-grained fitness that's achieved. Is that enough of a criterion to tell you something about sort of this bulk orchestration, this limit of a large number of things subject to that kind of constraint?
Actually, I figured out something this morning about this.
We'll see whether it pans out.
But one of the things that's really, really striking about these kinds of systems, where one has done this kind of adaptive evolution on a large scale, is that you just make pictures of them
and they just look very organic.
It's a strange thing.
And I think I have an idea about how to sort of characterize that more precisely, which may lead one to something that is kind of a theory of bulk orchestration, the way, for example, information theory has been a general theory of the sort of all-possibilities type situation with data, or with statistical mechanics and so on.
But it's interesting methodologically, perhaps: when I'm working on something like this, it's, for me, an interesting mixture of
thinking quite sort of philosophically about things and then just doing a bunch of experiments,
which often will show me things that I absolutely didn't expect. And I suppose there's a third
branch to that, which is doing my homework, which is, okay, so what have other people thought about
this? And what I've found is that, you know, when I try and learn some field, I often spend
years kind of accumulating knowledge about that field. And I'm, you know, I'm lucky enough I bump
into the world experts on this field from time to time, and I'll ask them a bunch of questions,
and usually I'll be kind of probing the foundations of the field. And one of the things I learned
about sort of doing science is you might think, you know, the foundations of a field are always
much more difficult to make progress in than some detail, you know, high up in the tree of that
field. This is often not the case, particularly not if the foundations were laid down 50 or 100 years
ago or something. Because what you discover is that when you talk to the
people who are sort of in the first generation of doing that field of science and you say, well, what about
these foundations? They'll say, good question. We wondered about that. We're not sure those foundations
are right, you know, et cetera, et cetera, et cetera. Now you go five academic generations later and people
will say, of course, those foundations are right. How could you possibly doubt that? You know,
it just becomes: they're building on top of this thing that's far away from the foundations. Well, often the foundations are in a sense very unprotected. Nobody's looked at
them for decades, maybe longer. And often the ambient methodologies that exist have changed
completely. In modern times, and in the things I've done, sort of the computational paradigm is the
biggest such change. And then you go, look at these foundations, and you realize, gosh, you can
actually say things about these foundations, which nobody has even looked at for ages because they
were just building many layers above those foundations. And so I think that's been one of the things
I've noticed. And, you know, for people who are sort of doing science and want to make some progress in some particular area, it's like: well, what is the foundational question of this field? And sometimes people will have to think about that quite a bit. You know, what really is the question that we're trying to answer in this field
foundationally, not the thing that is the latest thing that the latest papers were talking about
and so on, but what really is the foundational question? And then you say, well, can you make
progress on that foundational question? And quite often the answer is yes. And even the effort
to figure out what the foundational question is is often very useful. But by the way, when you do
make progress on a foundational question, the kind of trickle-down into everything else is dramatic, although often the stream trickles into a different area than where the existing stuff had been built.
So in other words, you make progress on the foundations and now there are new kinds of questions
about that field that you get to be able to answer, even if the existing questions you
don't make very much progress on. They've been well worked out by the existing foundations.
You make new foundations, you can kind of answer a new collection of questions.
So I think that's a sort of a typical pattern.
And for me, kind of this effort, you know, I tend to try and sort of ambiently understand some field for a while.
And then I'll typically form some hypothesis about it.
Hopefully I'll be able to turn it into something kind of computational.
And then I can do the experiments.
I can know that I'm actually getting the right answer and so on.
And then I try and go back.
And often at that point, I'll try and understand how does this relate to what people have thought about before.
And sometimes I'll say, wow, that's a great connection.
You know, this person had figured out this thing, you know, 50 years ago that, you know, was pretty close to what I was talking about now.
I mean, like the idea of computational irreducibility, for example.
Once you have that idea, you can go back and say, well, when did people almost have that idea before?
Like Newton, for example. He'd worked out celestial mechanics and calculus for the motion of planets, and he made this statement, what did he say, something like: to work out the motions of all these different planets is beyond, he said, if I'm not mistaken, the force of any human mind. So he had figured out that even though in principle he could calculate the motions of these planets, it was, in his time, as he very diplomatically put it, beyond the force of any human mind. The force of the mind of God would be capable of working it out, because, after all, the planets were moving the way they were moving. But that was sort of an interesting precursor: you can go back and see that already, you know, he was thinking about those kinds of things. But sometimes you find that people just
utterly missed something that later seems quite obvious. Like, for example, one of the things
in physics is the belief in space time. The belief that time is something very similar
to space, which is something that's been quite pervasive in the last hundred years of physics,
and I think it's just a mistake. I think Einstein didn't really believe that. The person who brought in
that idea was Hermann Minkowski in 1908, who kind of noticed that this thing that Einstein had defined, this kind of distance metric, the proper time, was, you know, t squared minus x squared minus y squared minus z squared. And Minkowski was a number theorist, and he'd been
studying quadratic forms, sums of things with, you know, with squares in them and so on.
He's like, this is a quadratic form. It's so great. And look, you know, time enters into this
quadratic form just like space. Let's bundle these things together and talk about space time.
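For reference, the quadratic form being described is the Minkowski proper-time interval, in units where the speed of light is 1:

```latex
\tau^2 = t^2 - x^2 - y^2 - z^2
```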
And I think that was, as I said, I think that was a mistake. I think that misled a lot of people.
My own view, and I think it's clear that this is the way things work, is that the nature of space, as kind of the extent of some data structure, effectively, some hypergraph, for example, is very different from the kind of computational process that is the unfolding of the universe through time. Those things really are different; it's then a matter of sort of mathematical derivation to find out how relativity works and makes them enter into equations and things together, but that's not their underlying nature. But in any case, that's a thing where, when you go back and look at the history and ask why people believe in spacetime, why people believe that space and time are the same kind of thing, you eventually discover that piece of history and you say: they went off in the wrong direction. Sometimes you're like, wow, they figured it out,
you know, they really were in the right direction, or they were in the right direction, but they didn't
have the right tools or whatever else. And for me, that's a very important grounding to know
that I know what I'm talking about, so to speak. Like recently, I was studying the second law of thermodynamics. I think, after sort of 50 years of thinking about it, that I've finally nailed down how it really works
and how it sort of arises from this kind of interplay
between computational irreducibility and our nature as observers.
And I was like, let me really check that I'm actually getting it right.
And, you know, I'd known about the Second Law for a very long time.
And the Second Law is one of these things, which in a textbook,
you'll often see the textbook say,
oh, you can derive that entropy increases.
You say, well, actually you can flip around this argument
and say that the same argument will say that entropy should decrease.
My favorite is the books where the chapter ends: this point is often puzzling to the student. It's been puzzling to everybody else, too. But, you know, the question was, why did people come up with the things they came up with in this area? And so I went and kind of untangled the whole history of the Second Law, which I was surprised nobody had written before. Although after I figured it out, I wasn't as surprised, because it's really complicated, and it requires kind of understanding various points of view that people had that are a little bit tricky to understand. But I think, in the end, I feel very confident about how what I figured out fits in to what people have known before, what people have been able to do experiments on and so on. And that, for me, is an important sort of step in the kind of cycle: the philosophy, the computational experiments, the homework, so to
speak. I mean, I find that, you know, if I study some field and I'm like trying to read all
these papers and so on, and it's a complicated field with lots of complicated formalism,
I just, I find it difficult to absorb all that stuff.
For me, it's actually easier just to work it out for myself and then see where those chips
have fallen and then go back and figure out what the history is and see how what I've done
relates to that history.
Maybe sometimes, it's not so common, I have to say, but sometimes I'll discover something
in the history where I say, that's an interesting idea, and I can use that idea in something
that I'm trying to do.
It tends to be the case that I've kind of learned enough of the ambient history beforehand that I'm not usually surprised at that level. But it's always
a humbling experience to learn some new field. I mean, a field I've been trying to learn for a while is economics, which is deeply related to kind of the structure of human society and so on. And I'm still at the stage, as happens whenever I learn
a field, that for a while, every new person I talk to will tell me something I didn't know
that makes me very confused about what is actually going on in the field.
I feel like I'm just at the, you know, just at the crest of the hill now for economics.
It's just starting to be the case that I've heard that idea before,
and I'm beginning to understand how it fits into the sort of global set of things that I'm thinking about.
And I do think, by the way, that it's going to turn out that, you know, economics is like biology. Well, economics thinks it has more theories than biology thinks it has. But there's sort of a question of what kind of a thing economics is. What are its foundational questions?
What can one actually understand?
I'm not there yet.
But I think it is really clear to me that the methodologies that I've developed are going to be very relevant to that.
I don't know how it will come out.
That's one of these things you have to let the chips fall as they will.
You know, I don't know how it's going to come out.
I mean, I have kind of prejudices about what I'm going to learn
about, you know, cryptocurrencies and things like this, which is an interesting case because
it's kind of a case where sort of all there is is the kind of economic network. There isn't
the kind of obvious underlying human utility of things. I mean, just to give a preview of
some of the thinking there: one of the questions in economics is, what is value? What makes something valuable? And my proto-theory of that, which is subject to change because it's not fully worked out at all, is that, in the end, the main thing that's valuable is essentially computational reducibility.
In other words, in the world at large, there's lots of computational irreducibility. Lots of things
that are unpredictable, lots of things you can't do quickly and so on. But we humans have one
thing in fairly short supply, and that's time, because at least for the time being, we're mortal,
we have only a limited amount of time.
So for us, anything that kind of speeds up,
what we can achieve is something that is valuable to us.
And computational reducibility is the possibility
of finding these little pockets where you can kind of jump ahead,
where you're not stuck just going through, letting things work as they work, so to speak. So I think my proto-theory is that the ultimate source of value is pockets of computational reducibility.
The fact that you can sort of put together a smartphone,
and it's a whole smartphone,
rather than having to get all the ingredients together
and just go step by step, so to speak.
If we had infinite time,
we could build every smartphone from scratch ourselves,
but because we only have finite lifetimes,
it's worthwhile for us to have the finished goods, so to speak.
You know, it's the very beginning.
I'm just trying to understand these things.
So roughly speaking, anything that saves us time will be valuable, or at least what is valuable will save us time.
And then what saves us time is something that's computationally reducible.
Yes, yes, that's the idea.
And I mean, I think there are sort of questions about, you know, when you invent something, how do you build on that invention? How do you take that lump of reducibility and make use of it, and so on? What is the value of that invention? That's not something usually taken into account in economics. There's the scarcity of stuff, but not the value of an idea and so on.
And in biology, we can see the same thing happening. There is some sort of piece of
reducibility, some mechanism that you see being found, and then that mechanism is reused.
I don't understand how this stuff works yet, but, you know, in the end, it's kind of a theory that allows one to understand something about the function of things, as well as about the mechanism by which the things occur, so to speak.
But, you know, for me, in terms of doing a project like that, it's sort of this mixture of the philosophy of what's going on, the kind of conceptual framework; a bunch of computer experiments to see what actually happens; and then sort of a doing-one's-homework understanding of how this fits in to what other people have figured out.
And it's a thing that I've done for the last 30 years or so now. You know, in academia, it's often like: you write a paper, and then you're like, get some citations, you know, boom, boom, boom, whatever.
I have to say I find it amusing that people pointed out to me that citations to my own stuff sort of got corrupted in various databases, and so people are now copying the utterly incorrect citations that are just complete nonsense. And you can kind of see that they didn't look at anything; they just went click, click, click. You know, now I've papered my paper, so to speak, by putting in the right citations.
The thing that's amusing to me right now is that, because papers aren't on paper anymore, it's starting to be the case that you can have, you know, citations where they'll cite all thousand authors explicitly, which is very nice and egalitarian. And it means that the length of the... you know, when I'm reading papers, I'm always like, something exciting is about to happen.
Something exciting... and then, oh, shit, we reached the end of the paper.
Nothing happened.
Right.
And I thought I hadn't reached the end, because of the scroll thumb, you know, as I'm moving down the window.
You can see the percentage.
Yeah, yeah, right.
It's like there's still a long way to go.
But no, actually, it's just pages and pages and pages of citations.
But, you know, what I've always thought, at least for the last 30 years, is that the much more interesting thing is the actual narrative history of the ideas. That's the thing that really matters.
It's not, you know, it's like it's nice to be able to sort of cite your friends or whatever else you're doing.
But what's much more significant for sort of the history of ideas is can you actually thread together how this relates to other ideas that have come up?
And so when I wrote A New Kind of Science, I put a huge amount of effort into the historical notes at the back.
And people, you know, it's like, oh, you didn't cite my thing.
Well, read the frigging note.
It's like, did I get it right?
Yes, actually, it's a very, very good portrayal of what actually happened.
And I think that's, for me, a much more useful thing than, like, boom, I copied this citation from some database that had it wrong anyway.
And that's, you know, part of the story. I mean, to me, when you do a piece of science, there's the doing of the science, there's the kind of explaining of the science, and there's the contextualizing of the science.
For me, kind of the effort of exposition is critical to my process in doing science.
I mean, the fact that I'm going to write something, talk about something, is very important to me
actually understanding what I'm talking about, so to speak.
And when I write expositions of things, I try and write expositions that I intend anybody to be able to
understand. And that is a, you know, that's a big sort of constraint on what one does, because if
one doesn't know what one's talking about, it's really hard to explain it in a way that anybody
has a chance to understand. And so that, you know, that for me has been an important constraint in
my efforts to do science is, can I explain it? And people sometimes think it's more impressive
if you explain the science in this very elaborate technical way. That's not my point of view.
It's more impressive if you can grind it down to be able to explain it in a way that anybody who puts the effort in can actually understand. And it forces you to understand what you're
talking about much more clearly. And it prevents the possibility of you just sort of floating over
the formalism and, you know, completely missing the point. But beyond the sort of doing of science, there's another question: who should be doing science? I mean, I do science as a hobby, and I do it because I discover
interesting things, and I think that's fun. If I wasn't discovering interesting things, I just
wouldn't do it. It's not what I do for a living. You know, I run a tech company for a living.
You know, earlier in my life I did science for a living, so to speak, but it's been, what, 40 years or something since I did science for a living.
So before we get to who should do science, we started this conversation about what is good
science.
And I wanted to understand your position, not only your views on science, but your position
in the history of science and you took us through the past 50 years up until even this
morning.
So that's great.
What's missing there is even a definition of what is science.
I opened with what is good science, and then I should have said, well, what is science
to begin with?
And then we can also contrast that with what is bad science.
Fair enough. So first, what is science, and then we can understand what is good science. To me, I think science has traditionally been about taking the world
as it is, the natural world, for example, and somehow finding a way to produce a human narrative
about what's going on there. So in other words, science is this attempt to bridge from what actually
happens in the world to something that we can understand about what happens in the world.
The act of doing science is that effort to find this thing, which is kind of the explanation that fits in a human mind, so to speak, for what's going on in the world.
That's been the traditional view of science.
Now, there are things that call themselves sciences, let's say computer science, that really aren't about that. That's really a different kind of thing. Computer science, if it were a science like that, would be what I now call ruliology: the study of simple rules and what they do, the kind of study of computation in the wild, so to speak. So it's sort of a misnomer of a science, but that's the tradition of how it's called. But so, you know, for me, science is: what is it that we humans can understand that relates to what actually happens out there in the world,
so to speak. Now, you know, I called my big book A New Kind of Science for a reason, because I
saw it as being a different twist on that idea of what science is.
because when you have these simple rules
and you can only know what they do
by just running them
you have a slightly different case
you have something where you can understand
the essence of what's happening,
the primitives, but you cannot understand everything; you can only have kind of a meta-understanding of the whole arc of what happens.
You can't expect what people
have been hoping for
from, for example, the physical sciences
where you say, you know,
now I'm going to wrap my arms around the whole thing.
I'm going to be able to say everything
about everything
that's going to go on there. So it's a different kind of science, hence the title of
the book, so to speak. But, so, you know, that's my view of sort of what science is. I would say
that good science, at least science that I think is good, is science that has some sort of foundational connections. Maybe I shouldn't say good science; I should say high-leverage science: science that we can be fairly certain is going to have importance in the future, so to speak. You know, when you're at some tentacle, some detail of some detail, maybe that detail of the detail will open up some crack that will actually show something much more foundational, but much of the time that won't happen.
And I think the thing that, to me, makes science sort of high-leverage science is science where the thing you're explaining,
the thing you're talking about is somehow very simple,
very, very clean, very much the kind of thing
that you can imagine will show up over and over and over again,
not something where you built this whole long description
that went three pages long to say what you're studying.
It's like, this is this very simple thing,
and now there's a lot that comes out of that simple thing,
but it is sort of based on this kind of foundational primitive.
Now, if we talk about good science versus not-very-good science, what would be my criteria? I mean, I would say that science that nobody can understand isn't very good science, since the point of science is to have a narrative that we humans can understand. If you are producing something that nobody understands, or that nobody has the chance to understand, so to speak, that's not going to be a good thing. I think there's also a lot of science that gets done that I would say is not what it's advertised to be, so to speak. This happens both in theory and in experiment. What do you mean? Well, science is hard. And, you know, it's like,
did it really work that way or did you fudge something in the middle, so to speak. And, you know,
there's a certain rate of fudging in the middle.
I don't think we know what the rate is.
In some fields, it's probably very high.
And it's often not even like I'm nefariously fudging it in the middle.
It's just, I knew it was going to come out this way.
So the mouse that didn't do that, I'm going to, you know,
that mouse must have been, you know, under the weather that day.
So we'll ignore that mouse type thing.
It's not, you know, nefariously ignoring the mouse. It's just ignoring the mouse because we're sure it isn't the important mouse, type thing. And, you know, when I was doing particle physics back in
the late 70s, a formative experience for me was a calculation I did from QCD about some particular particle interaction: charm particle production in proton-proton collisions. Okay, so I had worked out that it would happen at this rate. There was an experiment that said no such thing was observed
at a rate, I don't remember what it was, five times below what I said it should happen at.
And so if you operate by the official scientific method, you say: well, then my theory must be wrong.
Well, I didn't think the theory was likely to be wrong because it was based on pretty foundational things.
And, you know, I wrote a paper with a couple of other people where, you know, half the paper was the calculation. The other half of the paper was: well, there's this experiment,
and how could our calculation possibly be wrong?
Well, it turns out, as you might guess from me telling this story,
that, you know, the experiment was wrong.
And, you know, that was for me an important kind of formative realization.
Now, you know, do I blame the experimentalists for the fact that it was wrong?
No, experiments are hard.
And, you know, they had a certain set of ideas about how it would work out.
And, you know, those were not satisfied, and they missed it.
Just like in doing computer experiments,
If you don't measure the right thing, you might miss what you're looking for.
Now, just a moment. How does that jibe with earlier, when you were talking about the experiments, where you have to let the chips fall where they may and accept it?
Fair point. I mean, you have to do a good experiment, and that's not a trivial thing.
I mean, in other words, if you do a bad experiment, you'll come to the wrong conclusions.
And one of the things that I suppose I've gotten so used to in doing computer experiments is, you know, how do you make a very clean experiment?
This is the typical problem.
The typical problem with experiments is: you do the experiment and you get a result, and there was some effect where you didn't know it mattered that the experiment was done, you know, not at sea level or something.
But that was the critical thing.
You just didn't know that.
And so what you can do with computer experiments, which is an awful lot easier than with physical experiments, is whittle it down to the point where you're doing a very, very minimal experiment, where there's no "oh, there's some complicated thing, we don't know where it came from."
I mean, back in the 80s, when people were working on some, not all, but some of the artificial life stuff, that was very much bitten by experiments that were just unbelievably complicated.
They were like, well, I'm going to make this model of a fish, and it's going to have
100 parameters in it.
Well, then, you know, you can conclude almost nothing from such an experiment.
And you say, well, you know, the fish wiggles in this way, but, you know, it could do anything
with 100 parameters.
And so I would say perhaps the right thing to say is that the most common cause of error in experiments, I suspect, is prejudice about how it's going to come out, and then muddiness in the experiment. When the experiment is whittled down enough, you can't kind of hide; you can't distort the experiment by the prejudice, so to speak, because it is so simple.
You can just see this is what goes in, this is what comes out.
There's nothing, you know, behind the curtain, so to speak.
But, you know, with physical experiments, people who do them have it much worse than people like me who do computer experiments, because it's really hard to make sure you're not having any other effect you don't understand, et cetera, et cetera.
I mean, you know, the particular mistake that was made in the experiment I mentioned before was: they were looking for tracks of particles in some emulsion stack, and the particle tracks were a bit shorter than they were looking for, because they just didn't know how those kinds of particles worked. Well, you know,
that's something, I suppose, there are analogous mistakes one could make in a computer
experiment, but it's a lot easier to not make those mistakes in a computer experiment,
particularly if you do the visualization well, and you're really kind of seeing every bit,
so to speak, at least at some level.
Visualization is something I want to get back to, so I'm glad you brought it up, because you've brought it up three or four times now as something extremely important. So can you talk about how visualization can be done mediocrely, and then how it's done well? Let's imagine you have several particles and you want to visualize them. Are you just talking about visualizing them as spheres instead of 2D blocks? Or are you talking about, well, we should be able to rotate them? What are you talking about?
No the most common mistake is
that there's a lot going on
and you choose
to only look at some small slice
of what's going on. So, for example, classic example: people studying snowflake growth, okay? They said, well, the thing we're really going to concentrate on is the growth rate of the snowflake, okay, how fast the snowflake expands, right? So they had worked out their growth rate, and they'd carefully calibrated it, and they found out it was correct for snowflakes and so on. But they never bothered to make a picture of what the actual snowflakes their model was growing would look like. Okay. Okay. All right.
So, you know, that's an example of kind of a visualization-type mistake.
So the trick is, can you make a representation of things that is as faithful as possible,
has as many of the details as possible, but yet is comprehensible to our visual system?
And that's easier in some cases than others.
Cellular automata are a particularly easy case. They're particularly suitable for our visual system, at least the one-dimensional ones that I've always studied. You know, you just have this line of cells, and they go down the page, and you get this picture. Boom, there it is.
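A minimal sketch in Python of exactly this kind of down-the-page, space-time picture, here for Rule 30 grown from a single black cell (numpy and matplotlib are assumed as the rendering tools):

```python
import numpy as np
import matplotlib.pyplot as plt

# Each row is one time step of the 1D cellular automaton, drawn down the page.
steps, width = 100, 201
row = np.zeros(width, dtype=int)
row[width // 2] = 1
history = [row]
for _ in range(steps - 1):
    l, c, r = np.roll(row, 1), row, np.roll(row, -1)
    row = (30 >> (4 * l + 2 * c + r)) & 1   # apply Rule 30 to every cell at once
    history.append(row)
plt.imshow(history, cmap="binary", interpolation="nearest")
plt.axis("off")
plt.show()
```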
Now, even there with cellular automata, what had been done before my efforts was to look at two-dimensional cellular automata, where you have
kind of a video of what's going on. When you have that video, it is really hard to tell what's
happening. Some things like the Game of Life cellular automata, people studied that at some
length, and you can see these gliders moving across the screen, and they look very cool, and you can
see glider guns doing their thing, and all that kind of thing. But when you tip it on its side,
and you look at the kind of space-time picture of what's happening,
it becomes much clearer what's going on.
And that's a case where our visual system, yes, we can see movies,
but we don't get sort of in one gulp the whole story of what happened.
And so that's another case where the more faithful visualization helps, if you can do it. It's not so trivial to do,
because it's kind of a three-dimensional thing
and you can't see through all the layers
and you have to figure out how you're going to deal with that.
And it's complicated.
For the things that I've studied, for example, in our models of physics, we're dealing with hypergraphs. Hypergraphs are not, on their face, trivial things to visualize. And in fact,
one of the things that made the physics project possible was that a decade earlier,
in Wolfram Language, we had developed good visualization techniques for graphs, for networks,
which was a completely independent effort that had nothing directly to do with my sort of project
in physics, but it was something we did for other reasons. And then,
that was critically useful in doing the physics project. I mean, back in the 90s, when I was
doing graph layout, I actually found a young woman who was spectacularly good at laying out a graph on a piece of paper so as to have, you know, the lines not cross and so on. Later on, she became a distinguished knitwear designer. So I don't know what was cause and what was effect, but it was kind of a unique skill; most people can't do it. Give them a bunch of nodes in a graph and say, untangle this thing, and it's really hard to do. I can't do it at all. But, you know, we had found algorithms for doing that, which I used a lot in doing things with the physics project, because it was a sort of pre-existing thing.
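For a sense of what such layout algorithms do, here is a minimal sketch using the open-source networkx library's force-directed layout; this is an illustrative stand-in, not the Wolfram Language algorithms being described:

```python
import networkx as nx
import matplotlib.pyplot as plt

# A force-directed ("spring") layout untangles a graph automatically:
# edges pull like springs, nodes repel, and the iteration settles into
# a low-energy, readable embedding.
g = nx.random_regular_graph(3, 40, seed=1)
pos = nx.spring_layout(g, seed=1)
nx.draw(g, pos, node_size=30)
plt.show()
```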
But it's still more difficult to visualize what's happening in the things I've been doing very recently, in the last few weeks, on lambda calculus. That's an area where the obvious visualizations are just horrendous; our visual system can't figure out what the heck is going on. But it's been possible; I've found some reasonable ways to do it, which are very helpful in
getting intuition about what's happening. But that's our highest-bandwidth way of getting data into our brains: you know, the 10 megapixels of stuff that we get through our eyes. And so that's the best. And the question is,
can you make a sort of faithful representation of what's going on underneath that you can get into your brain, so that your brain has the chance to get intuition about what's happening or to notice anomalies? I mean, you know, somehow the story of my life in science devolves every so often, like it did just a few days ago, into a very large number of little pictures on the screen, and going through screen after screen of these things looking for ones that are interesting. It's kind of a very natural-history-like activity.
It's kind of like, you know, when do I see the flightless bird?
or something like this.
Yeah.
And, you know, why do I do that?
Obviously, I've used machine learning techniques
to prune these things, et cetera, et cetera, et cetera.
But in the end, you know, I'm looking for the unexpected.
And, you know, the unexpected for me
is what my brain registers as unexpected, and that's the thing I'm interested in, so to speak.
Okay, two questions here.
One, if you're using machine learning to sift through,
I remember there's a talk from Freeman Dyson.
He was saying that when the Higgs particle was discovered, he's happy that the Higgs was discovered,
but he's not happy with how it was discovered because there was so much filtering out of data.
And he says you want to be looking for the anomalies.
I'm happy that the particle is finally discovered after many years of effort.
But I'm unhappy with the way the particle was discovered.
The Large Hadron Collider is not a good machine for making discoveries.
The Higgs particle was only discovered with this machine because we told the machine
what we expected it to discover.
It is an unfortunate deficiency of the Hadron Collider that it cannot make unexpected discoveries.
Big steps ahead in science
usually come from unexpected discoveries.
Do you have reservations
about how some of these filtering techniques
can be done?
Absolutely. You get it wrong all the time.
The less filtering you do, the better. That's why I end up, you know, looking at arrays of pictures,
you know, screen after screen of arrays of pictures.
It used to be on paper.
But, you know, I would print these things out and go through sheafs of paper looking for things.
Yeah, I mean, there are things that are fairly safe to do, although they bite you from time to time. Like I was just mentioning earlier, this very long-lived lambda creature that I found: the automated techniques that I had missed it. And I noticed something was wrong by looking at a visualization of what it was doing.
Okay, then the other question: approximately an hour and a half ago or so, we were talking about bulk orchestration theory and natural selection. So, BOT. Okay, you made an acronym. That's a new acronym, the BOT theory. And the question is whether it's critical to how bots themselves work.
Let's skip that.
So what I want to know is, you mentioned, if I recall correctly, that there was a recent visualization you did in order to make it easier to see the connection to biology.
Not quite. That was related to this. I'm doing multiple projects right now, so that was about a different project, which actually happens to have some relevance to biology, but that relevance is more related to the origin of life, and it's a slightly more circuitous route. So, a different kind of thing. But, you know, let's end the conversation on the many people who email you. They email you their theory, their theory of everything. They'll say, I have a theory. You have a recent blog post
about this, actually. I have several quotes from that. We can get to that, if you like, but
how can someone, many of the people who watch this show, many people who are fans of yours,
many people who watch Sean Carroll's show or any science show at all, they want to contribute to
science. And they may not have the tools to contribute to science. So they use LLMs, generally speaking.
Or they just don't do anything, but they have the want. How can they productively contribute to
science? That's an interesting question. So, okay, the first point is that there are different areas of science, right? And at different times, people with different
levels of training and expertise have been able to contribute in different ways. Like, there
was a time in natural history when you could go and just find beetles and so on. And that was
a contribution to science because, you know, every beetle you found was something which eventually
there would be some systematic thing that came out from looking at that. You know, similarly
in astronomy, you find a comet. You know,
you find some astronomical phenomenon, you know, that's going to become more difficult to do as an
amateur because there are systematic, you know, high-precision telescopes and so on that are doing these
things. You're not going to find another continent? Not anymore. Not anymore. And, you know, there have been times when, not so much for amateurs, but in chemistry, for example, there was a thing where you just study another compound, and you just keep on doing that.
And in the end, you build up this kind of collection of knowledge that somebody is going to pick up. You know, somebody studied lithium hydroxide back in the day for no particularly good reason, and then somebody at NASA realizes that's a way to scrub carbon dioxide, and it gets used in the Apollo spacecraft or whatever it is. So I think there's this thing where there are things that you can kind of accumulate that are maybe not ends in themselves;
they don't require sort of integrating a lot of things to be able to make progress.
There are areas that are more difficult.
So, for example, right now physics, as it has traditionally been done, is a more difficult
area to contribute to.
Because back, I would say, in the 1700s, not so difficult.
But now there's a pretty tall tower of stuff that's known.
I mean, stuff from mathematical physics and so on that's not.
known, and if you say, well, I'm going to have a theory of how space time works, if you don't
know what's already known about space time, which is couched in quite sophisticated mathematical
terms, and not capriciously; it's just that, you know, our everyday human experience
doesn't happen to extend to how space time curves around a black hole. That's not everyday
intuition. And so it's inevitable that it's going to be couched in
terms that are not accessible from just having everyday intuition. And there are fields where
not so much of that has happened. Physics is one where there's a pretty tall tower
of things that have been figured out that get you to the sort of the description of physics
as we know it today. Now, it turns out that, you know, in the things that we've been able
to do with our model of physics, one's a little bit closer to the ground again, in some aspects of it. Because the study of, you know, hypergraph rewriting and so on, that's something where, you know, pretty much anybody can understand the ideas of hypergraph rewriting. It doesn't require that you know a whole bunch of stuff about, I don't know, you know,
sophisticated things about partial differential equations and, you know, function spaces and all
this kind of thing, which are fairly complicated abstract concepts. It's something where, at least
at the simplest level, it's like: you've got this thing you can run on your computer, and
you can see what it does. Now, you know, connecting that to what's known in physics, that's
more challenging. And knowing kind of how that relates to, I don't know, some resulting
quantum field theory or something is more challenging. The resulting quantum field theory is
not a waste of time. The resulting quantum field theory is our best condensation of what we know
about how the universe works. It's not something where it's like forget all that stuff. We can just
go back to kind of the everyday intuition about physics. That was a good thing a few hundred
years ago. It isn't a good thing anymore because we already learned a bunch of stuff. We
already figured out a bunch of things. And, you know, if you say, well, just throw away all those
things, let's start from scratch, then, you know, you've got to recapitulate those few hundred years of discoveries. And that's a tall hill to climb, so to speak. But I think, you know, one of the areas where there's sort of a wonderful opportunity
for people to contribute to science that is sort of high-leverage science is in this field that I call
ruliology, which is kind of studying simple rules and seeing what they do. And whether it's cellular automata or Turing machines, or lambda calculus, or hypergraph rewriting, these are things where, you know,
you run it on your computer, okay, I built a bunch of tools for doing this, which, you know,
make it really easy. But, you know, you do this. You're well-organized. You kind of, you won't immediately
have intuition about how these things work. At least I never have. You know, it takes actually doing
it for a while to have the intuition. And people usually don't at the beginning, they're just like,
oh, it'll never do anything interesting. We'll just run it and see what it does. And then, you know,
if you're well-organized and kind of can develop intuition, you will eventually get to the point
where you can say, okay, I see how this works, I can build this thing where I can add some definite
piece to knowledge. You know, I can, like, for example,
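For a concrete flavor of what "run it on your computer" means, here is a minimal sketch in ordinary Python (not the Wolfram tools mentioned above) of an elementary cellular automaton, using the standard rule-number encoding, run from a single black cell:

```python
# Run an elementary cellular automaton (standard Wolfram rule numbering):
# each new cell is bit (4*left + 2*center + 1*right) of the 8-bit rule number.
def run_elementary_ca(rule, steps):
    width = 2 * steps + 1
    row = [0] * width
    row[steps] = 1                       # single "on" cell in the middle
    rows = [row]
    for _ in range(steps):
        prev = rows[-1]
        rows.append([
            (rule >> ((prev[i - 1] if i else 0) * 4
                      + prev[i] * 2
                      + (prev[i + 1] if i < width - 1 else 0))) & 1
            for i in range(width)
        ])
    return rows

for r in run_elementary_ca(30, 20):      # rule 30: simple rule, complex behavior
    print("".join("#" if c else " " for c in r))
```

Rule 30 is the classic case where a trivially simple rule produces output that looks random.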
For example, we have this summer school every year for grown-ups,
and we have another summer research program for high school students.
So this year, for the high school students,
we had like, I don't know, 80 students or something,
and I'm the one who gets to figure out projects for at least almost all of them.
Oh, you define them, or do they come to you and you sift through them?
Usually, during the year, I kind of accumulate a list of ones I'd really like to see done.
And so this year I gave several students projects which I've been wanting to do for decades
and which are just studying particular kinds of simple systems.
One was multiway register machines.
Another one this year was games between programs.
And there were several others this year.
But anyway, these are things where, and I can say these are very bright high school kids, and they're using our tools and so on,
in two weeks they were able to make quite nice progress and add something.
I'm sure those things will turn into academic papers and things like this.
And they were able to do that starting from just being a bright high school student, so to speak,
not knowing, you know, eight years of mathematical physics type things.
They don't know group theory. They don't know differential equations, or maybe some do; they'd probably know basic calculus, but they wouldn't need to.
I mean, this is just: be organized, be careful, have the motivation, and, a little bit, think foundationally enough that you're drilling down to say, what's the obvious experiment to do?
Don't invent this incredibly elaborate experiment where the conclusions won't tell you anything.
Try to have the simplest experiment: what is the simplest version of this that we can look at, and so on?
And that, you know, is really neat, because it means there's an area, ruliology, that is a vast area. It's the whole computational universe. If you say, well, what's a thing that's never been studied before? I get out my computer, pick a random number, and I'm going to be able to give you something that I guarantee has never been studied before. And, you know, it'll have a lot of richness to it. If you study that randomly generated thing, the particulars of it may or may not show up as important in the future, but certainly building up that body of knowledge about things like that is very high-leverage science. It's something you can be sure is a solid thing that people will be able to build on.
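That claim is easy to make concrete (a sketch under the standard numbering of k-color, range-r cellular automaton rules): the rule spaces are so vast that a random draw almost certainly lands on a rule no one has ever examined.

```python
import random

# How big is the space of k-color, range-r cellular automaton rules?
k, r = 3, 1                        # 3 colors, nearest-neighbor rules
cases = k ** (2 * r + 1)           # 27 distinct neighborhoods
num_rules = k ** cases             # 3**27, about 7.6 trillion possible rules

rule_number = random.randrange(num_rules)                  # almost surely unstudied
table = [(rule_number // k**i) % k for i in range(cases)]  # base-k rule table
print(rule_number, table)
```

With k = 3 and r = 1 there are already about 7.6 trillion rules, so exhaustive human inspection is hopeless and random sampling really does land on unexplored territory.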
One of the things I find striking, and a bit encouraging, I suppose, is, you know, you think about something like the Platonic solids: the icosahedron,
the dodecahedron, and so on. You have an object, say a wooden dodecahedron or something,
and you go back and find a dodecahedron from ancient Egypt.
It looks exactly the same as the dodecahedron we have today.
This is a timeless object.
The dodecahedron has been something worth talking about
from the time of ancient Egypt to today.
And so similarly, these things in ruliology have the same character.
They're very abstract, precise, simple, and they're sort of foundational.
And it's something where, you know, this particular rule, it's not going to be the case that
somebody's going to say, oh, we learned more about the immune system.
So that model of the immune system is irrelevant now.
You know, the things you measured about this are irrelevant.
That's not going to happen because this is, you know, we're at the foundation, so to speak.
This is an ultimate abstract thing.
And so anything you build there is a permanent thing.
And, you know, there were many naturalists
before Darwin who went and collected lots of critters around the world.
Darwin realized, having collected lots of critters himself, that there was a bigger picture
that he could build. It was still useful for people to have collected all those critters,
and Darwin and everybody else doing evolutionary biology used a bunch of the information
that had been collected by those people. It's a different thing to integrate
all those things, to have the sort of philosophical integration to be able to come up with the bigger
theory. But that's a much more difficult thing to do than to add these kinds of solid bricks
to science. And I would say, as a person who's done lots of ruliology in my time, that if you have a certain turn of mind, it's a lot of fun, because you just keep on finding stuff. You keep on discovering things. I'm sure back in the day, when the whole planet hadn't yet been explored, you would go to someplace in the center of Africa, and it's like, oh my gosh, there's a tree that does this or this. And this is exactly the same thing. Every day you see lots of things, like in this stuff about lambda calculus I've just been doing. There's all kinds of weird stuff. I've never seen it before; I'm sure nobody's ever seen it before. And it's remarkable, and it's interesting. It's an explorational kind of thing: you get to see for the first time something nobody's ever seen before, and something that you kind of know is going to be a permanent thing, a thing that is never going to change. It's never going to be, oh, it doesn't work that way anymore. It is what it is, so to speak.
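For a flavor of what "running" lambda-calculus terms looks like (a minimal sketch, not the system being described here, and assuming all bound-variable names are distinct so no alpha-renaming is needed):

```python
# Terms as nested tuples: ('var', name) | ('lam', name, body) | ('app', f, x).
# Assumes distinct bound-variable names, so substitution needs no renaming.

def subst(t, name, val):
    """Replace free occurrences of `name` in `t` with `val`."""
    tag = t[0]
    if tag == 'var':
        return val if t[1] == name else t
    if tag == 'lam':
        return t if t[1] == name else ('lam', t[1], subst(t[2], name, val))
    return ('app', subst(t[1], name, val), subst(t[2], name, val))

def step(t):
    """One normal-order (leftmost-outermost) beta step, or None at normal form."""
    tag = t[0]
    if tag == 'app':
        f, x = t[1], t[2]
        if f[0] == 'lam':                 # leftmost-outermost redex
            return subst(f[2], f[1], x)
        s = step(f)
        if s is not None:
            return ('app', s, x)
        s = step(x)
        if s is not None:
            return ('app', f, s)
        return None
    if tag == 'lam':
        s = step(t[2])
        return None if s is None else ('lam', t[1], s)
    return None

# Church numeral two applied to a renamed copy of itself: 2^2 = 4.
two  = ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('app', ('var', 'f'), ('var', 'x')))))
two2 = ('lam', 'g', ('lam', 'y', ('app', ('var', 'g'), ('app', ('var', 'g'), ('var', 'y')))))
t, steps = ('app', two, two2), 0
while (nxt := step(t)) is not None:
    t, steps = nxt, steps + 1
print(steps, "beta steps to normal form:", t)
```

Even at this toy scale, watching how many steps different terms take, and which ones never halt, is exactly the kind of open-ended exploration being described.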
And I think that ruliology is a great example of a place where there has been a
fair amount of, quote, amateur ruliology done over the years. It's not been
as well organized as it should be, and I fault myself for that in large measure. I mean, back in
the 80s, I got a bunch of people interested in this, and a bunch of people, both professional
scientists and amateurs, started studying these kinds of things. And I started a journal that
collected some of these things, Complex Systems. But I would say that the rhythm of how
to present ruliology and so on, I didn't really develop as well as I should have done,
and I'm now hoping to do that. And it's a question of, you know, for example, back in the day,
when people started having academic papers back in the 1600s, if you read those papers, they read like today's blog posts. They're much more anecdotal: you know, I went to the top of this mountain, I saw this and this and this. They're more personal. They're actually, I would say, better communication than what one gets in the very cold academic paper of today. Math papers are one of my favorite non-favorite examples, where it just starts: let G be a group.
It's like, why are we looking at this group? Who knows? It's as if it's beneath us, or not appropriate, or not professional enough, to describe why we're doing this.
But back in the 1600s, when academic papers originated, they were like the blog posts of today, or at least my blog posts today, where there's both the content and a certain amount of the wrapping of why we're doing this.
You have the purpose in mind and you're conveying it, and you're showing how what you're doing is connected to that.
Yeah, right. And it's sort of telling a story, more so than just saying fact, fact, fact, fact, and, you know, I'm just filing the facts, so to speak.
I think with ruliology, one of the things that's interesting is that it's a place where you discover something interesting and you want to just say, this is my discovery. You want a way to accumulate lots and lots of discoveries without always feeling like you have to wrap a whole academic story around each one. And the academic system is not well set up for that. In the academic system there's this unit of academic achievement, which is the paper, so to speak. And "here's this particular thing I observed, with these particular characteristics" is not so much the kind of thing one sees there. But it's the kind of thing one should be accumulating lots of in ruliology, and it's something very accessible to well-organized people who want to work cleanly on this as amateurs, so to speak. I think that's a powerful thing, and I'm hoping to have the bandwidth to put together a properly organized ruliological society or something, where we can accumulate this kind of information.
One of the things I do in everything I write about science is that every picture, you can click that picture and you'll get a piece of Wolfram Language code, and, at least if our QA department didn't mess up, it will forever produce the picture that I said it produced, so to speak. And I mention the QA department because it isn't actually trivial: you make some piece of code, and you've got to make sure it keeps on working. We've been very good in Wolfram Language at maintaining compatibility for the last 38 years, but if I use some weird undocumented feature one day, that might change. I don't, so that usually isn't a problem. But I like to have that as a constraint for myself: the things I write should be understandable to anybody who puts the effort in to understand them.
And reproducible.
Yes, but also understandable not only to humans but also to computers, so to speak, so that everything I do, you can immediately reproduce it.
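The same discipline translates outside Wolfram Language too. A minimal sketch of the practice (this assumes nothing about the actual pipeline, and the output filename is made up): the script behind a figure is self-contained and deterministic, so rerunning it always regenerates the identical picture.

```python
# A self-contained, deterministic figure: fixed seed, no external state,
# so rerunning this script regenerates the same picture every time.
import matplotlib
matplotlib.use("Agg")                      # render without a display
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(seed=42)       # fixed seed: identical data each run
data = rng.standard_normal(500).cumsum()

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(data)
ax.set_title("Regenerated identically from this script")
fig.savefig("figure.png", dpi=150)         # hypothetical output filename
```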
That's turned out in practice to be a very powerful thing, because people just take the code and all the visualizations and so on that I've made, and they just go and start from there. They start from that level of the tower, so to speak, rather than having to climb the tower themselves. I think that's a powerful thing, and it's pretty much not done in most of academic science, partly because academics don't tend to package their code in a form that will actually be reproducible and runnable. They do more of that with our technology than anywhere else, but still it's somewhat inadequate.
And also, I think the motivations of the typical academic scientist are different: they're in the game of academia, so to speak, which involves, I'm going to publish my thing, I'm going to publish another thing that leverages the thing I just published, and I'm going to get as many papers as possible, and so on. For me, the calculation is rather different, because I'm trying to do a bunch of different things in my, you know, finite life. I'll write something, and I really don't want to write about the same topic again. It's a write-once type of activity. So as much as possible, I'm like, I'm going to write this thing and, okay, world, here it is. I hope you can do something useful with it, because I'm not going to come back to this particular thing. I do end up building on the things I've done, but not writing an incremental version of the same document again. Maybe it's just me, but I can't bring myself to do that.
I feel the same way.
And it's actually very frustrating. When I do a project, I like to pick all the low-hanging fruit, and I know that any fruit I don't pick the first time around, I'm not going to come back and pick. It's just going to sit there, and it's going to be frustrating to me, because it's kind of like, here's this thing, and I just figured out a little bit more, but I have no place to write that down.
Actually, one thing I've been doing recently: in the NKS book, there were many notes at the back,
and many 100-page things that I'm writing today
are the expansion of three-paragraph notes in the NKS book.
So, for example, the thing that I've been doing the last few weeks
is, in many ways, an expansion of a small note in the NKS book
that will turn into a 100-page document.
And that's sort of okay by my standards,
in the sense that I didn't really work it out in enough detail before,
and now I am; that's not recapitulating something I've already done.
Anyway.
Stephen, it's been wonderful.
Thank you so much for spending almost three hours, two and a half or so.
Oh my God, it's that long? I just start yakking and it keeps on going.
Well, this will be released in approximately two weeks, and maybe by then people will
be wondering, okay, I've heard about ruliology.
I've heard that I can contribute by looking at rule 2,000,002.
But specifically, I want more detail.
So you mentioned there are 40 high school projects;
maybe within two weeks there'll be a blog post out for people who want more, and they can say...
It won't be two weeks. I want to do it, but it won't be two weeks, unfortunately.
Whenever it's done, I can put a link in the description.
I'll update the description, so if you're watching this, whenever you're watching this, check the description.
Or maybe I'll have something on my Substack about how one can use ruliology, in a way Stephen would hopefully approve of.
Sounds good.
Hi there, Curt here.
If you'd like more content from Theories of Everything and the very best listening experience, then be sure to check
out my Substack at curtjaimungal.org. Some of the top perks are that every week you get brand new
episodes ahead of time. You also get bonus written content exclusively for our members. That's
C-U-R-T-J-A-I-M-U-N-G-A-L.org. You can also just search my name and the word Substack on
Google. Since I started that Substack, it somehow already became number two in the science category.
Now, Substack, for those who are unfamiliar, is like a newsletter, one that's beautifully
formatted, with zero spam. This is the best place to follow the content of this channel
that isn't anywhere else. It's not on YouTube, it's not on Patreon. It's exclusive to the
Substack. It's free. There are ways for you to support me on Substack if you want, and you'll get
special bonuses if you do. Several people ask me, like, hey, Curt, you've spoken to so many people
in the fields of theoretical physics, of philosophy, of consciousness. What are your thoughts, man?
Well, while I remain impartial in interviews, this Substack is a way to peer into my present
deliberations on these topics. And it's the perfect way to support me directly.
That's curtjaimungal.org, or search Curt Jaimungal
Substack on Google. Oh, and I've received
several messages, emails, and comments from
several messages, emails, and comments from
professors and researchers saying that they recommend
theories of everything to their students. That's fantastic.
If you're a professor or a lecturer or what have you
and there's a particular standout episode
that students can benefit from or your friends,
please do share. And of course, a huge thank you
to our advertising sponsor, The Economist.
Visit economist.com slash toe, T-O-E,
to get a massive discount on their annual subscription.
I subscribe to The Economist, and you'll love it as well.
TOE is actually the only podcast that they currently partner with,
so it's a huge honor for me, and for you, you're getting an exclusive discount.
That's economist.com slash toe, T-O-E.
And finally, you should know this podcast is on
iTunes, it's on Spotify, it's on all the audio platforms. All you have to do is type in Theories of
Everything and you'll find it. I know my last name is complicated, so maybe you don't want to type
in Jaimungal, but you can type in Theories of Everything and you'll find it. Personally, I gain
from rewatching lectures and podcasts. I also read in the comments that TOE listeners also gain
from replaying, so how about you relisten on one of those platforms: iTunes, Spotify,
Google Podcasts, whatever podcast catcher you use, I'm there with you. Thank you for listening.
