The Origins Podcast with Lawrence Krauss - Stephen Wolfram on Math, Philosophy, & More
Episode Date: February 21, 2022
In this episode of the Origins Podcast, Stephen Wolfram joins Lawrence Krauss for a fascinating conversation around Stephen's upbringing, his education path, Mathematica, and what he's working on now.... They also cover various concepts around symbolic manipulation and the importance of knowing how to type. Stephen Wolfram is the creator of Mathematica, Wolfram|Alpha and the Wolfram Language; the author of A New Kind of Science; the originator of the Wolfram Physics Project; and the founder and CEO of Wolfram Research. Over the course of more than four decades, he has been a pioneer in the development and application of computational thinking—and has been responsible for many discoveries, inventions and innovations in science, technology and business. Get full access to Critical Mass at lawrencekrauss.substack.com/subscribe
Transcript
Hi, I'm Lawrence Krauss and welcome to The Origins Podcast.
This episode is with a fascinating individual, Stephen Wolfram, who's had many different careers.
Stephen began as a young scientist, a very young scientist, largely self-educated, basically doing without many degrees,
and went on to do a PhD at Caltech after educating himself, and I think got it when he was 21,
and he was working with Richard Feynman.
Then he went on to continue to do physics and went to the Institute for Advanced Study, among other places, but decided to branch out. He's always been kind of an
iconoclastic individual and decided that what the world needed was a new way of doing mathematics
on computers. And he created what was one of the first symbolic manipulation programs,
something that allowed you to do not just number crunch with computers, but actually do symbolic
manipulation, do algebra. And Mathematica, the program he created and the company he leads,
became really the prime way that most scientists, most physicists at least now, do complex algebra.
They use Mathematica to do it, as well as much more.
But Stephen didn't rest on his laurels of just doing mathematics.
During that time, he's always been interested in doing research
and following up on ideas of something called cellular automata
to think about new ways of trying to understand fundamental physics.
And he's made great claims about what his new way of doing science, as he talks about it, might do for understanding physics.
He claims, in fact, that he can really reproduce all of fundamental physics with his symbolic manipulation
and these cellular automata ideas. And I wanted to talk to him about that. And we did. We talked about
that. We talked about his early history in physics. We talked about many things, including how important
it is to know how to type. And it was a fascinating conversation that I hope you'll enjoy.
If you're watching this on YouTube, I hope you'll consider subscribing to us on YouTube because
it'll help us, but it'll also help you because you'll learn about new episodes.
If you want to watch this without advertisements, I hope you'll consider subscribing to our podcast on Patreon, which will help support the foundation that runs this podcast, the nonprofit foundation, The Origins Project Foundation.
The funds from Patreon help it continue to exist and do the programs it does.
So I hope you'll consider becoming a member of our community that way.
Either way, no matter how you watch it or listen to it, I hope you enjoy this episode.
And I'm pretty sure you will.
Take care. Well, Stephen, thanks a lot for coming to do this podcast with me. It's been a while since
we've seen each other, but it's always good to see you. Yep. Nice to see you, at least in virtual form.
In virtual form. It must have been, it must be like a decade since we actually met physically.
Yeah, physically. Although it's, from what I understand, we'll get to it, that there's really no
difference between the virtual and the real, if I can understand some of your work. But we'll get to that.
But it's been at least a decade.
But we go back, we go back, I was trying to think.
We go back 40 years, actually.
Probably.
Yeah.
Yeah.
Well, you were both in the particle physics business.
Yeah, we were both in particle physics business.
I'll go back there.
But I want to go back even further as we delve.
Since it's the origins podcast, I wanted to begin with your origins, which are interesting.
And I learned a little bit more.
I knew some things about you already.
but in the lore of Stephen Wolfram, I've learned some more.
Maybe some of it's true.
One thing that's interested me, so your parents were, I was trying to,
I always like to figure out where people might have gotten the interest in science or things.
Your mother was a philosophy fellow at Oxford, right?
Yeah.
Translated into an American, it would be philosophy professor.
Yeah, yeah.
They didn't call them that.
They call them philosophy dons in Oxford.
Absolutely.
So did she have, she had a PhD in philosophy? Did she have a PhD?
The PhD was in anthropology, because they didn't actually do philosophy PhDs back in those days. Philosophy was thought to be a field in which you couldn't get to be a doctor of philosophy, ironically enough.
Yeah, that's right. It was probably a good idea, actually.
Yeah, that's probably a very good idea.
So it was anthropology. I wonder first whether her interest in philosophy might have been formal logic and that end of philosophy, but perhaps not.
She wrote actually a reasonably well-known textbook on philosophical logic, which is different from formal logic. So she was not
a mathematically oriented person. She was more of a kind of linguistic philosophy kind of person,
and she worked a bunch on those kinds of things. And, I mean, no, my interest in science came from being orthogonal to what my parents did. Actually, two things it came from. It came from
being orthogonal to my parents, and it came from the space program, which was kind of the big
thing in the 1960s when I was growing up, was kind of, you know, I was young, I was interested in
the future. The future seemed like it was things like the space program. Sure. So that's, you know,
got me interested. I mean, having a philosophy professor as a mother has interesting features,
like, you know, it's like you explain something, not that she knew much about science, but I would explain something, particularly to philosophy friends of hers. And they would always say, well, how can you know that? Oh, great. And I actually, there was a, I remember a philosopher of
time that I remember when I was a young lad, probably, I don't know, 10, 11 years old. You know,
I was having, I remember this, this big argument with a woman who was a quite well-known philosopher of time, as it turned out. And it was like, you know, look, I understand relativity.
This is how things have to work, et cetera, et cetera. And yeah, that, and she was like, so one of
the things that came up there was, was I'm like, okay, there's this, you know, time dilation, all that kind of thing.
And she's like, how do you know that a human, you know, a biological thing is going to actually
show time dilation? It's like, okay, you can figure it out for a clock, et cetera, et cetera, et cetera.
And I was like, but it just has to work that way. I didn't really quite know why.
Now I think I do finally know why, but that's, that's 50 years later or something.
But I was kind of, and then later on when I was like thinking about that,
and I was learning about GPS satellites and things.
I was thinking, gosh, we finally have a way to actually see time dilation.
Let's use the GPS satellites.
Yeah.
Whoops.
They back correct for time dilation.
That's the whole point.
Yeah, if they didn't, we wouldn't be able to get to the nearest theater or anything else.
Yeah, yeah, right.
It's kind of amazing.
They do obey, you know, time dilation. And the question is, the rat or the human going in the spaceship,
why should they any more obey time dilation than the GPS satellite?
So that's an example of my kind of, I felt a bit silly, you know, 40 years after that conversation,
realizing that the insistent young, you know, science-oriented 10 or 11-year-old was like,
no, no, that's how it has to work.
And actually, it's a little bit more subtle than that.
Yeah, well, it is.
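The GPS correction alluded to here is easy to estimate. Below is a minimal back-of-the-envelope sketch, assuming rough textbook values for a GPS orbit of about 26,600 km radius (nothing here comes from the conversation itself): the satellite clock runs slow because of its orbital speed and fast because it sits higher in Earth's gravitational potential, and the net offset is a few tens of microseconds per day, which the system corrects for.

```python
# Rough estimate of the daily clock offset of a GPS satellite relative to a ground clock.
# Sketch only, with approximate textbook constants; not the actual GPS correction procedure.
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24      # mass of Earth, kg
R_EARTH = 6.371e6       # mean radius of Earth, m
R_ORBIT = 2.66e7        # approximate GPS orbital radius, m
C = 2.998e8             # speed of light, m/s
SECONDS_PER_DAY = 86400

# Circular-orbit speed: v = sqrt(GM / r)
v = math.sqrt(G * M_EARTH / R_ORBIT)

# Special relativity: the moving clock runs slow by roughly v^2 / (2 c^2)
sr_rate = -v**2 / (2 * C**2)

# General relativity: the higher clock runs fast by roughly GM (1/R_earth - 1/R_orbit) / c^2
gr_rate = G * M_EARTH * (1 / R_EARTH - 1 / R_ORBIT) / C**2

print(f"SR effect:  {sr_rate * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"GR effect:  {gr_rate * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"Net offset: {(sr_rate + gr_rate) * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
```

The net comes out to roughly +38 microseconds per day for the satellite clock; left uncorrected, that would translate into positioning errors of around ten kilometers per day, which is why the correction is built in.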
You know, it's funny when you talked about a philosopher of time, which is an amazing concept to me. But then it occurred to me that in some sense, at least reading some of your recent stuff, that's kind of sort of what you've become, in a way, at least part of what you've become, is a philosopher.
Yeah, right.
I'm learning a bit more about what time actually is.
And we'll get to that eventually.
And you'll explain it to me because, yeah, I think I understand, having read your stuff, but I'm sure that understanding could be improved upon.
But it was nice that you had those discussions.
But it is interesting that so she was more linguistic.
And your father, you said,
became a novelist. Neither of them, as you say, were mathematical at all.
No, no, my father was, I mean, mostly a businessman. Yeah. And wrote novels as a kind of a hobby.
And that was, you know, they were, look, I was a first child. I have a younger brother who's
10 years younger than me, but I was effectively an only child and, and probably a strange child at best.
I suspect. But the fact that you really had those conversations at age 11, if it was really those, I mean, that's kind of a useful thing, because if they ask you questions, how do you know that? That kind of promotes at least a kind of scientific thinking.
I mean, how operationally do you know that? And yeah, yeah. That I would always say at the time,
it's like, look, philosophy is just crazy. You know, how can you guys have been debating the same
questions for 2,000 years? This is just a stupid field. I have great sympathy for that. In science,
we make progress, well, yeah, yeah, well, that's what I've said in the past.
and I've gotten a lot of hate mail for that.
Well, I think it's a subtle issue.
And I think one of the things that's been a big surprise to me, actually, in recent times is, from science that I've done, I finally understand a bunch of what philosophers were trying to say, often in terms that they didn't have.
They were trying to describe things in terms that were a thousand years after their time, so to speak.
And so what they say sounds kind of goofy to us today.
But as you actually understand what really seems to be true scientifically, it starts to seem a bit less goofy.
But, you know, I'm kind of one of my early memories or something from sort of the,
what will you do when you're grown up type question?
I probably was about five or something.
And I was at some party with a bunch of adults with, and, you know, I was probably
one kid at the party and there was a bunch of philosophers.
And, you know, some old white-haired philosopher comes over to me, as I would do probably in the current time, and says, you know, the kid's probably more interesting to talk to than a bunch of these middle-aged adults.
Yeah, sure.
So I have this long conversation with this guy.
I forget about what.
And then he's walking away and he kind of mumbles to himself.
I can hear him sort of mumbling, you know,
one day that child will be a philosopher.
It may take a while.
I suppose it was a compliment.
That's very good.
Yeah, I suppose, yeah.
50 years later or something, it's so, more or less.
Maybe that's right.
Well, by the way, you're five years younger than me, I think.
So you would have been a little young for the Apollo landings, but, but, I watched those.
Yeah, I stayed home from school. Oh, you too. I stayed up all night long. I had a little command
center downstairs in the basement. Oh, okay. Well, for me, it was like two in the morning, I think.
The setting foot on the moon was two in the morning British time. Yes, I did watch that.
In fact, I used to keep very close track of that. You know, I was always a producer of lots of written
material. So I still have all of the, all the kind of notes that I took about the precise things
that happened in all those spacecraft, which was, I don't quite know why I was so into it,
but it was kind of a, well, it was a kind of, this is something about the future. Now, thank
goodness, I didn't. Yeah, right. Well, thank goodness I didn't sort of stay concentrated on space, because then I would have been hibernating for 50 years until people actually started taking it seriously again. But, no, I mean, it was,
You know, that was, I started kind of, oh, I guess I got interested around probably age 11 or so in 10, 11, something like that.
I've been sort of in this, I'm going to design spacecraft.
I'm not clear what that meant.
And so then it was like, then I have to learn physics, I guess.
And then I got really interested in physics.
And I started doing things like, I have this artifact that's somewhere on the web from when I was like 12, this kind of sort of concise directory of physics, which is all these sort of physics facts.
I looked at your little scrapbook.
I've been going through it today, you know, delving into it.
It's amazing.
Well, what's kind of funny about that to me as an observer of the human condition is, you know,
you look at that.
It's a bunch of facts about physics and tables of data and things like that.
And it's like, and then I look at WolframAlpha.
And it's like, oh my gosh, I've been doing the same thing all my life.
It's kind of a, and actually when I kind of resurfaced that thing from when I was 12, sort of one of my first instincts was to take some of those numbers, type them into Wolfram|Alpha, and see what was right.
Yeah, no, the connection was not lost on me,
in fact. It's been fascinating for me to look back at some of that, knowing you as I do and see
and see some of that. I'm not surprised, but I didn't realize that you were so prodigious a note-taker and recorder of things. The only person I know who does that is my friend Alan Guth, who basically every night, more or less, records everything that's happened during the day.
Oh, wow. Okay. I didn't know that about him.
Yeah. Well, I mean, I assume he still does. When we worked together, he was always that way. And it was remarkable, and sometimes frustrating, because he'd often be behind, because he was trying to keep track of every single thing.
And, you know, most of what I keep track of, you know, I've recorded probably more sort of personal analytics data on myself than I think anybody else.
All of that is passive. All of it is passive. I mean, if I had to lift a finger to do it, I mean, I lift my fingers and type keystrokes, but they're just passively recorded. It's not, you know, if I actively did it, forget it, not going to happen.
Yeah, right. By the way, just in case people wondered, apparently you've recorded how many keystrokes you've typed in your entire life. I don't know if you have a little thing on the upper left-hand corner of your screen or anything. What's the number?
Actually, it's an interesting idea. I don't have that. I do know how many keystrokes I've typed each day. So, for example, I think yesterday was a 50,000-keystroke day. So that was okay, that was a decent day.
Is 50,000 an average day for you, or a good day?
It's actually higher than average. I was writing some stuff.
And you do write. When you write, it's never a sound bite, let me put it that way.
Well, I know, I know.
I wish I could write shorter.
It was, you know, when I was working on my new kind of science book for a decade,
that book is in a sense, even though it's a big book, 1,200 pages long,
it is sort of a minimal length book in the sense that every page,
it's kind of compressed as much as I can compress it.
I kind of decided that wasn't the right optimization.
And I didn't want to spend another, you know, 10 years as a hermit writing things.
So now when I write stuff, I write, you know, at sort of output speed.
I write it as I think about it.
and I write whatever I'm going to write.
And, you know, I could probably compress it by a factor of four or something,
but I figure it's better to write it at more length.
There's probably more stuff in there anyway.
And then I'm actually going to get it done, rather than saying, well, one year I'll get it done, and never getting it done.
Yeah, it takes, you know, as everyone knows, it takes a lot more time to write something shorter.
You know, there's that famous letter.
If I'd had more time, this letter would be shorter.
Yeah, I forget, one of the British, someone famous, said that, and no doubt someone will let me know who said it.
So I was trying to think where the interest in physics came in, but there was an early interest in math.
Now there's this, you know, you said somewhere, oh, I wasn't very good in arithmetic.
I actually looked at your grades there in one of your things.
They weren't too bad, actually.
I mean, it said, you know, you had trouble with your times tables or something like that, which now, of course, can be done passively too, because of Mathematica, I'll give you that.
But what turned you on to mathematics then? Was it, was it a need?
Nothing.
Nothing.
Because I was, you know, I was interested in physics.
Mathematics was sort of a necessary evil for doing physics.
Yeah, exactly.
So it was what you needed to do physics, so you learned it, basically.
Right.
I mean, you know, and I actually didn't like learning it, and that's why I got computers to do it for me.
And that's, that's kind of the, but, but subsequently, what,
what has happened is that I got interested in kind of what is the essence of things,
what is the sort of ultimate abstractions of things.
And from that, I've gotten very deeply pulled into sort of advanced areas of mathematics
and kind of abstraction in mathematics and so on.
In fact, one of my current projects is finally to understand what mathematics is.
We can maybe come to that.
Okay.
I'll put that at the end there.
But I think the, no, I mean, for me, I was never, you know,
I think the way one gets taught mathematics in school and so on,
at least in British schools of the time,
it was an awful lot of math trickery,
which I was never really into.
It's like,
there's this particular integral,
and you can do this particular one
because there's this cool trick for doing it.
And, you know,
what I learned at some point in like doing integrals,
because I wanted to do them for physics,
was these big industrial machines.
Like you turn an integral into a product of, well, nowadays it would be Meijer G functions, but then it was like polylogarithms and other such things, all these kinds of exotic
functions. And, you know, you'd basically make a completely boring to implement industrial
machine that will grind through all these integrals. And, you know, mostly that was, that was, you know,
set up to be easy to do on a computer. But even by hand, I would do those things. And I think,
you know, back in those days, well, I even did, you know, I sort of made it through my sort of first
year of physics undergraduate type thing. And I think I even came top in the exams, which was,
which was a tribute to the exams more so than to me, I would say. Because it's, I mean, by that
point, I was kind of able to do sort of professional grade physics stuff. And the only question
was, you know, if you could get to the answer, but using completely alien methods, was that okay?
Because I certainly couldn't get to the answer using the, you know, there's this trick for doing
a trick substitution of this and that kind. And it just happens to work for this integral, not my
kind of thing at all.
Interesting that you say that. I'm going to jump ahead, because I can't resist, because your old pal, who I knew as well, Richard Feynman, and you know, I not only knew him, I wrote a book about him, he was famous for developing tricks to do integrals. In fact, it's an essential part, in some sense, of using Feynman diagrams to calculate in particle physics. In order to use them, he had to develop a lot of tricks to do them.
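The "industrial machine" approach to integrals described above, grinding them down to special functions such as Meijer G functions and polylogarithms, is essentially what computer algebra systems still do. As a rough illustration only (a sketch using Python's SymPy rather than SMP or Mathematica, and with outputs that can vary between versions):

```python
# Small illustration of symbolic integration via special functions.
# Sketch only: SymPy, not SMP or Mathematica; exact output forms may differ by version.
import sympy as sp

x = sp.symbols('x', positive=True)

# A dilogarithm-type integral: the antiderivative of log(1 - x)/x is -Li_2(x),
# so the result should come back in terms of polylog(2, x).
print(sp.integrate(sp.log(1 - x) / x, x))

# A definite integral routed explicitly through the Meijer G-function machinery;
# the classical value of this one is -EulerGamma.
print(sp.integrate(sp.exp(-x) * sp.log(x), (x, 0, sp.oo), meijerg=True))
```

The point of this style of algorithm is exactly the one Wolfram makes: it replaces case-by-case tricks with a uniform, admittedly boring, procedure that a machine can grind through.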
Well, yeah, I talked to him a bunch about that, because when I was working on SMP, which was kind of a forerunner of Mathematica and the Wolfram Language and so on, he was, you know, I was talking to him a lot at that time, and he kept on telling me, you should do this,
you should use this method, you should have, and his tricks were a little bit more general
than tricks.
I mean, he had, you know, definite methods of, you know, you would take this general thing
and differentiate with respect to a parameter and this, that, and the other.
I mean, I remember one time I was at his house, and he got out these notes about how to do integrals for Feynman diagrams. He said they were made sometime in the late 50s, I think. Said they'll be more use to you than to me. So he gives them to me. And it's like, well, I'll give them back sometime.
I still have them, of course.
Of course.
Yeah.
I mean, it's kind of, but it's interesting, because he was, in a sense, a very low-tech mathematician, in the sense that those notes, they're really about polylogarithms and things like that. And there's all kinds of fancy theory of polylogarithms now.
That wasn't his thing at all.
No, no.
He was using them to get results.
But also his approach to mathematics was very much the most powerful 19th century mathematics, so to speak.
He really didn't trust 20th century mathematics.
Interesting.
But I think that his, no, he was, I mean, I was always, you know, we tried to do some work together various times.
We worked on quantum computing, actually, back in 1981.
And it was always an interesting experience because he would do these calculations of, you know, spin chains and this and that and the other.
And I'm like, I have no idea why that result is correct.
And I would do some computer calculation and show it to him.
And he says, I have no idea why that result is correct.
And so it was a little bit challenging to communicate there.
But it was, no, he was, I mean, the most impressive thing, as far as I'm concerned, is, you know, he would go through and he would calculate stuff and he would actually get the right answer. Yeah, yeah. Which for me, you know, I can get a computer to get the right answer. For me, by hand, no way. I'm going to get lost in some whatever. But then, you know,
the thing that was always a sort of paradox of Dick Feynman was that, you know, he would, you know, he would
come up with this, this, you know, calculation. Then he would say, that's not impressive.
Everybody can calculate stuff, which isn't true, of course. And he would say, I've got to come up
with some grand intuitive explanation, that's really going to impress people. And I remember,
like with the part-on model and field theory and so on, I remember, you know, he told me he'd worked
out all this stuff about scalar field theories and how these part-ons would work in scalar field
theory. And he just tells people, oh, there are these partons and there, these point particles
and so on. And it's all obvious how it works. And people are like, why, why does it work that way?
And he never told anybody. But he worked out this whole theory behind it. I mean, that's famous about
Feynman, and he'd love to appear to do things by magic. But then when you go back and you look at the notes, there were, you know, 30,000 pages of notes he'd done. He'd worked it all out. And that's, to me, one of the more amazing things about him. And that's why he had this incredible arsenal. It looked like he was pulling things out of thin air, but they weren't pulled out of thin air. They were based on years and years of calculations, which he developed and retained, and he could use that incredible arsenal to make things appear magical. And it was, yeah, when I was writing the book about him, it was interesting to learn those
details. But he certainly loved the magic. He certainly loved to appear. I mean, you know, there's,
I think it's a famous story about him, but, you know, it was one of the things I put at the beginning
of the book. You just reminded me of it. You probably know this story, but when he was a kid, he would make money by fixing radios. Do you know this story? It's just perfect. It's a perfect,
it was, it was Dick Feynman emerging right then. It was back in the days of tubes, before they had, you know, transistors and stuff, or solid-state circuitry. And someone brought this radio in that was making this awful noise. And he was a little kid, right? And he walked around, walked around, pondered, looked up at the sky, pondered. And then, you know, he switched two tubes and the noise stopped. And, you know, he'd known right away, as he said. He'd known immediately what the problem was, but the whole idea was to make it appear as
if this magical insight came to him. And, you know, he was a showman from the beginning. But
the difference between him and many showmen is that it was more than just show. And that's remarkable. But it's interesting to me that you had two very different ways of thinking about, at least, how to do calculations.
Right. But, you know, I think that one thing about Feynman is that, you know,
he genuinely didn't think it was impressive that he could do these calculations.
And, you know, I know it's something I've slowly learned about some things in my life where it's like, that, that's easy.
You know, that's not impressive.
I'm not going to tell people about that.
That's easy.
And it turns out, you know, years later, you realize, actually, you know, most people can't do that at all.
Yeah.
I mean, I just realized that actually about something just a few days ago about something that I kind of had always assumed, well, I'd never really thought about it.
I'd never thought, why is this hard?
And, you know, the, the issue I can describe, it's, it's like when you, you know,
we're jumping years later, but, you know, studying complexity and things like this,
making models of things.
And it's, it's always, I realize there's this kind of process of meta-modeling.
You know, you have a model of something.
You say, what is the underlying essence of what's there in this model?
And that's something, you know, when I worked on these simple programs,
as models of things and so on, a lot of what's going on there is you say,
there's a model, which may be a quite accurate, detailed model, but what's the essence of what's
happening there? And I kind of realized, I mean, that's something that I've sort of naturally been
interested in and found. But then I realized, actually, that's the same thing one does in computational
language design is you have to try to drill down to the essence of things. And for me, that's just
something I naturally like to do and end up doing. And, you know, I'm like, why do other people not do this? Well, I've just spent 40 years doing that kind of thing. But to me, that's sort of an obvious thing one does, and it doesn't happen to be so obvious to other people, because it just hasn't been the path they've taken.
Yes, and I guess that's one
reason why you and I, at least, were doing particle physics, I think, early on. I mean, my interest in particle physics, I mean, to the extent that physics is reductionist, and fundamental physics is reductionist at the level of particle physics.
The idea is the same thing, is that you're looking for the essence.
You're looking for the fundamental laws.
And that was certainly attractive.
That's why I became a particle physicist.
I want to know what were the essential laws, the fundamental laws of nature, the fundamental
rules.
And I suspect we had the same taste of that.
I had the same interest.
I mean, what I've now, you know, now I'm at a machine code way below particle physics,
so to speak.
But, you know, particle physics was good in its time.
I mean, I realized actually one strange thing when I was like 12 years old, I was learning physics and I had this series of books, the Berkeley Physics course books.
And Volume 5 is about statistical mechanics.
And on the cover, it has this series of frames of kind of collisions of, you know, idealized billiard ball type collisions.
It's kind of supposed to be illustrating the second law of thermodynamics, the law of entropy increase and so on and so on and so on.
So for whatever reason, I was really interested in those pictures and in how the second law of thermodynamics works. And that book, I can't quote it after all these
years, but it's one of these books where it says, you know, here's the derivation. Then it says,
oh, by the way, we can run this derivation in reverse time. This point is often puzzling to the
student, so to speak. It's kind of like, but, you know, some of my earliest sort of computer
simulations were an effort to reproduce those pictures. I later, many, many years later,
I talked to the people who made those pictures. Those pictures were a fake. But it didn't matter to me.
Well, they were, you know, the Berkeley series. It's interesting that it got you, that they had the pictures. Those were an amazing series of physics books. They weren't always successful in teaching, because they tended to be too hard for most students, but they're fantastic. My favorite is still probably Purcell's, which is on electricity and magnetism, which is,
right, I remember that one, orange cover, as I recall. Yeah, yeah. I think so, yes. And it, he was an
amazing person. When I was at Harvard, he was there. And he was one of those guys where every time I talked to him, I wanted to grow up and be a physicist, even though I already was one.
Right. Yeah, but you know, what's funny is that that, in a sense, second law of thermodynamics
and, you know, how do you get sort of continuum behavior? How do you get randomness from
these kinds of things? That, you know, I started with that. I then got interested in particle
physics because particle physics was kind of a happening area in 1973, 1974, those kinds of times.
Yeah. And then years later, I kind of got back to...
I was going to say, you're right back to it, in some sense. When I read your stuff now about what you're trying to work on, it's very reminiscent of statistical mechanics, of the arguments about macrostates and microstates.
Yes, it is.
And I was struck by that. And to some extent, you even say that in at least one of the pieces I read by you, that there can be many, you know, and again, I'm jumping way ahead, but just as the particles in this room
can be in many different micro configurations, but there's still the same temperature and pressure.
And if all I'm measuring is a temperature and pressure, I don't worry about all those configurations.
And in some sense, if I get what you're saying, we, who are, quote, computationally bounded, and we'll get to that, can't experience all of the different things that are going on. And the fact that we can't, that it's computationally irreducible, that you can't follow all the particles and that we sort of summarize things, ends up being our view of time.
Anyway, but we'll get to that.
So, but the arguments are, you know, I was trying to, trying to digest things and, and trying
to get to the gist of what you were going to.
But we're jumping way ahead, but that's okay.
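The macrostate/microstate picture being gestured at here is easy to see in a toy version of the billiard-ball pictures mentioned earlier. The following is only an illustrative sketch (not one of Wolfram's models): non-interacting particles start bunched in the left half of a box and evolve under perfectly reversible dynamics, yet the coarse-grained quantity an observer might track, the fraction of particles in the left half, relaxes toward one half. That is the sense in which a computationally bounded observer, who only follows the macrostate, sees irreversible, entropy-increasing behavior.

```python
# Toy illustration of coarse-graining: reversible microscopic dynamics,
# irreversible-looking macroscopic behavior. Free particles bouncing in a 1-D box.
import random

N_PARTICLES = 1000
BOX_LENGTH = 1.0
DT = 0.01
STEPS = 500

random.seed(0)
# Microstate: every particle starts in the left half, with a random velocity.
positions = [random.uniform(0.0, BOX_LENGTH / 2) for _ in range(N_PARTICLES)]
velocities = [random.uniform(-1.0, 1.0) for _ in range(N_PARTICLES)]

def step(positions, velocities, dt):
    """Advance the time-reversible dynamics: free flight plus elastic wall bounces."""
    for i in range(len(positions)):
        positions[i] += velocities[i] * dt
        if positions[i] < 0.0:
            positions[i] = -positions[i]
            velocities[i] = -velocities[i]
        elif positions[i] > BOX_LENGTH:
            positions[i] = 2 * BOX_LENGTH - positions[i]
            velocities[i] = -velocities[i]

for t in range(STEPS):
    step(positions, velocities, DT)
    if t % 100 == 0:
        # Macrostate: the fraction of particles currently in the left half of the box.
        left = sum(1 for p in positions if p < BOX_LENGTH / 2) / N_PARTICLES
        print(f"t = {t * DT:5.2f}   left-half fraction = {left:.3f}")
```

Reversing every velocity at any moment would march the microstate straight back to the ordered starting configuration, which is exactly the time-reversal point the Berkeley book flags as "puzzling to the student."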
Right.
Back to origins.
Okay.
Well, I guess big actions.
So, you know, particle physics, we both had the same interest in particle physics.
I was going to ask why particle physics, but I think we've already answered it.
Your interest in particle physics, the same as mine.
it's the fundamental essence.
The difference was, I should say,
that you knew what particle physics was
when you were 11, 12, or 13.
And I certainly, I was probably 16,
well, I was older before I really knew
what particle physics was.
I knew I was interested in fundamental things,
but I didn't know what the fundamental things were.
And I was, and to give you credit,
I did look, you know, because I was skeptical,
at the books you wrote on various aspects of particle physics
when you were 13 and 14.
and, I mean, the skeptical part of me might say, well, maybe you were just copying things from some text,
but it looked to me like you actually understood it.
And it's very impressive, Stephen.
It was, you know, the understanding of the weak interactions, and in fact quantum field theory. And I was going to ask, how did you learn it?
Now, before I get there, I wanted to say, it seemed to me that you weren't good at arithmetic times tables, but calculus, which is the essential tool that you kind of need for physics, I mean, arithmetic is useful too, but calculus was something that you grasped onto early, if I'm reading this stuff right. And so how did you learn, what led you to learn calculus and then quantum field theory? Let me ask that.
Well, so, I mean, the first thing was the first meta-discovery, I suppose, age, I don't know, 10, 11, something like that, was you can just learn stuff.
by reading books. That's a, you know, that's an important meta discovery. Yeah, it really is. You know,
I went to very good schools and, you know, I learned, I suppose the things, it's sort of interesting that,
you know, at the time I was learning, you know, Latin and Greek and God knows what else. And I was
like, this is always going to be useless to me. And now, you know, right behind my desk, I've got,
you know, Latin dictionary, Greek dictionary, and I'm always trying to make up words for things and so on.
But, you know, I was, but what I learned in school kind of rapidly diverged from what I
was interested in learning, and sort of my kind of hobbyist physics activities, so to speak.
And, you know, I just, I just read books. And I suppose one of the things was that I never did
exercises in any book. I mean, these books had exercises. I never did them. It was always like,
well, I just wonder about this question. And can I address this question? Can I understand
the answer to this question? And it was kind of like, well, you know, let me learn this piece and
that piece and the other piece. And I'm sure, you know, for years, there were lots of places where I'd learned physics and other things where there were holes.
I just never cared about that thing.
And so that was a, you know, in many, you know, many, many years later,
there would be situations where I would like realize I just don't know anything about that.
And because, you know, and if I'd gone through standard kind of schooling,
that would have been, you know, I would have necessarily done a class about that particular thing.
But it was a very efficient way to kind of get to the frontiers.
If you say, this is the frontier I want to get to.
and then you learn all the pieces to get to that frontier.
And just the necessary pieces.
And just the necessary piece, it's not all the pieces, but the necessary.
That's right.
That's right.
I mean, and then, you know, and gradually it sort of fills out and you kind of make use of things.
But, no, actually, I was recently reading my description of quantum fields from when, I don't know, I was 13 or something.
Actually, it wasn't terrible.
It's actually not bad.
I looked at it too.
It wasn't bad at all.
I was impressed.
because I was actually skeptical.
Yeah, yeah, I was skeptical, Stephen.
I mean, I know you and I, you know, I appreciate you, but I still figured out well, you know, I'd heard, you know, these different things you've done when you're younger.
And I thought, okay, well, there are a lot of young kids who think they've done something good.
But they were really, they were really good descriptions.
But you said something, though, that, well, I don't think it came out the way you meant it.
So we'll see.
You didn't do the exercises.
But again, quoting Feynman, who once said, he who can do nothing knows nothing.
you may not have done the exercise in the book,
but you can't learn the physics passively.
You can't just read the textbook
and not at least work some things out.
Because only when you work it out.
So it may not have been the exercises in the book,
but you had to work out things
in order to figure it how to do stuff.
I was terrible at reading the books as well.
All I was doing was this is a thing I want to figure out.
Now let me try and figure this out.
I'll read the parts of the book I need to figure that out.
And then you'll do the work.
Yeah, right.
I mean, do the math.
Right, right, right.
I mean, you know, so what happened is, like, 1974 was when the J/psi particle was discovered. And it was when, you know, the e-plus e-minus annihilation cross section was going up and all these kinds of things that in those days I thought were exciting.
And, you know, and so I started trying to figure out, you know, could I have some theory about this?
And eventually I came up with this theory about strongly interacting electrons.
I read that paper, by the way.
I just looked at it.
It was a lame paper.
Yeah, it was, yeah, it was a lame paper, but not for, but for a young person, it was.
For a 14-year-old, it wasn't quite so lame.
But on a grand scale, it was kind of lame.
Although, you know, as I noticed recently, you know, there I was talking about, you know,
maybe electrons have a size of 10 to the minus 18 meters.
Now I'm saying maybe electrons have a size of 10 to the minus 81 meters.
So it's kind of like, it's, again, nothing, nothing really changes but the details.
Yeah.
No, I think, you know, and it was the process of writing that paper was kind of interesting
because it was like, you know, I'd seen a bunch of these papers.
I'd read a bunch of papers in physics journals.
You know, I used to bicycle to the local university library
and go look up physics papers.
That's great that you could do that.
Right.
Well, it was, you know, in those days, random, you know, 13-year-old or so
and showing up at the university library, nobody, you know,
there was no kind of grand security or anything. I don't think anybody minded. But, you know, so I'd read a bunch of these things and I thought, well, I've got some interesting things to say, so I tried writing something about it.
And I had no, you know, idea about, you know, the academic system and the whole whatever.
And it's just like you look at the journal, you can see, you know, I can plainly see.
This is the address you send it to.
Off you send it.
And, you know, steadily it gets, you know, you get back stupid referee reports.
And then you kind of, which might have caused me to just say, forget it, I give up.
But it was like, no, you know, what, you know, this is just dumb.
I'm going to keep pushing forward, so to speak.
So that was a, and my papers steadily got better, I would say.
Did you follow, in writing the papers, I mean, again, because I've had a lot of students
and you have to sort of train people how to write scientific papers.
But again, one way to do it is look at the scientific papers and try and mimic the style,
at least, so you don't write something that sounds like a high school composition
or you realize that in the scientific paper you don't put everything you know,
And so did you mimic the style of the papers you read or not?
A little bit.
I mean, you know, one of the things I, you know, I guess I was always, you know, in school,
writing was one of my sort of one of the things I didn't do badly.
And I was.
Yeah, your early grades were very good.
I know.
Yeah, right.
It's, you know, and I, so, I mean, what was interesting about scientific papers is they're a very different style from writing other kinds of things.
Sure, sure.
But, you know, once I got kind of the basic idea, it wasn't,
wasn't so hard to do that. I mean, actually, I remember one paper I wrote must have been when I was
18 or 17 or so was about cosmology and particle physics, okay? And you say you don't put
everything you know, well, I gave a pretty good summary of cosmology and particle physics. The big
version of that paper was never published. Yeah, sure. Because the journal said, oh, oh, this is all
well-known, blah, blah, blah. Actually, it wasn't well-known at all. It was a pretty clean description
of how, you know, how things work with particles in the early universe and so on. And I published a
shorter version of that paper.
I've seen that version.
You know, that was much less interesting, really, than the full version.
Because the full version also had had a bunch of things about kind of the intuition behind
expanding universes and, you know, particle interactions and so on, which was gone in the
small paper.
So it was a piece of, you know, one of those pieces of academic feedback that was, well,
hopeless.
Well, yeah, I mean, that's just the way it is, though, in journals.
You have to be terse and you can't put everything in. It's a shame, because it probably means losing many of the insights. Well, you know, I guess you put them in a thesis. It's interesting, you know, that was the era; I read it because
at that time, or shortly around then
at Harvard, I was working on some
similar things. I'd already moved into particle
astrophysics and was thinking about, well,
wasn't that particle cosmology, thinking
about interactions in the early universe and what particles
and how you could predict what particles
might be left over. And there's a lot of
similarities to that, to that early paper
of yours. But it's still
very impressive, yeah, to be interested in it that early. Let me ask you, you said you went to good schools.
What's the Dragon School?
It's an elementary school.
Is it private? Or, I mean, I never know how things are done; public schools are private in England, so I don't know.
Yeah, right.
It's a private elementary school in Oxford.
And it's, since I was, you know, at the time, it was just a good local school in Oxford.
It's become much more famous because it's had a bunch of famous alumni since that time.
But I think, for the year I was there, the ones that are probably listed are me and a chap called Hugh Laurie, who's an actor.
We were friends when we were like eight years old or something.
And I actually was really funny because I had no idea what happened to him.
And I have kept track of what happened to most people I knew when I was in elementary school
and kindergarten and so on.
And with Hugh Laurie, I kind of knew he'd become an actor, but I didn't know he'd become
a particularly famous actor.
And so then I find out he's become a famous actor.
And he's doing this show, and it's called House, the American television show, right?
Great show.
And so, you know, I say, okay, I'd better, you know, watch a little fragment of this. So I switch it on and there he is, and he's got his head cocked to one side. And I'm remembering, that's exactly what he did when he was eight years old.
Really?
He would walk around just like that.
So it's, you know, people don't change at that level.
But no, it was a school.
One thing was interesting about going to school in Oxford and even when I was in kindergarten,
you know, these were, it was an impressive group of kids.
You know, if you know what they've done now, it's kind of like, you know,
they've done all kinds of impressive,
mostly academic types of things.
And that's what you get, I guess.
If you get to go to school with a bunch of professors' kids.
Yeah, yeah.
That was a, you know, for me, as a, for a long time,
I used to say, you know, the smartest group of people I knew were the group of people I knew in kindergarten.
Well, but often, you know, I mean, I tell kids at a later stage that it's your peers, you know, when you're at university or anywhere else. On the whole, you can get a good education anywhere,
a bad education anywhere,
but if you're going to choose,
try and choose a place
where your peers are going to challenge you, at least.
And because that's where you'll do a lot of your learning.
Now, obviously in kindergarten,
it's not necessarily that,
but it is interesting to me to think how different,
I'm always amazed when I talk to some people,
because our backgrounds are so different,
because neither of my parents sort of finished high school.
And I have a friend who's a physicist in Vancouver, well, I'll say his name, Ian Affleck, and he showed me a poem he wrote when he was, I think, in kindergarten.
And he said, when I grow up, I want to be a doctor of philosophy.
And I thought, wow, I had no idea what a doctor of philosophy was, probably for so many years.
So such a different upbringing.
Where did you grow up?
I grew up in Toronto.
I was born in New York, but I grew up in Toronto in Canada.
And, you know, went to public school and high school. There were maybe some private schools in Canada, but I wasn't really aware of them.
And I never even thought of going to the United States to school. These things never occurred to me.
My parents, because they hadn't gone to school, and I think because I had a Jewish background, my mother wanted me to be a doctor and my father a lawyer, and my brother became a lawyer. My mother was for many years not happy that I wasn't a doctor. But anyway, that's enough.
But you became some kind of doctor.
Yeah, I know, but it wasn't the right kind.
I know, I know, but it wasn't the kind she wanted.
But, yeah, no, she's gotten over that.
But Dragon School was one of the few schools you actually graduated from.
Is that right?
In England, they don't have the notion of graduation.
Yes, I went through all the years of it.
And I noticed when I was reading, I don't know whether it was Wikipedia or some other thing.
It always says, and Stephen prematurely left.
So you left Eton early?
Yes.
I mean, that was really what, you know, I went to Eaton, which is sort of this, you know, school that was founded before Columbus came to America, so to speak.
But it was a good school actually when I was there.
I mean, I think it had gone through phases when it was kind of crazy.
But at the time when I was there, it was kind of, you know, I applied there because it sort of had the best scholarships.
I didn't really financially that much need a scholarship.
but it was kind of, it was this very nice, interesting group of people who were the scholarship kids at Eton. And it's kind of a very small group, and, you know, if you look at what happens to them, they either go spectacularly or they crash and burn. Yeah, yeah. But, you know, it was an interesting group. And, you know, I went there, and I was kind of doing the sort of side thing of doing a bunch of physics and
learning about those kinds of things. And I got reasonably proficient at that. And so by the time I was
like 16, it was like in England at the time, you could have this, if you got a scholarship to Oxford or
Cambridge, you could avoid doing the whole standardized government exams, et cetera. So that's what I did.
And that was my kind of, you know, people told me at the time, one of the things, which was sort of
interesting piece of bad advice, was, you know, oh, you shouldn't go to college early. You'll be so
socially disadvantaged, et cetera, et cetera, et cetera, which was kind of a, you know, in the,
what do you know how to do and not know how to do? You know, years later, I've spent a bunch of
time, you know, starting companies and stuff like this. And, you know, I realize looking back
when I was a kid, I was always the organizing kid. And so it was kind of, you know, this,
this, oh, you'll be so disadvantaged and not interacting with people and things, not really quite
right. If you look in more detail, it was kind of a, you know, I was the kid who was organizing the
group of kids to do this and that and the other. And actually, one of my, yeah, it's some,
it's always fun to see some of the people who I sort of organized as kids to do things.
Some of their later professions ended up being kind of what I organized them to do in some
little project of mine, which is kind of nice to see.
Well, that's interesting. I, you know, one tends to think that socially, well, you were a little young. You didn't find it so?
I think it's different in England; it would have been a very different experience in the U.S. than in England.
Because in England, you know, with a tutorial system, you can be more independent-minded
and more independent socially, too.
I mean, in the United States, where it's large classes and everything else,
you really can't, you're not encouraged to be so independent.
And so even if there were social,
age issues having to do with puberty and everything else. I mean, it may not have been so
noticeable in a system where, which was designed for if you could, if you could learn on your own,
and if you could work on your own, you could probably flourish more in an English system. Do you,
do you think that's right? No, I mean, I think that, you know, the U.S. has definitely gone for the
full-service university, you know, everything is at the university. I think that was less so and probably
even still less so in the UK. I mean, look, I, you know, for example, the way that the
Oxford system worked, I went to college in Oxford, was that, you know, if you, you know, all you
actually had to do was the exams at the end of the year.
Yeah.
And, you know, I tried going to some lectures and I didn't find them at all interesting and I stopped
going to them.
And I went to a few graduate lectures that were some of them are pretty good, actually.
And I had kind of made this deal with this group of experimental particle physicists that I would
use their computers and all their ARPANET connections and things, in return for me doing some data analysis for them.
And so that was a pretty good deal.
And it was also the only place that I knew of in the UK
where there was air conditioning was in the computer room.
Ah, so that's great.
That was an added benefit.
Well, we're going to get to computers in just a second.
So it's almost a good segue.
But the interesting thing you say about that, the exam thing, is intriguing to me because, you know, in my own life, I can appreciate that.
Because the only time, I mean, I had deals with professors and undergraduate where I could skip things.
But the one thing I really liked about doing my PhD at MIT, and I don't know if they still do it,
was that, you know, there are all these hoops you have to jump through.
And there are all these graduate diagnostic exams and graduate qualifying exams and et cetera.
And you're supposed to take courses for two years to take them.
And at the time I gambled that I could learn enough on my own.
And I did them in my first term and passed them.
And now, of course, it meant that there were gaps like they probably were with you.
But it meant that I was at a stage where in principle I could have then taken off early.
What I did instead was waste time for a year or so.
But it's a nice thing.
At least you get the understanding that getting through the hoops is one thing, and then being able to do other things is something else; that just passing the exams is useful for passing the exams,
but really what's important is what you do after that in some sense.
Well, yeah, right.
No, I think, I mean, look, it was, it was, you know,
the fact that I did well in the physics exams, as I say, is, you know,
I'm sure in modern times it wouldn't even work because, again,
it's kind of like, like actually one of my favorite, okay, I have to,
this is just one of these crazy stories.
When I was doing this was when I was probably 14 or something like that,
I was supposed to do some standardized government O-level, you know,
standardized exam type thing.
So let's do one on physics.
So I'd done absolutely no preparation for this.
And I had no idea what the, you know, what the syllabus was and so on.
And so I go do this exam.
And one of the questions on the exam is name two differences
between the effect of electric and magnetic fields on electrons.
Okay?
So I'm like, okay, you know, it's, you know, charge times electric field. It's, you know, charge times v cross B.
That's one difference.
What on earth.
And I realized, you know, I knew at the time.
So I wrote down something, you know, I said, well, electrons have magnetic dipole moments,
but they don't have electric dipole moments.
And I said, probably not what you were looking for, more or less.
Well, we, did they grade you on that?
Did you?
I have no idea.
I mean, you know, I got a fine grade on the whole exam, so I don't know.
But I was just, it was one of those cases where, you know, it's, it's not clear that knowing,
knowing the, you know, the big story, so to speak, actually helps you in passing the exams.
And I think in some of these things, probably in more recent times, it would be more like,
well, did you do the course, the way the course was supposed to be done rather than do you know the
material, so to speak.
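For reference, the answer that exam was presumably looking for follows from the Lorentz force law (standard textbook physics, not something spelled out in the conversation):

```latex
\[
  \mathbf{F} \;=\; q\,\mathbf{E} \;+\; q\,\mathbf{v}\times\mathbf{B}
\]
```

Two stock differences: the electric force qE acts even on an electron at rest and points along the field, while the magnetic force qv x B requires motion and is perpendicular to both the velocity and the field; and the magnetic force does no work on the electron, whereas the electric force changes its kinetic energy. Wolfram's dipole-moment observation is true as well, just presumably not what the examiners had in mind.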
Yeah.
No, no, that's an interesting.
Yeah, it's interesting.
At some point, everyone.
Well, at some point, students have to make the transition from doing well in coursework to understanding things and to learning themselves.
I mean, you made that meta-discovery that you can actually learn things by reading books.
But of course, that's a discovery that everyone who then becomes an academic of a graduate student has to learn that ultimately, what's important is what you learn yourself, which means you're not going to class.
And the way you get it, well, books or paper or articles.
to be able to, that's a very different kind of transition in learning.
It's great that you learned it early on.
My experience, both for myself and students,
is that that transition is not so easy for students who have excelled most of the time
by going through classes and then get to a point where they suddenly have to read papers
and learn things in a different way.
It's a very different experience.
Well, right.
And it's a different set of skills.
And unfortunately, one of the issues is, you know, in an education system where, you know,
you have to go through this particular sort of collimator, so to speak, to get to the point where
you're allowed to do the kind of more researchy kind of thing, you know, people will kind of die
on the vine before they get to that point, even if they would have been good at that.
And probably that would have happened to me. I mean, I'm very glad I got through the education
system as quickly as I did, because I'm not sure I would have survived it. Otherwise, I mean,
you know, the other thing in the kind of what one's good at and what one knows one's good at,
It's like, you know, I think, you know, early on I was always like, well, let me figure out what question I want to ask.
And I got, you know, in retrospect, I was pretty good at that, figuring out questions to ask.
And, you know, I always thought that was kind of like, oh, that's obvious.
Everybody does that.
But in fact, they don't.
And in fact, you know, in academic research, sort of successful academic research, to my mind, that tends to be the bigger determinant of real success: when you actually solve the right problem, not the mechanics of solving the problem.
One of the things I've always been disappointed about in a lot of the education system is that that strategy of what to study is absolutely not there. Because that's not what people teach. People are teaching a trade: you're going to do this particular thing. And this idea of, well, actually, there's this general area, what's a question that might be interesting, tends not to be taught.
And again, I always thought that was a kind of triviality, except that it turns out that's the thing I've been doing all my life, so to speak, and it's worked rather well. But only in much later years did it become obvious to me that it wasn't quite as easy a skill as one might think.
Well, it's very important. Anyone who's listened to me talk about these things will have heard me say it, and I've answered this question a lot: to me, the biggest disappointment in education is that we don't get kids asking questions. Questioning is what we should be teaching, how to ask questions, and encouraging that kind of thing, and encouraging not knowing, by the way, including as a teacher, and then finding out how to answer those questions. But learning how to ask questions is vital. And I agree with you, it's one of the biggest shortcomings in education, in my opinion.
But it's a tough business. You know, for years I've done these things with groups of kids and so on, where it'll be like, ask me anything about science or whatever.
Yeah, yeah, yeah.
I started live streaming that as well in the last year or so.
But one of the things that's always striking about that with kids is there'll be some question.
And question number one, it's like, okay, that's standard high school physics or whatever.
I can answer that.
Yeah.
Question number two, it's like, well, I happen to know the answer. That's a frontier research question; I know the person who is the world expert, I happened to run into them recently, and I know the answer to that. And then there'll be another question where it's like, I know nobody knows the answer to that question, because I'd been curious myself, I've looked into it, and nobody knows the answer. And I think you have to know a lot to be able to parse those things out. I mean, it's gotten easier with the web and all that kind of thing, but it's still surprising. Here's a good example. Actually, this was a question one of my kids asked when he was pretty young: when there were dinosaurs, could the earth have had two moons?
Okay.
It's a great question.
Right.
And it's a very non-trivial question.
The answer was probably no.
But I asked a bunch of people
who know about celestial mechanics and so on
over the course of years.
And at the beginning of asking that question,
they were like, we just don't know.
And then more recently it's like, well, there are simulations that have gone far enough to say it'd be hard to have stable orbits with two moons like that. But anyway.
Right, right.
Well, I mean, it depends how big the moons are.
Yeah, of course.
Another one that some kid asked me actually recently is why does the moon not have moons?
I know the answer to that one.
There aren't stable orbits.
Yeah, right.
But in fact, it's even hard to get a spacecraft to orbit stably around the moon.
There's another answer, which is perhaps not a useful one: if it did, it wouldn't be a moon, because then it would be a planet. The only way it could have moons is if it dominated the gravitational influence in its surroundings. Otherwise, as you say, there'd be instabilities. And that's one of the definitions of a planet, I suppose, that it dominates the gravitational influence in its region. But remember, the Lunar Reconnaissance Orbiter is a moon of the moon.
Yeah, yeah, it's true. It just doesn't happen to last for billions of years.
Yeah.
But we're off in a very different direction.
It's all right. I didn't know where we were going to go. I'm going all over the place, and we will eventually get to where I want to get, I hope.
Two things, though.
One, since your kid asked about the time when there were dinosaurs (I'm going way back again), I think when you were six or seven, there's a picture of the spikes on a stegosaurus, a drawing you made yourself. And over that, you put "the dawn of reason," because you asked yourself, how many spikes are there? You want to explain why that was the dawn of reason?
Oh, I don't know. That was just me captioning. Actually, I think I was going through quickly captioning these pictures. But it was the first piece of evidence of actually thinking about things quantitatively that I could find.
I mean, you know, it's always interesting to look back on one's own education. I've been sort of involved in education, I have four kids, I've seen a bunch of things that go on. And I think back to my own education, and I also look at people I meet who are now young, and sort of what's the trajectory, so to speak. And I realize there are things I would do when I was six, seven, eight years old where it's like, oh my gosh, that's the same kind of thing I'm doing today. Like, I remember the realization that you could take two rulers, run one against the other, and make an addition slide rule. And this was one of the many ways that I failed to learn arithmetic; the teachers were probably wondering, why does he have two rulers on his desk?
Because that would allow you to do things without having to do things.
I see, yes.
But then there's a theme, because I've since spent a large part of my life building tools to let one do stuff without putting in human effort, so to speak.
Now we have a good segue, because that stegosaurus thing was an aside, but I couldn't resist. See, I'm trying to build up the context for people, though maybe it's not useful because we're all over the place, but I want to talk about the context of what you've done and what you're doing now.
And so there are three components.
There's physics.
There's mathematics.
But the other component is computers.
And I want to find out what got you interested in computers, and when. I saw an inkling of that because I think I saw a computer tape in one of your scrapbooks from when you were young. So, what got you interested in computers, and when? Let me ask that question.
Okay, so I, you know, I first saw a computer when I was like 10 years old.
It was a big mainframe computer at a distance.
I first got exposed to a computer close up when I was 12 years old when I went to high school
because my high school had a computer, which was this crazy British computer that was the size of a desk, and you programmed it with paper tape, and it had a very arcane machine code and so on.
The first really serious thing I tried to do with computers was to simulate that bunch of gas molecules bouncing around on the cover of the physics book.
And I was like, let me write a program to do this.
Well, that computer didn't have floating point arithmetic.
It didn't have lots of things.
The real irony is that the program I wrote was basically a cellular automaton program, which is this kind of simple program that I investigated years later. And but for a little coincidence of invariances and things like that, if I'd known what I was looking for, I would have discovered tons of the things I actually discovered a decade later, right back when I was first doing this, when I was 13 or so.
But, you know, I started using computers.
I wanted to do these kinds of physics simulations,
but then I got into actually doing things with the computer for its own sake
because it was a quite primitive creature
and I was trying to write utility programs and so on.
I was very proud of my paper tape loader. So, you know, the paper tape would run through this optical reader, and it would run pretty fast, and it would wind up in a wooden bin, and you would rewind the tape and then run it through again. But if that paper tape ever picked up a little piece of confetti that would fill in one of its holes, then it would just get the wrong data into the memory of the computer.
Sure.
And so the question was, how do you deal with that? And so, in retrospect, I ended up inventing a sort of error-correcting code: as the thing was reading, it would accumulate check digits and things to figure out whether the data was right. I was very proud of that. You would literally pull the tape back in the tape reader and it would start reading again; it would re-synchronize itself and so on. But that was my first piece of system software, so to speak.
Oh, okay.
And when was that?
Probably in 1973, 1974, when I was, let me think, 13, 14 years old.
And actually, it was one of these things where, yeah, that was just something I wrote, and I guess other people would use it at that point. Now that I think about it, it was probably my first piece of software tooling that ended up getting a user base, so to speak.
Probably not a very big user base.
But, you know, the reason I want to really go into this is because what you're trying to do now is so integrally related to programming. In fact, you're claiming the universe is a kind of program, and I really want to try and understand this a little bit more.
My first assumption, based on earlier discussions,
is that you would have been fascinated by computers because they would allow you to do things that you didn't like to do.
But was it that that intrigued you about computers?
What was it in particular?
Or was it the power?
I mean, I remember how seductive that was.
We had punch cards when I was a kid in school,
and it was neat to be able to see that you could make this thing
come up with answers that you might not have gotten otherwise
and maybe not even know how it did it.
But do you remember what it was that was so seductive to you about it?
Well, I mean, look, I like technology.
And, you know, I liked the future, and technology was part of the future. And that was one reason why I liked computers.
I liked computers because I was interested in doing these things like physics. Now, at the time, on my very first computer, I couldn't do serious mathematical computations. That was way too difficult on that kind of machine.
But then there were computers for their own sake. Well, you know, I did things like, okay, I said I was an organizer kid, right?
So they would have, you know, like open days at the school and things.
And so I would always organize the computer exhibit for the open day.
And so I wrote a bunch of little computer games and things like that.
And the people would come in, you know, this was 1973, 1974.
And all these, you know, various parents and so on would come in and say, oh, it's a computer.
And, you know, they'd never seen a computer before.
And I had some games; actually there was one that would print out on the teleprinter. It would print out two letters, and then you would have to press a button depending on which letter was earlier in the alphabet.
It turns out if you run that fast, people get it right less than 50% of the time; they systematically get it wrong. At least that was my experimental psychology observation at age 13 or something.
And that was probably my proudest exhibit for the computer open day.
Sounds like a science fair project, actually.
Yeah, right.
I probably got a lot of good data. Unfortunately, I didn't really have a way to collect that data at the time. It was more just observing what people did.
Well, you know, it's interesting, because now we'll talk about one thing where we converge, because I'm proud of this, although you may not have the same memory of it. But I will say, by the way, my very first real science paper after I got my PhD, when I was at Harvard, was actually done on computations. It was numerical integrations using an HP-15C. All my colleagues had access to mainframes, but I realized I could do this numerical integration on the calculator. It would take a night for the calculator to do it, but I didn't need it in 30 seconds. And so it took me a long while before I made it back to larger scale computers.
And the reason I want to bring this up is I've read that you've logged how many mouse miles you've done. But is it not true that I introduced you to the Macintosh? Do you remember, at Harvard? I had one of the first Macs at Harvard. You came visiting, and I brought you down; it was sort of a portable, because it was 23 pounds, and I had it in my office. And you came in, and I think you were very skeptical of it, because I had the mouse and everything. But I believe that I introduced you to the Mac, and I'm going to stand by that.
That could very well be true. You know,
I'll tell you, by that time, I was mostly using these Sun workstation computers.
Yeah. And before Mathematica came out in 1988, I had never really used a personal computer in a serious way. I had had personal workstation computers. Yeah. But really, even after Mathematica came out, you know, it ran on the Mac, but I never used it on the Mac. I would use it for demos on a Mac, but I would actually use it on some workstation. But that's an interesting story. It's plausible.
I mean, the Mac came out in 1984, so that must have been 84. I got one a little after January 84. I was one of the first; there was a lottery, because more people wanted one than there were machines. And I remember I told my friend Shelley Glashow about it, and it really upset me, because I really wanted it. I said, there's a lottery. And he said, oh, really? And he put his name in, got to be like number one, and he got one. Did he use it? No, and that's what pissed me off, because I really wanted it. But I had just won a prize, the Gravity Research Foundation award, and I was trying to figure out how I could buy a Mac, and the prize was exactly the same amount as the Mac. So I literally won the prize and put the check straight toward it. So I had one maybe by April or May of 84, and you came shortly thereafter, when I was at Harvard, and I was very proud of it. In fact, people would want to come into my office to play with it and see it, because it was very different from other computers.
Right.
And you came in and you might have been mentioning this.
This is vaguely coming back to me.
That's a good story.
One of the things about the older generation, the Shelley Glashow generation and so on,
most of them didn't know how to type.
And I remember, you know, Murray Gell-Mann, for example; I remember interacting with him about computers and things, and he never wanted me to see him type, because he couldn't type.
And for a long time, I was proud of myself, because I used a typewriter from when I was a young kid.
Yeah, you can tell. I can see it in your papers.
Yeah.
Yeah, yeah.
Right.
And I got pretty fast at typing. Actually, what I really got fast at was typing with two fingers, because my fingers were not strong enough to do the full typewriter thing. And then at some moment, I did the ten-finger thing, and I was a fast typist.
And then I used to think that's a great advantage that I have in the world.
And then it all went away.
Well, yeah.
The two-finger thing.
This is what really amused me.
You know, my mother...
The two-finger thing came back with phones.
It now is useful to be able to type with two fingers.
Oh, yeah, that's right.
But now with thumbs rather than forefingers, generally.
Well, right, I've never got into the thumbs.
You know, this is probably a disease from my early days; it's index fingers. Actually, I still use the index finger; I hold it in one hand and do it, yeah.
But I will say, my mother, you know, again, I told you my parents didn't go to college.
But the one thing she insisted I learn how to do was type, because she said, you'll have to write essays.
So I took typing class.
It was optional.
And, yeah, I've always felt it was one of the great gifts she gave me, that I learned how to type early on.
Wow.
Yeah, yeah, a lot of people don't know how to do that now.
Anyway, look, we've now put together the pieces that will lead you to where you are. Let me just say, and I don't want to go into this now, you didn't complete Eton, then you went to Oxford, and it seems you also left that prematurely. So you kept leaving schools prematurely because, I guess, you felt you'd gotten out of them what you could. And then you went to Caltech and actually did get a degree there, finally, although in a very short time.
What happened is easy to describe. I mean, I got to the point where I was writing a bunch of physics papers. And it's like, okay, I can go to college and do physics, but, you know, I'm already writing physics papers.
So what's the point here?
Yeah, yeah.
And so, you know, it was kind of like, accelerate things to the point where I'm done with the education process as quickly as possible. And that worked rather well, actually.
It only works well if you have people who recognize it, like the people who happened to give you the good grades on that physics exam. What was it about Caltech? I mean, I assume there are a lot of graduate schools that would have required a more formal degree. Was it because Caltech was so small that they were able to be flexible? What was it that allowed you to get into Caltech?
I mean, look, by that point, one of the things that was both a good thing and a bad thing is that by the time I was like 15, 16 years old, I was showing up at all these physics seminars in Oxford and things, and I became a sort of known fixture on the international physics scene. And then I worked at the Rutherford Lab in England, and then I worked at Argonne National Lab in the U.S. for a summer, and so on. So I was kind of in the swirl of the physics world, which actually turns out to be kind of disastrous in some ways, because there are all these people, and, you know, I don't think I was particularly obnoxious, but I was a somewhat brash 16-, 17-year-old.
And, you know, when you're in sort of the international community and you're a brash 16-, 17-year-old, you turn the clock forward 30 years, and people still think of you as a brash 16-, 17-year-old.
You were very brash when I first met you, too.
But yeah, yeah.
But the good thing is, afterwards, people don't think it's so brash when you're older and you say the same kinds of things as when you were young.
Well, perhaps that's true, right.
But no, so what happened is, I talked to people at these different schools; I was talking to Harvard and Princeton and Caltech.
And Harvard said, oh, if you don't have a college degree, you can't come as a graduate student.
Princeton said fine, Caltech said fine.
I decided I'd visited Princeton.
I hadn't visited Caltech.
And so I figured I'll go to the place I haven't visited.
More or less.
That's more or less it.
Was the weather a factor at all? I mean, it's very different from England's.
No, it wasn't particularly the weather.
I wasn't really paying attention.
I think Princeton had a more structured, oh, you have to do these courses and all that kind of thing. And I was like, I don't want to do this and I don't really need to do this.
And Caltech was quite flexible about that.
So it was, and I did; I went there.
I tried to go to a course that Dick Feynman was teaching.
And actually he told me after a while,
please don't come to this course anymore.
Really?
So that was a...
Why? Were you asking questions, or did he not think it was appropriate for you?
No... Yes, I mean, I wasn't being particularly brash, but I remember I wrote him up something about the derivation of the Weinberg angle.
Yeah.
And he was like, don't come to my course anymore.
Oh, he felt you didn't need it. It wasn't useful for you. Okay.
Well, that was what was great about Feynman. He wouldn't have stood on ceremony or anything like that, exactly. That's the quality of good teachers: knowing what kids need to know and what they don't need to know.
Okay, look, we've skirted around things. We've already talked about fundamental physics and your interest in it, and that's roughly when we first met. We met at the time when your life was changing, it seems to me, which is around 83, 84. You suddenly went from the standard kind of fundamental physics, understanding the fundamental laws with mathematical quantum field theory, to, I guess, you had already gone to the Institute for Advanced Study when I first met you at Harvard, and you came up from there. And then you discovered cellular automata, and even knowing you at a distance, it changed your life, as far as I can see; it changed your direction; it changed everything about the way that you thought about the world, as far as I could see.
That's true.
I mean, look, the sequence was this.
I mean, you know, I'd been doing particle physics up until basically when I got my PhD in 1979.
Right.
A week after that, I was like, okay, let me, you know, plan the future type thing.
And so I realized, you know, I'd been using all these computer systems for doing sort of mathematical computation.
And I realized, these don't do what I want.
How am I going to get something that does what I want?
Well, if you really want it, you should just do it.
yourself. So I started building this thing called SMP. And so I spent a couple of years,
you know, I was still doing some physics kinds of things at that time, but I was mostly
working on building this software system.
And let me interject for the public; I mean, we've already gone so deep that people may be lost anyway, but SMP was, in my mind at the time, revolutionary, and there may have been other people doing it. But, you know, computers were fine for doing calculations: you'd program them and they'd work with floating point arithmetic. But you didn't use them for symbolic work. When you sat down with a piece of paper and did algebraic or calculus calculations, those were symbolic, and computers just didn't do that. And I remember vividly the utility, and also being amazed that it was possible. SMP was the first; it stands, I assume, for Symbolic Manipulation Program.
Was that what it stood for?
I think that's right.
And the idea that computers could actually do mathematics instead of just churning numbers, that they could actually help you do real mathematics, symbolic mathematics, was shocking to me.
And I remember my colleagues began to say, this can be useful, because, of course, the one area where you can really get lost in the symbolic manipulations, and maybe this is the reason you got into it, is the calculation of what are called Feynman diagrams, which I know you did. And so is that what drove you to want to do SMP? Was that your experience?
I mean, there had been earlier computer algebra systems, but they basically had the feature that they could only be used with a babysitter. That is, they could only be used with the help of the people who had originally created the systems.
Exactly.
I mean, it was surprising to me, because, again, it's one of these things where you never know what's actually hard and what's easy. I learned enough about computers that I could successfully use these systems without a babysitter, so to speak. And then I used them to do useful things, and I kind of outgrew them. And so I had to build SMP. What was, in retrospect, pretty interesting about SMP is that it has a fundamental idea about how to compute that is fundamentally symbolic, more symbolic
even than just doing mathematics. It's really about symbolic expressions and transformation rules
for symbolic expressions. And actually, very recently, like the last few months, I finally
understood how to generalize this. There were things I'd tried to figure out. So back when I was working on SMP, there were all kinds of mysteries about how it evaluates things: it transforms things until it can't transform anymore. And that process of transforming until you can't transform anymore, and what happens if there are different paths for transforming things, is a whole tangle of difficult computational and mathematical ideas.
And what's sort of ironic is that at the time when I was thinking about those kinds of things for SMP,
I was also thinking about gauge field theories.
And it turns out, what I've now realized is that the issues about all the different ways to do things in evaluation processes in SMP and in gauge field theories are exactly the same problem.
But it took another 40 years to realize that.
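To make the idea of "transformation rules for symbolic expressions" concrete, here is a minimal sketch in Python, a toy of my own and not SMP or the Wolfram Language: expressions are little symbolic trees, and evaluation means applying rewrite rules over and over until nothing changes. The question of what happens when different rules could apply in different orders is exactly the subtlety being alluded to.

    # An expression is either a string (a symbol) or a tuple (head, arg1, arg2).
    RULES = [
        (lambda e: isinstance(e, tuple) and e[0] == "Plus" and e[2] == "0", lambda e: e[1]),   # x + 0 -> x
        (lambda e: isinstance(e, tuple) and e[0] == "Times" and e[2] == "1", lambda e: e[1]),  # x * 1 -> x
        (lambda e: isinstance(e, tuple) and e[0] == "Times" and e[2] == "0", lambda e: "0"),   # x * 0 -> 0
    ]

    def rewrite_once(expr):
        # try every rule at this node, then recurse into sub-expressions
        for matches, replace in RULES:
            if matches(expr):
                return replace(expr), True
        if isinstance(expr, tuple):
            head, *args = expr
            new_args, changed = [], False
            for a in args:
                new_a, c = rewrite_once(a)
                new_args.append(new_a)
                changed = changed or c
            return (head, *new_args), changed
        return expr, False

    def evaluate(expr):
        changed = True
        while changed:                               # transform until it can't transform anymore
            expr, changed = rewrite_once(expr)
        return expr

    print(evaluate(("Plus", ("Times", "x", "1"), ("Times", "y", "0"))))   # prints: x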
Okay.
We'll get to that, because I know you keep talking about how you can basically get these laws. And I mean, gauge field theory is a central symmetry idea and the central way of understanding all the fundamental laws. And I know there are lots of claims you make about them coming out of this.
Can't quite yet. We can't quite yet get gauge field theory, but we kind of know how we think it's going to work.
Okay.
The math is hard.
I want to challenge you on some of that in a moment.
But first let me be more collegial, and then we'll challenge.
But, and I should say, to not give short shrift, you're right. There were people, for example, Gerard 't Hooft and Tini Veltman; Veltman developed Schoonschip, a program to do some symbolic manipulation of Feynman diagrams, in order for them to be able to do the kind of physics which eventually led them to the Nobel Prize. So there were people working on it, but you're right, they were specialized. There was no one who developed something that a bozo like me could come along and just use.
Right.
No, and the real idea of SMP, the important idea in the end was this idea of transformations for symbolic expressions, which is a very general idea about computing.
And that's, I mean, it's kind of an abstract idea that I think people haven't fully absorbed even now 40 years later.
But it's also the core idea that our whole Wolfram Language and Mathematica stack is based on.
But anyway, in terms of the trajectory: yes, I did that, and
that was a very interesting experience
because building a software system
is very different from doing physics.
In physics, it's like, the world is the way it is.
You have to kind of drill down
and try and figure out what's underneath it.
When you build a computer language,
you're like, let me write down these primitives.
Now what can be built from those?
It's very much a case where you start from something: you start from these arbitrary things and then you build up from there. Whereas in physics, the world is the way it is; you have to try and sort of reverse engineer what's going on.
Now you've explained everything to me, because now it all comes clear. That's a very important statement, because it's clear what you switched to, and it's an area where I'm not sure I agree with you, but it now seems so natural why cellular automata appealed to you, and ultimately this new kind of science: with cellular automata, it's some simple rules, and what can you do with them? So it's clear why that appealed to you, because you'd been developing software. That had never hit me before.
Right, right. Well, actually, you know, it's always embarrassing when one tries to understand one's personal history, because it took me a decade before I realized that connection myself. But yes. So what happened is, the big thing that I'd been interested in for a long time, from my early interest in statistical mechanics and so on, was: how does complex stuff happen in the world?
And so it was like I was studying reaction diffusion equations
and I was studying these other kinds of mathematical approaches to that
and they just didn't work.
And so I was like, let me see, what's the fundamental thing?
Let me drill down.
Let me understand what are the primitives
from which I can build up that phenomenon.
And that's what led me to, I mean,
originally I was trying to model,
I was actually looking at self-gravitating gases
and neural networks.
And it was like, what's in between these two?
And I came up with these simple cellular automata, which are just these rows of black and white cells, which have simple local rules.
And cellular automata are good for many things.
Self-gravitating gases and neural nets are two things they are profoundly not good for.
So it was kind of interesting that that was, you know, it was in the middle of those two.
Let me just stop for one second, because we haven't tried to define a lot for people who are trying to follow along. You sort of defined cellular automata, but I want to make it quite clear why they're seductive and interesting, because they are that. They're basically a row of squares, black and white, and there's a rule: when you have, let's say, a black and a white together, there's a rule for what the cell in the next row will be; likewise for two whites together, or two blacks, or maybe for a few of them in a row. So it's just a simple set of rules, and you proceed from one step to the next. And what is surprising, and seductive, though I'm not sure it's as profound as you think, and we'll have to discuss that, is that from a very simple rule about what happens when these things are next to each other, maybe a handful of rules, you produce these incredibly complex patterns. So I just wanted to let people know what cellular automata are. And I did notice in your scrapbook, I guess it was a cover of Nature or something, some of the early, beautiful, complex patterns you can get from these simple sets of rules, which must have profoundly affected you.
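For readers who want to see this concretely, here is a minimal sketch in Python of the kind of one-dimensional cellular automaton being described; it is only an illustration, and the grid width and step count are arbitrary choices. Each cell is 0 or 1, its next value depends only on itself and its two neighbors, and "Rule 30" is just one particular table of the eight possible neighborhoods, yet the pattern it prints looks remarkably complex.

    RULE = 30   # the update table for the eight neighborhoods, encoded in the bits of 30

    def next_row(row):
        n = len(row)
        out = []
        for i in range(n):
            # read the left, center, right cells (wrapping around at the edges)
            pattern = (row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]
            out.append((RULE >> pattern) & 1)
        return out

    width, steps = 79, 40
    row = [0] * width
    row[width // 2] = 1            # start from a single black cell

    for _ in range(steps):
        print("".join("#" if cell else " " for cell in row))
        row = next_row(row)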
Well, right.
I mean, you know, the big question was,
what secret does nature have
that lets it make all this complicated stuff?
And one's intuition, my intuition, had been: you want to make complicated stuff, you need to go to a lot of effort.
You need to set things up
in a very complicated way.
This was a thing where you just randomly pick these rules,
very, very simple rules.
You run them,
and, you know, you kind of automatically,
get this amazing complexity.
And at first, I was like, this can't possibly be right. In fact, I remember I had an interesting exchange with Feynman about this, about this Rule 30, which is a particular simple rule that produces this seemingly, in many ways, completely random pattern. We were both consultants at a computer company in Boston called Thinking Machines Corporation.
Way back then, yeah.
And, you know, I had produced this big printout of Rule 30, and there were certain features of it that had some regularities and so on. We were kind of crawling around trying to measure a bunch of things with meter rules and so on.
And Feynman takes me aside and he says, you know, look, I just want to ask you one thing.
How did you know that this rule was going to make all this really complicated stuff?
And I said, I didn't have any idea.
I just ran the experiment and that's what happened.
And he said, ah, I feel much better now.
I thought you had some kind of intuition that would let you figure this out.
No, no, no, I'm just an experimental scientist, so to speak, at that level.
But I think what was surprising to me is that it's a very strong phenomenon. It's a phenomenon where simple rules can do very complicated things, and it's a phenomenon that I've seen all over the world of possible simple rules. When you go back and look and say, didn't people already know this? The answer is, well, they did, kind of.
I mean, like the digits of pi, for example, you know, 3.14159, et cetera. There's a rule for producing those digits.
But once you produce them, they seem for all practical purposes random.
The sequence of primes. Same type of thing. There's a bunch of randomness in the sequence of primes.
But what people hadn't kind of gotten onto, and it took me a while to get onto, was: so what does it mean? There's this simple rule and it generates this very complicated behavior. For example, in the case of the primes, people spent centuries studying what regularities you can work out. The fact that the overall story is that there's lots of randomness, that was not relevant. What was relevant was the stuff that could be attacked with traditional mathematical approaches, saying, what are the regularities? So in a sense, what ended up as this book called A New Kind of Science is: what science do you get if you really concentrate on this phenomenon of what can happen with computational rules? People had seen these phenomena, and they just said, oh, it's just noise, we don't care; we're concentrating on this particular thing, which is very regular, that we're looking for.
And, you know, it's a question of what you're interested in.
For example, talk about the second law of thermodynamics.
The second law of thermodynamics is the story of: you start with a bunch of gas molecules, and they're all in a very regular arrangement in a box; then you let them run for a while, and they're all randomized in the box.
And the question is, what's really going on there?
Because among other things, the microscopic interactions are reversible.
So whatever process could happen that goes from that simple configuration to the apparently random configuration could also run in reverse.
Why does that not happen?
And in the end, it's this kind of fundamental computational phenomenon that explains how that works.
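As a toy version of the reversibility puzzle, here is a minimal sketch in Python, my own illustration rather than a model of real gas molecules: a "second-order" cellular automaton whose update is exactly reversible. Run forward from a simple initial condition it produces apparently random behavior, and running the very same rule backward from the final state recovers the order, which is the point that the reverse process is possible in principle but requires an exquisitely special starting configuration.

    def step(prev, curr, rule=30):
        # reversible update: apply an elementary rule to the current row,
        # then XOR with the row before it, so the past can always be recovered
        n = len(curr)
        new = []
        for i in range(n):
            pattern = (curr[(i - 1) % n] << 2) | (curr[i] << 1) | curr[(i + 1) % n]
            new.append(((rule >> pattern) & 1) ^ prev[i])
        return new

    n, steps = 63, 40
    prev, curr = [0] * n, [0] * n
    curr[n // 2] = 1                        # simple, ordered initial condition

    history = [prev, curr]
    for _ in range(steps):                  # forward: order turns into apparent randomness
        prev, curr = curr, step(prev, curr)
        history.append(curr)

    back_prev, back_curr = history[-1], history[-2]
    for _ in range(steps):                  # backward: the same rule undoes the scrambling
        back_prev, back_curr = back_curr, step(back_prev, back_curr)
    assert back_curr == history[0]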
Well, yeah, though I think, you know, Boltzmann spent a long time, and eventually killed himself over it, trying to understand why that happened that way.
I think I finally figured out how this works in the 1990s. And at the time when I figured it out, I don't think anybody cared anymore. But here's the story.
It's kind of an interesting story.
So, you know, you've got these gas molecules. They're bouncing around in a box. They end up in some configuration, and from that final configuration you can always, in principle, go backwards and figure out: oh, this final configuration was one that came from this simple pattern of molecules in the box, and this other one was one that didn't come from a simple pattern of molecules in the box. So why is it the case that we don't, for example, end up with some configuration of molecules that will magically reassemble itself and unscramble the egg and so on, right? And so the answer, I think, is this phenomenon that I call computational irreducibility,
which we didn't really talk about yet.
We're going to get to this.
You could start talking about it now.
Yeah, right.
I mean, so take this Rule 30 phenomenon.
Okay, so one of the things is you have a simple rule, the simple rule tells you,
step, step, step, you work out what the pattern is.
The pattern is complicated.
So you might say, you know, we're scientists here.
We predict things.
Let's go predict what Rule 30 is going to do. And so you wheel in all of your sophisticated mathematical apparatus and so on and say,
we're going to crack it.
We're going to figure out what it's going to do.
And you try doing that.
Actually, Dick Feynman spent a while trying to do that for Rule 30.
And he finally said, okay, I think you are onto something.
I can't crack it.
So there's this question of: can you do what one has been trained to think of as the point of science? Can you make a prediction? Can you say, you've got a two-body system, the Earth going around the sun, and from Newton onwards it was kind of like, we can just use math to figure out where this is going to be; we don't have to trace every orbit. So the question is, can you do that for Rule 30? And the answer is, well, no, you can't. In other words, it's something where, to work out what it's going to do, you have to spend about as much computational effort as the system itself does.
Yeah, computational irreducibility, as far as I can tell, is saying that to figure out what's going on, you have to basically do what's going on. There's no simpler way to predict, other than to just do the experiment, to let the particles do it. There's no compactification of the information.
Yeah, exactly.
That's what you call computational irreducibility. It took me a while to get that, but that's why.
Right.
And it's sort of a finer version of things like Gödel's theorem and a bunch of other ideas. It's a kind of fundamental fact about the computational world.
It derives from an even deeper principle as far as I'm concerned,
which is the thing I call the principle of computational equivalence.
And that has to do with the following thing.
So you take some set of rules and you say,
that set of rules, when I run them, it will do some computation.
And you might say, well, let me rank these computations,
which one is doing the most sophisticated computation,
the least sophisticated computation?
The big surprise is,
as soon as you get out of a domain of ones that do obviously simple things,
that just make simple repeating patterns and things like that,
the claim of the principle of computational equivalences,
as soon as you're out of that zone,
they're all equivalent in the sophistication of computation they can do.
And that ends up being a big claim,
because it says that from Rule 30 to our brains to lots of things in physics,
it's all equivalent in terms of the sophistication of the computations it can do.
And that claim is what leads to this idea of computational irreducibility, because if you're going to figure out what Rule 30 is going to do, what you're basically saying is: my brain is smarter than Rule 30; it has to go through all its steps, but I can jump ahead. And that's what this principle of computational equivalence says you can't do. So that's what leads to this idea of computational irreducibility.
Okay, well, we're going to get to computational irreducibility. Because the interesting thing I've found, frankly, with the writing you've been doing, in the statements about a new kind of science and the physics project that you've been working on, is that one makes interesting claims, based on computational irreducibility and bounded computation, about why the world may be the way it is. But what physics does is predict how the world operates. And so there's a big gulf, as far as I can see, between general statements that are tempting and seductive, that may seem plausible, about general qualities of the world, and actually doing things.
So, you know, if you jump ahead to our physics project, for example, one of the issues there is: how do space and time work?
Yeah.
And so, you know, we can talk about it in more detail, but in the end, space and time are being built from this giant hypergraph, this kind of collection of points that have certain relations between them.
Yeah, it's an abstract relationship between points. And then, well, jump ahead.
And my understanding of it, and it's elementary, I'm sure, is that there are abstract relationships between points, and there are rules that govern how the points are connected, and there are many different ways they can be connected. And if you look at all the different ways, you produce a structure that has properties you would argue are like spacetime. Is that not quite right?
That's not quite right. So the basic idea is you have this hypergraph that is basically how the atoms of space are connected, what the friend network of the atoms of space is, so to speak.
Yeah.
And it's not obvious; the concept that space is made of something is not an obvious concept. Euclid didn't have that concept. He had the idea that you just put things in space, and that's been kind of the common idea in physics for a long time. This is kind of the atomic theory of space, so to speak.
Yeah, yeah. So it's an idea that is still yet to be...
Yeah, it's a proposal.
Let me put it that way.
Yes.
Okay.
Right.
So, okay.
So then, you know, you have this structure, this kind of discrete structure. Just like molecules make up a fluid, so atoms of space make up space. And in this theory, everything is space. There is nothing in the universe other than the structure of space. So, you know, electrons are some kind of complicated, twisty thing that is a feature of the structure of space, and so everything is just the structure of space.
Now, how does time work?
Well, this hypergraph that represents the structure of space,
it is getting rewritten all the time.
There are rules that just say,
if you see a piece of hypergraph that looks like this,
turn it into one that looks like that,
and do that wherever you feel like.
So that's kind of the structure of space and time in these models,
and then the question is what then emerges from doing that?
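As a cartoon of what "rewriting the hypergraph" means mechanically, here is a minimal sketch in Python, my own toy rather than one of the project's actual rules: the state is just a list of relations between node ids, and one update replaces every relation {x, y} by two relations through a freshly created node. Repeating that grows a structure, and it is the large-scale shape of such structures that the physics is supposed to emerge from.

    def rewrite(relations, next_id):
        # wherever you see a piece that looks like {x, y},
        # turn it into {x, z}, {z, y} with a brand new node z
        new_relations = []
        for x, y in relations:
            z, next_id = next_id, next_id + 1
            new_relations += [(x, z), (z, y)]
        return new_relations, next_id

    relations, next_id = [(0, 1), (1, 2), (2, 0)], 3   # start from a tiny triangle
    for step in range(6):
        relations, next_id = rewrite(relations, next_id)
        print(step, len(relations), "relations,", next_id, "nodes so far")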
And it turns out there are a couple of conditions, and there's a complicated mathematical story behind it. I would say, when people ask how nailed down the mathematics is, the answer is it's a bit less nailed down than the proof that you can get continuum fluid dynamics from molecular dynamics, which has taken 150 years and is still not nailed down. It turns out there are a lot of pieces of mathematics where, at the physicist's level of mathematics, it's beautifully nailed down; at the mathematician's level of mathematics, there's absolutely another century to go.
But in any case, the thing you find is, so you've got this thing, and it's these atoms of space
and they're being rewritten in all these different ways.
And then you ask, what's the large-scale behavior of that system?
Okay, so it's similar to what happens in fluid dynamics.
You've got all these microscopic molecules bouncing around.
You say, what are the fluid equations?
What are the overall equations that govern a fluid?
Okay, so what happens in our case? In our case, those equations are the Einstein equations. So in other words, that's what emerges when you zoom up from this microscopic level, with a bunch of conditions, which we can talk about.
With a bunch of conditions, yeah.
Which are somewhat technical.
But in the end, those conditions, I think, are inevitable.
I think those conditions are not that interesting.
I think those conditions end up being, for example, you have to have computational
irreducibility in the underlying dynamics of the system, which is something that's pretty
ubiquitous. You have to have another thing called causal invariance that I think inevitably
arises when you have observers in certain ways. But in any case, the details are slightly
complicated.
Yeah, well, that's what worries me to some extent as a skeptic: you want to make sure that what you get out is more than what you put in. And you want to make sure about the conditions. In some ways, you know, as Feynman showed, you can get general relativity just by having a spin-two field. And you might say there are lots of ways to get it; actually there aren't very many ways. But what I'm saying is, one of the beauties of physics, it seems to me, is that you can look at a problem in what seem to be totally different ways of formulating things, and they lead to equivalent pictures.
Yes, that's certainly true. And I mean, that's certainly something that we're seeing
very much. I mean, you know, things like spin networks, various derivatives of causal set theory, things like that: all of these things seem to kind of read on the underlying structure that we have, which is kind of encouraging all around. But to me, the test is, you say, well, we can give mathematical arguments for the emergence of Einstein's equations, okay. But I think the thing that is looking the most promising, the most interesting right now, is that you can actually use our models as a way to do practical numerical relativity. So usually, if you want to simulate the merger of two black holes or something, you'll do a bunch of symbolic calculation, with Mathematica typically. And then you'll
turn the thing into this big piece of numerical analysis. You've got these big differential equations, and to solve differential equations on a computer, it sounds like you did that on an HP-15C, but these days people do that on bigger computers, you have to take these continuous differential equations, these equations that talk about continuous variations of things, and discretize them so that you can put them on a digital computer. And then you have to do that numerical analysis. And that's how people typically solve the Einstein equations to work out black hole mergers and so on.
Sure.
Well, so the alternative strategy is
let's say we have an underlying model of space and time. And that underlying model is intrinsically
digital. We can just run it on a computer. We run a big version of it on a computer and we say,
does that actually reproduce the same kind of thing that we get from numerical relativity? The preliminary
answer is yes. Well, that would be interesting. That would be fascinating because, you know,
So a young fellow named Jonathan Gorard, who's been working on this project with me, has a paper about this.
And I guess this has turned into a whole bunch of people doing numerical relativity who are really looking at this in a serious way as a kind of a practical method for doing numerical relativity.
I mean, it's somewhat ironic to me because I think many of these people say, this is a good method for doing numerical relativity.
We don't really care where it comes from.
It's just, you know, this is a good way. Because what happens in numerical relativity is you have to figure out how to add more mesh points to deal with the fact that space is changing more rapidly in some places. Well, in our theory, the reason space is changing more rapidly is because there are
more mesh points. It's kind of like the whole thing is kind of generating its own numerical analysis,
so to speak. So that's an example of what, to me, is an interesting form of validation for models, what I might call sort of proof by compilation. If you can take some existing thing
and you've essentially got a compiler that goes from that existing structure like, you know,
two black holes merging or something, you turn it into your low-level code, and then you find out
that it produces the same thing that you had before. That's encouraging. We've now been able to do
the same kind of thing for quantum circuits. And in fact, we now have a method for optimizing quantum circuits that's a bit better than any method people have had by any other means. It's basically compiling a quantum circuit down to these multi-way graphs that we have
and then going back and saying,
what does that mean for the quantum circuit?
So, you know, to me, those are encouraging. One thing you can do is the mathematical derivation.
You can always worry, oh, there'll be some limit that we took
that wasn't valid, et cetera, et cetera, et cetera.
But by the time you can actually do practical calculations,
that's encouraging.
It's even more fun when you can say, here's something that's going to happen that you never thought was going to happen,
and then somebody can turn a telescope in some direction
and see, yes, it actually happened.
And I think our best... Go on.
Yeah, no, I mean, what's interesting about those kinds of things is that it's a different kind of skill to figure out the sort of phenomenology: given this theory, where's the place where you're going to see the magic difference?
I think one thing that our model has in it that's pretty unusual is the idea of dimension fluctuations.
So, you know, we usually think space is three-dimensional,
but by the time space emerges as this limit of this big hypergraph,
there's no guarantee that it's precisely three-dimensional.
And the expectation is that in the very early universe,
the universe was infinite dimensional and gradually kind of cooled down to be three-dimensional.
And the likelihood is that there are dimension fluctuations left over,
whether those survive to recombination, I don't know.
And then there's a bunch of detailed mathematical physics and electrodynamics and so on
to say how does a photon propagate through those things, all those kinds of things.
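One way to make sense of "the dimension of a hypergraph" is worth spelling out. Here is a minimal sketch in Python, my own illustration on an ordinary grid graph rather than the project's code: count how many nodes lie within graph distance r of a point; if that count grows roughly like r to the power d, the structure behaves d-dimensionally at that scale, and fluctuations in that growth rate are what "dimension fluctuations" would look like.

    from collections import deque
    from math import log

    def neighbors(node, size):
        x, y = node
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < size and 0 <= y + dy < size:
                yield (x + dx, y + dy)

    def ball_sizes(start, size, rmax):
        # breadth-first search out to radius rmax, recording distances
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if dist[node] == rmax:
                continue
            for nb in neighbors(node, size):
                if nb not in dist:
                    dist[nb] = dist[node] + 1
                    queue.append(nb)
        return [sum(1 for d in dist.values() if d <= r) for r in range(1, rmax + 1)]

    sizes = ball_sizes((50, 50), 101, 20)
    for r in (5, 10, 15):
        # the log-log slope of ball volume vs. radius estimates the dimension
        slope = (log(sizes[r]) - log(sizes[r - 1])) / (log(r + 1) - log(r))
        print(r, round(slope, 2))   # values close to 2 for this 2D grid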
Well, here, this is the point. I think it's interesting, and the fact that you talked about dimension fluctuations is an interesting one, because one of the big problems, and again, I don't want to keep coming back to Feynman, but one thing that disturbed him about string theory was that it didn't explain things; it had to have excuses. In particular, one of the big failings is, of course: why is the world three-dimensional if there are many dimensions? And it has never really answered that, although it would be a great development if you could find that out.
Well, so you're not claiming, I mean, I'm assuming you have to impose that there are three dimensions here.
Well, no.
Three dimensions don't come out naturally from your model.
Come on.
So at this point, here's the thing.
Yeah.
First thing is, I am very sensitive to this issue of what do you put into a model versus what you get out.
Yeah, sure.
I've been in the business of trying to find absolutely minimal models for things.
So I really pay attention to that issue.
The thing that has been unbelievably surprising to me is that there are no kludges so far. Every time, you know, it's something like, how do gauge fields come out? Well, you have to construct fiber bundles in this fractional-dimensional space, blah, blah, blah. And every time we've succeeded in actually constructing one of these things, it works. It's just like physics. And there's not been any one of these things where we say, oh, but it's naturally 26-dimensional and we have to curl up these dimensions and do some kludge.
Now, do we know why the universe, as we perceive it, is three-dimensional rather than six and a half-dimensional?
We do not know that.
Do we know why the muon-electron mass ratio is about 206?
We don't know that.
The question is, what features of the universe are generic?
What features are specific?
What has turned out is that general relativity is generic. Quantum mechanics is generic. It looks like the sort of merger of quantum mechanics and general relativity is generic. There's this question of just how generic things are. For example, I have a slight guess that gauge groups may be generic, that subgroups of E8 might be a generic
feature of the kind of structure of the system. I don't know. But that's a question.
And the issue of why three-dimensional, we don't know. And I think that the thing that has come out
that's kind of more on the very advanced end of understanding what's going on is this question
of to what extent the characteristics of us as observers drive the aspects of the universe that we
perceive. To what extent is the number three a consequence of some feature of the way that we are choosing to observe the universe, something that the average alien intelligence, for example, would not perceive?
Yeah, I just read your article about that.
And I mean, which really comes back to what I said at the beginning, which is that it looks very much like one is saying that these aren't real atoms of space. It's a formal structure, a formal mathematical structure. And in some sense, it's like saying all of reality is an illusion.
Well, okay.
So this is the thing. I was saying at the beginning, when I was a kid, I said the one thing I'll never do when I'm grown up is be a philosopher. Yeah. Okay. And, you know, I recently wrote something on the question of why does the universe exist.
Yeah, I know. I read that, because, as you know, I've written a book about it myself.
Right. So I was very surprised to have anything to say about that. I did not expect to have anything to say about it. As you probably know, even in the history of philosophy and theology and so on, the amount that's been written about that is rather small.
It's been one of these questions that's just a bit too hard.
And the thing that really surprised me is, I think, I actually have something reasonable to say about it.
And the thing that is, you know, that this physics project has kind of given evidence that there is a computational model of physics sort of all the way down.
And then the question, the big question is, okay, let's say we've got this computation.
model. We've got this rule. It reproduces our universe, et cetera, et cetera, et cetera. Then the big
kind of Copernicus style question is, why did we get this rule and not some other rule? Why did we get,
for example, a rule that looks simple to us as opposed to some unbelievably typical, you know,
incredibly complicated rule? That's a very, you know, we've been kind of trained in a sense to think
that there's nothing special about us. So how did we get the universe with the simple rule,
as opposed to the universe with the incredibly, incredibly complicated rule.
So I was really puzzling about that for a long time.
And then I realized that in the structure of our models, it is possible to think about a universe in which — well, one thing I didn't mention is that the way quantum mechanics arises in our models has to do with the fact that sort of all possible histories are followed.
And that turns out to be an important thing,
and we then have to understand the kind of mind-twisting issue
of as observers embedded in that universe,
we have all these branching and merging histories.
Our brains are also branching and merging.
So sort of quantum mechanics becomes the story of how does a branching brain perceive a branching universe?
So that's kind of a complicated thing.
But what I realized is that not only can one think about applying a particular rule in all possible ways, you can think about applying all possible rules.
And so then the question is, what is that thing?
And you might say, well, if you apply all possible rules, science is off — anything could happen. But it isn't true. And the reason is that there's some structure there, and that's the key.
Not only is it not that anything can happen — your claim is that it's vitally important that you apply all possible rules, and that you get a structure only if you apply all possible rules. Am I right?
Because the reason for that is, it's easy to see.
It's like if you apply all possible rules to all possible things, some of the things to which
you apply those rules, you might have thought they'll just go off and do their own thing.
Actually, you'll get the same result.
Two different things, you apply two different rules.
They end up becoming the same thing.
And so there's this whole network of equivalences,
this whole collection of essentially entanglements that you generate.
And so this object — I think I'm going to call it the Ruliad — is the limit of this thing where all possible rules are applied. The thing about that object, and it's kind of a weird thing, is that it is a formally necessary thing. That is, it is something where there's no choice in it.
It is just all possible rules, all possible formal systems.
You put them all together, you get this thing.
It's not something where anybody had to choose anything.
And then so you have this thing.
And then the question is, well, where are we and where's the universe in all of this?
And the thing you realize is that, if we are embedded in this thing and trying to understand what's going on in it, we have to define essentially a collection of reference frames — a way of parsing what's going on. And sort of the big claim is that it is a generic thing that our way of parsing what goes on — a way of parsing that has certain limitations associated with us — will lead us to things that are like the laws of physics we know.
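To make the idea concrete, here is a minimal sketch of the kind of multiway rewriting being described: a handful of toy string-rewrite rules (made up for illustration, not the rules of the Wolfram model) are applied in every possible way, and different rules often land on the same state — the merging of histories that ties the whole structure together.

```python
# Minimal multiway-rewriting sketch (illustrative only; the rules below are toys,
# not the actual rules of the Wolfram Physics Project).

from collections import defaultdict

# A few toy string-rewrite rules: each maps a pattern to a replacement.
RULES = {"r1": ("A", "AB"), "r2": ("B", "A"), "r3": ("AB", "BA")}

def apply_everywhere(state, pattern, replacement):
    """Yield every string obtained by rewriting one occurrence of pattern."""
    start = state.find(pattern)
    while start != -1:
        yield state[:start] + replacement + state[start + len(pattern):]
        start = state.find(pattern, start + 1)

def multiway_step(states):
    """Apply every rule in every possible way to every state.

    Returns a dict: new_state -> set of rule names that produced it, so we can
    see when different rules converge on the same state (the 'merging' of
    histories described in the conversation)."""
    produced = defaultdict(set)
    for state in states:
        for name, (pattern, replacement) in RULES.items():
            for new_state in apply_everywhere(state, pattern, replacement):
                produced[new_state].add(name)
    return produced

states = {"AB"}
for generation in range(3):
    produced = multiway_step(states)
    merged = {s for s, rules in produced.items() if len(rules) > 1}
    print(f"gen {generation + 1}: {len(produced)} states, "
          f"{len(merged)} reached by more than one rule")
    states = set(produced)
```

Already by the second generation, states such as "ABA" are reached by two different rules from two different parents, which is the kind of equivalence structure the conversation attributes to the Ruliad.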
That's the key part, again, to get to for other people.
I mean, it seems to me that the thrust of what you're arguing is this: there are all possible rules. There's computational irreducibility. But the world, as we perceive it, is the way it is because we're computationally bounded. And that's why, by the way, as far as I can understand, we experience time in your argument, but also why we experience the laws the way we do: because in this computationally irreducible, all-possible-rules Ruliad — I was going to say universe, but let's just say Ruliad — we are computationally bounded, and we therefore experience the reality that we do, with the laws that we do. Is that a fair summary?
More or less.
Yeah.
I mean, you know, it's kind of like you look at gas molecules.
You could say, you know, we experience gases as just, you know, pressure and temperature and things like that.
There is a different form of experience of gases that looks at, you know, I like that particular molecule.
and it's doing this dance with this other molecule and so on,
that's not how we experience it.
And, you know, our experience of the universe is very specific to our, you know, for example,
here's an example of something.
You know, we look around, you know, you're in a room.
It's some number of tens of meters across or whatever.
You know, we're seeing light that comes from the edges of the room.
That light is reaching us at the speed of light.
It's reaching us in, you know, some number of nanoseconds.
By the time it reaches us, it's reaching us very fast compared to the speed at which we process that scene.
So to us, we synthesize our view of the world as there is this thing that exists in a succession of moments of time.
If we were much bigger than we are, you know, if we were the size of planets or something,
we would take the speed of light much more seriously.
And if we had the same brain processing speed, so to speak.
So it's kind of, you know, our experience of the world is pretty specific to our construction, so to speak,
in our size and things like this.
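To put rough numbers on that point (a back-of-the-envelope estimate, not figures quoted in the conversation): light crosses a ten-meter room in about

$$t = \frac{d}{c} \approx \frac{10\ \text{m}}{3\times 10^{8}\ \text{m/s}} \approx 3\times 10^{-8}\ \text{s} \;(\sim 30\ \text{ns}),$$

while human visual processing operates on timescales of order $10^{-1}$ s, roughly a million times slower. For a planet-sized observer with $d \sim 10^{7}$ m, the light-crossing time grows to tens of milliseconds and is no longer negligible compared to the processing time.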
And so, you know, I think one of the things that there are, I think, two key aspects of the kind
of way we perceive the world.
One is that we have, we are computationally bounded.
We're not able to go in and sort of untangle what's happening to every atom of space.
And the second thing is that we have the idea that we have a definite thread of experience.
That is, we remember the past.
We know about the future.
We're thinking about things.
We're thinking about a single thread of time. We have a single kind of thing we're paying attention to, and so on. And I think that those are the two aspects of our
perception of the universe. And, you know, your average alien that we might meet and not be able
to understand might have a very different perception of those things and might have a completely
different model of physics. Yeah, that's what's just, yeah, that's a very interesting point.
It's one that I find very disturbing and, in my opinion, probably unlikely. Because what you're saying diverges, of course, from where you and I came from, which is to say, in some sense, that the laws of physics — the fundamental laws — are universal, that they are true properties of the universe, and that any system of intelligent beings will derive them. And what you're really saying is that, no, it's just a property of our consciousness. And moreover, our consciousness is kind of just a property of a limitation of a complex, computationally irreducible underlying system of abstract quantities. And so the universe is an abstract quantity. We're an abstract quantity, but not quite. And it really becomes quite ephemeral. And it really is intriguing.
But let me — we'll get there in a second. Let me ask you a question, though, because I was really interested in your claim here. Once more, as an old-fashioned kind of guy — and I'm an old guy — I just say, well, okay, what can you do with this?
And I was intrigued.
I mean, that's much more interesting to me than often the philosophical questions is,
what can you do that you couldn't do before?
And I was fascinated by the statement that you might be able to improve on numerical relativity with this.
And I think that's interesting.
But could it be that, just as string theory may not describe the universe, what it has provided is a set of tools that have allowed us to calculate certain other things in ways we couldn't have — so the utility of string theory, in my mind, is that it's given us tools to calculate certain physical things that we might not have been able to do otherwise. Is it possible that this is just — let's be realistic, it's beginning, and it's beautiful mathematics, it is beautiful mathematics — but is it possible that your different kind of mathematics, your new kind of science, might turn out to be not so fundamental, but rather just a more interesting numerical, computational way of handling physics problems? I mean, if it comes down to
that, would you be happy if it was just that? Well, I think, you know, look, this is all a big surprise
to me. Frankly, I didn't think this was going to work in my lifetime. So, so it's a, you know, it's, as far as I'm
concerned, it's all bonus, so to speak.
Okay. But, but, you know, the fact is that in the, you know, the question is, if we have a
model that sort of has structure all the way down, we can say, well, it's just a model.
And that's the nature of models. Models are formal representations of things.
Sure, sure. The only question about a model you can ask is, is it an approximation?
Or is it the whole thing all the way down? Yeah.
What it's looking like so far is it's the whole thing all the way down.
Now, an interesting question is how does it plug into lots of other mathematical physics,
you know, spin networks, causal set theory, you know, categorical quantum mechanics, all these
kinds of things.
Here's the really remarkable thing, and the thing that I think people in those different fields are really excited about: we're sort of a Rosetta Stone for all of those fields.
That is what seems to be happening is we've got a machine code that all those different
approaches plug into. So, you know, in causal set theory, for example, that's an idea where,
you know, you're just saying, there are these events that happen in space time, and you throw them down
at random, and then they have certain relationships between them. People in that field have been
a bit confused about, well, you know, there are issues about how random they can be and why they satisfy relativistic invariance. And to be fair, it's interesting, but it hasn't really gone very far — it hasn't really done much. What's happened now is — this is another Jonathan Gorard
production. What Jonathan did was to show how our models provide an algorithmic way to generate
causal sets. That's not too surprising. What's that? I guess that doesn't surprise me. I don't know why.
No, no, it's not surprising. Because it's relationships between points, right? Exactly. Right,
exactly. So, but in causal set theory, that's a theory of randomly thrown down events. Now we have,
this is an algorithmic generation of these events. When they are algorithmically generated in this way,
all these things that people had wondered,
how does this work in causal set theory?
They just work.
And so now there's a whole big adventure there
in doing quantum gravity using that,
et cetera, et cetera, et cetera.
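As a rough illustration of what "algorithmically generating a causal set" can mean (a generic sketch of the idea, not Jonathan Gorard's actual construction): run a deterministic rewriting system, treat each rewrite as an event, and add a causal edge whenever an event reads something an earlier event wrote.

```python
# Sketch: derive a causal graph from rewrite events (illustrative only; this is
# a generic construction on strings, not the actual hypergraph-rewriting model).

def causal_graph(state, steps):
    """Repeatedly apply the rule 'BA' -> 'AB' at the leftmost match.

    Each application is an event. An event causally depends on whichever
    earlier events last wrote the characters it reads, so the returned edges
    (cause_event, effect_event) form a causal graph of the evolution."""
    last_writer = [None] * len(state)   # event id that last wrote each position
    edges, chars = [], list(state)
    for event in range(steps):
        i = "".join(chars).find("BA")
        if i == -1:
            break                        # nothing left to rewrite
        for j in (i, i + 1):             # the event reads positions i and i+1
            if last_writer[j] is not None:
                edges.append((last_writer[j], event))
            last_writer[j] = event       # ...and then overwrites them
        chars[i], chars[i + 1] = "A", "B"
    return "".join(chars), edges

final, edges = causal_graph("BBAA", steps=10)
print(final)   # 'AABB'
print(edges)   # [(0, 1), (0, 2), (1, 3), (2, 3)] -- a small causal diamond
```

The point of the toy is only that the partial order among events falls out of the algorithm itself, rather than being thrown down at random as in the usual causal-set picture.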
And it's really very beautiful.
And it's something where it feels like
one's kind of got this thing that's a machine code
that's underneath these very elegant pieces of mathematics,
probably also string theory.
I mean, we don't yet know about that,
but I'm pretty sure that's going to be.
In fact, the one ridiculous pun
is that a sort of simplification of our models is not rewriting hypergraphs, but rewriting
character strings.
And I was writing a section about this, and I was writing, you know, about the case of strings. And I thought, I can't write that, because people are just going to be irreducibly confused. And then I thought, but let me actually think: what is the limit of these character-string theories?
And I realized, I'm pretty sure it's string field theory.
And that hasn't yet been proved.
But I think it's going to be the case that the pun is actually reality.
And so that string field theory ends up being a particular limit of kind of a simplified version of our model.
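A sketch of the simplification being described (the rule below is a toy, chosen only to show the mechanics): the same rewriting machinery can act on hyperedges or, in the degenerate case, on characters in a string.

```python
# Sketch: hypergraph rewriting vs. string rewriting (illustrative only; the
# rule below is a toy, not one of the Wolfram model's actual rules).

def rewrite_hypergraph(edges, rule_in, rule_out):
    """Replace the first hyperedge matching the pattern arity with new edges.

    Hyperedges are tuples of node ids; rule_out may introduce one fresh node,
    referred to by the placeholder name 'new'."""
    for k, edge in enumerate(edges):
        if len(edge) == len(rule_in):
            fresh = max(n for e in edges for n in e) + 1
            binding = dict(zip(rule_in, edge))
            binding["new"] = fresh
            produced = [tuple(binding[x] for x in out) for out in rule_out]
            return edges[:k] + produced + edges[k + 1:]
    return edges

# Toy rule {x, y} -> {x, new}, {new, y}: subdivide one binary edge.
rule_in, rule_out = ("x", "y"), [("x", "new"), ("new", "y")]

g = [(1, 2), (2, 3)]
for _ in range(3):
    g = rewrite_hypergraph(g, rule_in, rule_out)
    print(g)

# When every hyperedge is binary and the edges form a single chain, the same
# data is effectively just a character string, and a rule like this behaves
# like a string substitution rule -- the kind of simplification described above.
```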
And so it's a very different way of doing science, which one can say is in contradistinction to the way we do physics now, where it's a central part of physics that our models are not complete. In fact, the renormalization group says that no one model describes the universe at all scales — that as the scales change, the models change, and that's fine, and we live with that. And that's a central idea that governs the old-fashioned way of doing physics, which is the kind of way I think about it.
And this is completely different.
And in some sense, it shares with string theory that same claim — string theory would say it is a complete model at all levels, all the way down; there's nothing else.
We're a much more extreme version of that.
String theory is a much less extreme version of that.
String theory already has, you know, a lot of structure.
Yeah.
We're a much more outrageous version.
I mean, I think the thing to me are a much more outrageous version.
I'll agree with that.
I mean, I think that, you know, in the history of physics, you know, I was kind of
interested, I kind of traced this through as we were kind of presenting this project.
I was trying to figure out: when did physicists get so humble?
In other words, when did physicists stop believing that there would be an underlying theory of everything?
And, you know, the fact is that people believe that for a long time.
And it's only a comparatively recent thought that you can't — you know, people like Descartes believed that there would be a fundamental theory of everything. And I think the concept that you can't turn physics into mathematics — that there is something, you know, almost theological beyond physics, there is something out there
that we are not going to be able to turn into a thing that we humans can wrap our arms around, so to speak.
That's an interesting, almost, I would say, theological kind of concept of saying, you know, there's really something else out there.
It's not all just something that we can wrap our arms around and sort of make.
Well, maybe, but Feynman's argument might be, you know, that we can wrap our arms around it.
And it's like the layers of an onion.
Each layer, we have a mathematical way of wrapping our arms around it.
but then there's a new layer and it requires a new mathematical way.
And there's a new layer in that.
And then — Feynman said he didn't want to know all about it; all he wanted to do was understand the next layer.
And maybe that's a little more humble.
Let me remind you that actual onions in the physical world are finite.
Yeah, I know.
Eventually peel it all off.
And there's, I don't know what there is in the middle, actually.
I have to say, I haven't done it.
But there's something in the middle.
And then you're done.
And I think that's a, you know, it is a fundamental question.
I'll tell you what's in the middle, even. It's an atom of space, clearly.
Yes, well, quite. I think that one thing about atoms of space, though, that's a little bit
kind of disquieting is there's nothing permanent in the world. That is the atoms of space are
being rewritten all the time. So the only thing that's permanent is, you know, a spacelike singularity in the middle of a black hole — that's an atom of space that just got stuck. That's an atom
of space that's not getting rewritten. There's nothing more that can happen. Time has stopped.
So that's the only permanent thing, you know.
Well, let me ask you, I'm going to wrap up, but I can't resist.
You say that generically there's general relativity and generically there's quantum mechanics. Well, so far, except for the claims of string theory, we can't find a mathematically consistent quantum theory of gravity.
If both quantum mechanics and gravity are generic features of this, then this should be a theory of quantum gravity.
Absolutely. That absolutely is. And that's — I mean, that's the thing. Are you saying it is, or it should be? Are you convinced of it? I mean, I think it will be. And there's a bunch of people now working on trying to fill in those features. And, you know, I think that the big surprise is that in the end, general relativity is a kind of theory of how things work in physical space. There's this thing we call branchial space, the space of quantum branches. And quantum mechanics ends up being,
in our models the same theory as general relativity.
So the deflection in, you know, there's gravity in physical space
and mass and energy deflect paths of things in physical space.
That's what leads to gravity.
In branchial space, energy also deflects things.
What it deflects is geodesics, the paths in branchial space,
and that deflection, the kind of coordinates in branchial space
are essentially quantum phases.
So a deflection in branchial space is a change of a quantum phase, which is in fact exactly what the Feynman path integral of quantum mechanics is about.
So it's a, you know, there are many details to this, but I think the big picture is that's how it works, that these things are actually the same theory.
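For reference, the textbook path-integral expression being alluded to (standard quantum mechanics, not a formula specific to this project) assigns every path a phase determined by the action:

$$\mathcal{A}(a\to b)\;=\;\sum_{\text{paths}\ \gamma:\,a\to b} e^{\,i S[\gamma]/\hbar},$$

so the claim that energy deflects paths in branchial space, with branchial coordinates playing the role of quantum phases, is a statement about how these $e^{iS/\hbar}$ phases are organized; the correspondence itself is the project's claim rather than established physics.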
And so, you know, the effort right now, I mean, a bunch of people are working on this is to try and fill in.
So what happens?
You know, what is that interface like between kind of the quantum side and the gravitational
side, how does it relate to AdS/CFT, how does it relate to ER = EPR — all these kinds of things that have been popular in physics. It's looking like — you know, it's fairly clear that the correspondence in something like AdS/CFT is a correspondence between physical space and branchial space, that these two things are part of the same object.
Well, you know, I know you're excited, and it's interesting. I understand there are a lot of things that are "looking like" and general ideas, which I've now learned a lot more about, because I wanted to prepare with some knowledge to be able to discuss this with you reasonably competently — otherwise I would be wasting your time. But it still seems to me it's premature. I understand that you're excited, but as you say in one of your arguments, despite these developments, fundamental physics always seems to resist this advance that we're making.
And I suspect the resistance is still the challenge, okay, show us something we didn't know.
And that's still the challenge.
And you're right, we don't know where it's going to go.
And I think it's...
I think the things that we're looking at, and they may not be the right things to look at,
the first thing we're trying to do, which is a typical thing in history of science,
is: can the new theory reproduce what the old theory said? Sure. And in fact, we're doing better than that. We're actually making practical methods
for computing things in the old theory, using the new theory, and doing it better. That's kind of a
Copernicus story, or whatever else. No, that's great. Are you doing it better, or is there potential to do it better — or has that been done? Actually, it seems like, okay, in the case of quantum circuit optimization, we're definitely doing it better. In the case of numerical relativity, that's still a bit mushy. That's still a bit mushy — I kind of figured that. Okay. But I think the question is: where will the first places be where we can actually see definitive new effects? So dimension fluctuations are one thing. Is there a place we can see
that? Another one is just as there's a maximum speed of light, in our models there's a maximum
quantum entanglement speed. And I increasingly suspect that in quantum many-body systems, it may be possible to see the maximum entanglement speed — that we may not be too far away. You know, the problem is, in our theories, quite possibly the elementary length is something like 10 to the minus 100 meters.
So that's really small.
It's really small compared to what we can detect.
So, you know, another case which looks promising is a critical black hole:
when a black hole is spinning fast enough that it's almost revealing a naked singularity and so on,
right at the point where it's kind of at its critical, you know, angular momentum,
that essentially we have a gravitational microscope — that we can essentially see through to individual causal edges in the structure of spacetime. What will happen is, as the black hole spins faster, essentially a piece of the universe will break off. And right at that point, just before it breaks off, we'll see this thing where we can kind of see through to the molecular dynamics. You know, it's like a fluid: how do you know that a fluid is made of molecules? Well, you have to discover Brownian motion. Or you could be flying a space shuttle going at Mach 25, and you could realize that the hydrodynamics you might have learned doesn't work at Mach 25 — it matters what molecules there are. So now the question is, what's the analog of hypersonic flow for black holes and things? And, you know, there are pieces like this, and I don't know which of those will break first, so to speak.
And maybe there'll be, I mean,
we have only one parameter in our models, basically,
which is the maximum entanglement speed,
equivalent to the elementary length,
equivalent to all these other parameters.
If we knew the value of that, we would be able to make a whole bunch of predictions of
things.
Now, there may be predictions which are hopeless to observe with current technology, but we'd know
what was going on.
And we just need that one parameter.
And I don't know where we'll be able to get to that.
Well, okay.
Well, it's ambitious.
And it's clear that you're excited about it.
But the thing that has most surprised me recently about this physics project is the following.
So I thought, you know, we do this physics project.
It might be interesting for physics.
If there's an application of it, it's 200 years in the future.
You know, we're not even close.
It's — you know, I wrote a thing about going faster than the speed of light, using kind of Maxwell-demon-like methods in space to go past the speed of light.
And it's like, there is no way, you know, even if this works, we're 200 years away.
Okay.
And then, you know, maybe that's a very good way to think about this, because physics came out of natural philosophy. And, you know, you could argue philosophy turned into physics. And as you pointed out — and I don't mean this in a pejorative sense — when I read what you're writing, and I think about what you're thinking, you may be at the stage of producing philosophical pictures, and it may take a long time to discover if they're physics pictures, I guess.
You see, that's not what's happening.
I mean, by the time you're running black hole merger simulations, that's not philosophy anymore.
Okay, yeah, no, that's true.
If you're right, you're right.
And if you can do that, as I say, I'm going to continue to be a skeptic in the sense
that I'll say, this is a really useful numerical tool.
Sure.
And that's great.
And that's wonderful discovery and very useful.
And it's clearly a new way of thinking about how to handle manipulating spacetime. Whether it is a new picture of reality, I'm going to, you know, I'm going to still —
I mean, it's like people would have said that about Planck when he fit the blackbody spectrum with photons. Even Planck said that about Planck. He didn't know whether the photons were real or whether this was just kind of a trick.
Yeah, but the fact that we don't know yet doesn't guarantee.
But, I mean, it's always nice to make analogies to wonderful breakthroughs.
And this may be maybe a useful breakthrough and it may be a wonderful breakthrough.
But I think the jury's still out.
Well, I mean, I think that the thing that to me is most interesting.
So the first point is: so far, no kludges.
That's really big.
Yeah.
It's like it wasn't necessary that that would be the case.
It could be that, you know, as we investigate and do a bunch of complicated math, it's like, oh gosh, it's got to be 26-dimensional.
Whoops.
Nothing like that has happened.
So that's remarkable to me.
The second point — the thing that is kind of the most interesting to me right now — is the underlying sort of meta-model that we're using, which
I'm calling multi-computation, which is this whole business about multiple threads of time and
all this kind of stuff.
What is really remarkable to me is that meta-model is turning out to be applicable to a ton
of other things, to metamathematics, to distributed computing, looks like to chemistry, molecular
biology, possibly to economics, possibly to linguistics.
Okay.
Why do we care? The reason we care is that we get to leverage physics in those areas. That is, if you want to make a model of economics, or you want to make a model of molecular biology, right now you don't get to talk about time dilation and spacetime singularities and things like that. But if there is the same underlying meta-model that applies both to physics and to these other fields, you get to transport kind of the successes of physics to these other fields. So even if it turns
out that, you know, we didn't make it all the way to the bottom, that this isn't the final
sort of theory, so to speak — that there's still another layer of the onion, which I'm having a hard time understanding where that layer would be, but be that as it may. I mean, I'm enough of a student of the history of science that I'm well aware of cases where it seemed there wasn't another layer of the onion, but we just didn't know where to look for that other layer, so to speak. But I think this is
the thing that being able to see these correspondences with other fields, this is going to be super
powerful. And it doesn't really matter. At that point, it's basically just using, it's doing
something, which is, again, not what I expected. It's leveraging the success of physics to make
physics-like models of other fields. Well, if that's so, that would be useful. Again, I have to say, in some ways, I'm more skeptical. Let me tell you the reason. When I was a kid, I remember taking a sociology class.
And I suddenly thought, oh, they're using all these terms like physics terms.
Maybe we could, because I was always interested in science, maybe we could use physics to create,
you know, really good science of sociology.
And then I realized it's just analogies that don't work, and social systems can't be treated that way.
So, you know, biologists have told me — well, I know why physics is so much easier: because you can generalize. Whereas with biological systems, you often can't generalize. Each system is quite different — each cell, each organism — and it's much more difficult in biology to make the kind of beautiful generalizations we make in physics, which is one of the reasons why it's so much harder. So I'd be surprised if it works.
of inspirational ideas is this. If you look at genetics before 1953, it was a mess. Of course.
People were saying there were all these effects, et cetera, et cetera, et cetera. And then there was an
idea, which was a single molecule can store a whole bunch of digital information, DNA.
And that then makes that whole area much clearer.
So right now, one of the issues is in molecular biology, there are all these processes,
and you can kind of look at all these giant wall charts of, you know, all these different
kind of signaling pathways and all these kinds of things.
What is the big picture of what's going on?
What actually matters?
And I suspect that there's a different thing that matters that has to do with causal graphs
and all kinds of things like this,
that it's just something like the realization that, oh, actually a molecule can carry digital information.
There is something different that can matter in molecular biology,
and that becomes kind of a paradigmatic change
that then enables a lot of things.
But, you know, I'm curious to ask you, you know,
if you say, you know, the question is,
is there a bottom level, so to speak?
In other words, what, you know,
we have a model for physics, let's say,
and it reproduces everything we know right now,
and maybe it makes some predictions
that turn out to be right, et cetera, et cetera, et cetera.
What would or wouldn't convince you
that we're done, that that's it?
What would convince you that there's nothing left?
There's no more miracles, so to speak.
There's no, there's, you know,
because that's what, in a sense, when we say,
there is a physics, there is a rule for the universe,
you say, well, whoops, there might be a miracle that happened,
that doesn't follow that rule.
I guess, yeah, well, I guess I'm a little,
you know, it's a good question.
And I think, I'd have to think about it more carefully to give you a real answer.
But I think the first answer might be somewhat similar to what you would say.
If there were no adjustable parameters — if you could reproduce it all with no adjustable parameters — then I'd be much more willing to suspect that it was a complete theory.
Right.
So what I think is going to happen is that in this Ruliad of all possible physics is effectively,
we, just as we live at a particular place in physical space and not in another place,
so we live at a particular place in Rulial space and not another place.
And so in a sense, the theory is going to say, this is the space of all possible theories,
the particular one that we're at.
We're going to have to say why. You know, it's like saying: derive from first principles why we live on Earth rather than Alpha Centauri. You can't derive that from
first principles. It's not the kind of thing you can derive from first principles. And so I think
similarly, it's going to be the case that what we're going to find is we live at this place in
rulial space. We can say why — you know, we can give evidence for why that's the place we're living
at, but we're not going to be able to derive from first principles why the universe appears to us
the way it appears to us. Well, that's very similar to the, to them, not only to multiverses,
but to anthropic arguments in some sense. It really,
there's a lot of, I mean, from a very different path,
including the path that I've taken,
you come up to a somewhat similar argument
that there may be nothing fundamental about our universe.
It's one of, in fact, actually, I was intrigued
by one of your conclusions, which is,
why does the universe exist?
Because more or less, because it can.
And ultimately, something has to.
And in some sense, it's not too different
from a multiverse idea. You know — I mean, that's another conversation that's two hours long.
I mean, this is a, this whole issue about sort of necessary truths and the fact that there are formal things that just by the definition of those things have to be that way is a little bit different from physical arguments about how, you know, you can look at a space of parameters and so on.
It's a, it's a different kind of thing.
It's a, I think it is a significantly philosophically different thing.
But it's a, you know.
Well, I would like to have that conversation again.
I mean, first of all, I thank you for taking the time.
I've always, you know, I've known you for a long time.
I've admired you as well, because you were able — I mean, in particular, you know, there was particle physics, but then you took this thing and really made something real, Mathematica. And, you know, I always admire people who do things that I couldn't possibly imagine myself doing.
And that's one.
And so I admire the dedication that you've given to this trajectory you're working on.
And it's a noble and ambitious trajectory.
And most noble and ambitious trajectories don't succeed.
But I hope for your sake you do.
Because, you see, what I'm doing — I've basically done basic science and I've done technology.
And I've alternated between those things about five times.
And it's turned out that the place that I've got to could only have been reached, I think, by the path I've taken, which is just so weird.
And it's so, you know, I mean, for example, this physics project, there are so many ways that this project would never possibly have happened.
And, you know, because it requires both having the tools and the knowledge and the, you know, knowing physics and knowing about sort of theoretical computation, et cetera, et cetera, et cetera.
And, you know, it's the question, it's sort of an interesting experience that I've had because I've had, you know,
I think after my very first paper, for everything I've done since that time, it's turned out I had the right intuition. So, you know, it could be wrong — but I have to say, of all the different things I've done, this is a place where I am more certain than ever before; just too many things fit together.
This is not one of these things where it's a put-up job — you know, where you've got to say, well, I've got to tweak this to get quantum mechanics.
I've got to tweak that to get, you know, event horizons or something.
It all just comes out.
And it's very, it's really surprising.
I mean, it's not what I expected.
I, you know, what I expected was, you know, I had thought about these ways of thinking
about sort of underneath space and time and so on.
I thought that, you know, in the next 50 years, we would be at the point of being able to
tweak little pieces and we have some understanding what was going on.
The idea that we actually get to the point of being able to make real statements that can be compared to, you know, actual experiments and so on —
I thought that was far, far away.
And it's ended up being much, you know, I think much closer.
Now, you know, to me — I would pose to you, as a piece of kind of philosophical homework, because I think it's interesting: you, I suspect, believe in some kind of idea of induction as a way to deduce what's true about the world.
And the question is, with induction, you never reach the end.
You can never know that you got to the end of the whole thing.
And so I'm curious, and I'm poking you a little bit because I'm saying basically,
I think that any claim that you haven't got to the end is essentially a theological claim.
And I think that that's something where unpacking that, I think it's sort of interesting.
Because it's like, how do we know — you know, we've done a bunch of experiments, we can do this — how do we know we got to the end? And, you know, I think it's not obvious what the answer to that should be. And in a sense, I think one thing you have to understand is: what is a model of physics? So, you know,
the physical universe is a model of physics. It does what it does. Yeah, of course. And so the question
is, to make a model of physics, what are we doing? A model of physics is something
where we humans can wrap our brains around something which gives us a narrative that explains
why the universe does what it does.
That's, in a sense, it's like computational language design.
We have to figure out there's this thing out there that's the universe and then can we have
this language for describing it that makes us convinced that we understand what's going on.
I think as you unpack that — I don't know, because I haven't done it — I'm guessing this question about induction will come up. I mean, I actually had a suspicion for a while, and it may still be correct, that from the structure of this Ruliad you could essentially prove limits to scientific induction — that certain aspects of decidability in this thing would essentially be a proof that there are certain kinds of things that are unknowable to scientific induction,
and that's —
That'll be fascinating. I guess, I know when I was thinking about — when I just threw off the notion that our universe is an analog model of itself — I guess the difference is you'd say it's a digital model of itself, I suppose. Is that correct?
Yeah, and that's a big difference. Well, look, you know, I think one of the reasons for the difference is maybe I haven't come to the question of how do you know if you have a complete theory. I guess right now you think you're much closer to that point.
And only when you get close to that point,
does that philosophical question become relevant?
If you think you're very far away, it's not so relevant.
So I guess that's why, you know, a Philistine like me,
I just figure I'm so far away that I haven't worried about that philosophical question yet.
Right. Right. Well, I mean, you know, and similarly, for me,
this question of why this universe and not another is not something that I had really thought about,
until you imagine you might hold in your hand a model of the universe,
you don't really care why this one or not another.
You know, it's clearly an interesting question; I don't know if it's true. I remember when I wrote A Universe from Nothing, I think you contacted me and were interested in that question. I think I remember you wrote me an email, and maybe then you were beginning to think about those issues. I don't know. Sounds plausible. Yeah.
I mean, I was, I was, I've been, you know, I have to say it's been, it's been an issue that's
been bugging me for a long time. And I was just really surprised that I had anything useful to say about
it. And that was, you know — anyway. I hope I've done justice to the listeners who have been patient enough.
And I realize there's some areas where we skirted over things that are maybe technical and some people may have been lost.
But I hope we were able to give the flavor and give you a chance to not only talk about the new work you're doing,
but also to see the unique trajectory of you as a human being, which is fascinating.
And I thank you for sharing that.
And I hope you've enjoyed it.
It was a fun chat.
Nice to chat.
It was good. Oh, good. I'm glad you found it to be nice, Stephen. And I hope we can be together in the real world or the virtual real world, depending upon whether it's... Yeah, someday, someday. Someday. Where are you? You're somewhere in Canada? I'm somewhere in Canada — on the east coast of Canada, but I'm sort of keeping it a little bit of a secret. But I'm in the most beautiful part of the world. I'm somewhere in Massachusetts.
Yeah, I'm close by, and when we're offline, I'll tell you where it is, and I hope you'll come visit us.
Fair enough. Okay, take care.
I hope you enjoyed today's conversation.
You can continue the discussion with us on social media and gain access to exclusive bonus content by supporting us through Patreon.
This podcast is produced by the Origins Project Foundation, a non-profit organization whose goal is to enrich your perspective of your place in the cosmos by providing access to the people who are driving the future of society in the 21st century, and to the ideas that are changing our understanding of ourselves and our world.
To learn more, please visit originsprojectfoundation.org.
