Theories of Everything with Curt Jaimungal - AI Expert on the Dawn of Conscious Machines | William Hahn
Episode Date: October 8, 2024William Hahn is Director of AGI & AI Safety and the founder of Hahn AI, a company that develops cutting-edge AI solutions. William is a technologist and researcher, specializing in the intersection of... artificial intelligence, programming languages, and the nature of consciousness. SPONSOR (THE ECONOMIST): As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Join my newly launched Substack: https://curtjaimungal.substack.com LINKS: - Snow Crash (book): https://amzn.to/3zYqJb9 - Center for the Future Mind (website): https://www.fau.edu/future-mind/ - Archive of Alan Turing’s papers: https://turingarchive.kings.cam.ac.uk/ - Richard Hamming’s lecture series: https://www.youtube.com/playlist?list=PL2FF649D0C4407B30 - Iain McGilchrist on TOE: https://www.youtube.com/watch?v=M-SgOwc6Pe4 - Gregory Chaitin on TOE: https://www.youtube.com/watch?v=guQIkV6yCik - Mindfest playlist: https://www.youtube.com/playlist?list=PLZ7ikzmc6zlOPw7Hqkc6-MXEMBy0fnZcb - Susan Schneider’s website: https://schneiderwebsite.com/index.html - Susan Schneider’s Google talk: https://www.youtube.com/watch?v=mwVKXKlU1GU - William Hahn’s short course series: https://www.youtube.com/playlist?list=PLKoZnCEAIkvkvyVpbqx71EMT8BLpD6Oaq - Ekkolapto’s Polymath project: https://www.ekkolapto.org/polymath - Ekkolapto’s event page: https://ekkolapto.substack.com/ - HyperPhysics website: http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html - Joscha Bach and Michael Levin on TOE: https://www.youtube.com/watch?v=kgMFnfB5E_A - Stephen Wolfram on TOE: https://www.youtube.com/watch?v=0YRlQQw0d-4 - Curt on Julian Dorey: https://www.youtube.com/watch?v=Q1mKNGo9JLQ Support TOE on Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) TIMESTAMPS: 00:00 - Introduction 02:30 - AI’s Impact on Language and Human Thought 05:10 - Mind as a Programmable System and Historical Metaphors 08:45 - Society of Mind Theory and AI Agents 11:30 - Consciousness, Awareness, and Metacognition 15:00 - Free Will, Emotions, and Unconscious Programming 18:40 - Brain as an Immune System and Handling Unthinkable Thoughts 22:50 - Informational Parasites, Memes, and Nam Shub of Enki 28:15 - AI Security: Vulnerabilities and Protecting Minds 33:00 - The Cultural Shift: AI’s Influence on Psychology 37:45 - Historical Secrecy in AI and Government Role 42:30 - AI’s Evolution: Role of Data, Hardware, and Differentiation 47:20 - Speculating on Hidden AI Capabilities and Advanced Systems 51:10 - Richard Hamming’s Insights on Learning and Ambiguity 57:10 - Revisiting Ancient Knowledge and Advanced Civilizations 01:03:30 - Artifacts of Ancient Technology and Modern Interpretations 01:09:10 - Defining Meaning, Spirit, and Information in AI 01:14:35 - Wolfram’s Physics Model and Emergent Computation 01:20:00 - Computational Models of Consciousness and Mind 01:27:28 - Wolfram's Symbolic Language and Analog Computing 01:34:00 - Agent-Based Programming and AI Evolution 01:39:10 - Knowledge Gaps and Flat Earth as a Metaphor 01:45:00 - Synesthesia, Music, and Human Perception 01:52:24 - The Intersection of Software and Hardware 02:00:02 | Complexity Crisis in Modern Technology 02:06:00 - Optical Computing and AI's Future 02:12:08 - Philosophical Reflections on AI and Consciousness 02:20:00 - The Amorphous Boundary Between Software and Hardware 02:28:00 - Technology, Religion, and the Need for a New Understanding 02:37:15 - Outro / Support TOE #science #sciencepodcast #ai #llm #artificialintelligence #consciousness #agi Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
Take back your free time with PC Express Online Grocery Delivery and Pickup.
Get in-store promos, PC optimum points, and more free time.
And still get groceries.
Shop now at pcexpress.ca.
So what's it like to buy your first cryptocurrency on Kraken?
Well, let's say I'm at a food truck I've never tried before.
Am I gonna go all in on the loaded taco?
No, sir.
I'm keeping it simple.
Starting small.
That's trading on Kraken.
Pick from over 190 assets
and start with the 10 bucks in your pocket.
Easy.
Go to kraken.com and see what crypto can be.
Not investment advice.
Crypto trading involves risk of loss.
See kraken.com slash legal slash ca dash pru dash disclaimer
for info on Kraken's undertaking to register in Canada.
Professor William Hahn is an associate professor of mathematical sciences and a founder of
the machine perception and cognitive robotics laboratory as well as the Gruber AI sandbox.
Both you and I will we met at mindfest at Florida Atlantic University a few times and
a link to all of those talks on AI and consciousness are in the description.
Will, please tell me what have you been working on since we last spoke?
Well, first, just want to say great to see you and really happy to be joining you on
TOW today.
Really excited.
You've got such an amazing community.
Same, man.
It's been a long time coming.
Thank you.
I'm working on a whole bunch of different things.
The thing that's been in my mind the most is this idea of info hazards and
In particular this this theme I've been bouncing around called lethal text
Okay, let's hear it
well, so as everybody knows, you know AI is here and
Everybody is kind of prepared for the technological revolution that we're witnessing.
But I think the more interesting developments are actually going to be in our mind.
They're going to be the changes in language, how we think about language, how we think
about ourselves, and how we think about thinking.
How we think about language.
What do you mean?
So everybody, I'm sure, has gotten their hands on one of these large language
models at this point. And they have just absolutely revolutionized the way we are
thinking about words, the way we're thinking about language. And as people
might be aware, it's now becoming possible to program a computer, largely in English,
that we can ask for computer code at a very high level, things people dreamed of back in the 50s.
And now it's possible to just describe what you want the computer to do,
and then that behind the scenes is getting converted into runnable computer code.
But I think that now forces us to think about was language always a programming language?
Is our mind something like a computer?
Not in the obvious sense of transistors and gates and that sort of thing, but is it a
programmable object?
And if so, how is it programmed?
So where do you lie on the is a brain a computer question?
I think the computer metaphor is probably the most powerful
that we have so far for understanding the mind.
And what's interesting is if you go back
through the history of technology,
every time there was
a revolution in the mechanical world, let's say, we adopted a new metaphor for how the
mind might operate.
And so in the ancient world, it was dominated by a clockwork universe, the idea that the
world was made out of cogs and gears and things like that. And then later we saw things like the emergence of telegraph networks and switchboards.
And at certain times we saw the emergence of things like steam engines.
And we actually still have this thermodynamic hydraulic view of the mind is still residual
in our language.
We talk about people being hot-headed and have a head full of steam
and they need to cool down and so on.
And we still use these sort of thermodynamics metaphors.
And a lot of people would argue,
well, the computer is just the current metaphor.
It's the metaphor of the day.
And that will change it as we go on.
But the thing about computers that Turing showed
is there's a kind of universality.
That computation is the limiting result of any technology.
If you take your car and you make it sophisticated enough, it turns into a computer.
If you take your house and make it sophisticated enough, it turns into a computer, and so on.
Almost every technology, if you improve its capability and the sophistication, eventually
you're going to run into this notion of universal machine.
The idea that the mind approximates the universal machine of Turing, that it's a machine that
can simulate any other machine, given the appropriate programming, I think is something we need
to consider.
So what unifies clockwork, telegraph networks, and thermodynamics is computation?
Exactly.
We can all see those as intermediates, as sort of proto computers or different aspects
of communication and computation, and that the end result, the limiting result of all
of those would be the
computer as we know it today.
There are different computational models of consciousness.
Are all of them the same to you or do you see pros and cons with different ones?
Um, you know, there's so many and there's probably a new one invented every afternoon.
There's a few flavors that I'm a big fan of.
And, you know, I like the saying, all models are wrong, some are useful.
And so I don't think any of these will ever actually capture the full scenario,
but they're sort of the best that we have right now.
And...
Let's be specific. Let's pick the one that is your least favorite and then the one that's your favorite.
Well, one of the ones that's my favorite is the idea of society of mind.
Marvin Minsky's proposal that the mind is really a collection of, you know,
he threw around, he threw a number about 400 agents.
I don't think the number is particularly important, but the idea is there's a bunch of them.
And what's interesting is we're starting to see that emerge now with these language models
that in the background of the newest ones, they've actually bifurcated themselves and there's a
dozen little microagents, each with a separate prompt, a separate goal, a separate unique way
of looking at the world, and then they have a conversation in the background. And then when
they make a final output, it's kind of a consensus amongst those agents.
And I think that's probably a good approximation for how our brain works,
in that we have all of these competing agents.
And some of them are trying to meet new people, some of them are trying to find something to eat,
some are trying to see interesting visual stimuli, and so on.
And that when we choose a behavior or have an action, even like, you know, producing
a sentence, it's probably the result of multiple of those agents coming together.
I liked, you know, Minsky takes this a step further with the idea of emotion.
And I think a very interesting take is emotion's not really a thing, it's the absence of certain things.
It's turning features off.
And he describes that when you're hungry, for example,
that your ability to long-term plan,
or to even think rationally, gets turned off,
and you're very hungry.
When you're angry, your ability to care
about other people's feelings and consider their viewpoint gets turned off.
You're no longer running that agent.
You're sort of in a dynamical ensemble, prioritizing these different agents as we go through these different emotional states.
And so I think that's an interesting way of looking at our behavior.
And I think we're going to need those kinds of theories when we try to put intelligent know intelligent behavior into machines as I think we're seeing gonna see right around the corner that sounds to me more like an
explanation of mind or the mechanics behind mind and not an explanation as to how consciousness comes about from
computational systems
Yeah
You know, I've got a lot of ideas in that and a lot of them are at conflict.
I like to tolerate ambiguity and so I have a few of these ideas that I like to just kind
of keep juggling around.
One of the things that comes to mind is I really like Sidney Brenner's approach, the molecular biologist, and he
had this really interesting take about consciousness.
He said that the discussion is going to go away.
He said that in a few decades, the idea of consciousness will kind of just disappear
from the scientific conversation and that people will wonder what we were talking about
all along.
And I really like that idea. I don't know if I believe it or even want it to be true,
but something about it resonates with me
because I think we're gonna start to see
something like proto-consciousness
or something that will be more convenient
to describe as consciousness in machines.
And we're gonna force ourselves to consider the hard problem and other aspects that, you know,
plagued philosophers for so long, they're going to be laid out in front of us in a very concrete way.
And the great minds before us didn't have the opportunity, or rather they didn't have the language of objects
like LLMs or bits or computational process.
They didn't have those, that terminology
for which to frame their thinking.
And one thing that comes to mind is this classic question
of the redness of red.
Well, we're going to build
machines that will probably be able to talk to us in natural language about the infraredness of
infrared or the ultravioletness of ultraviolet. That we have such a narrow perceptual window
and cognitive window that when we talk about consciousness, I tend to think of it as sort of a
spotlight that moves around. But with such a narrow
beam, it almost be more like a laser pointer.
Because if I'm conscious of red, well then I'm not
thinking about my toes. And if I'm thinking about my
toes, I'm not thinking about my childhood. And if
I'm thinking about my childhood, I'm not thinking
about the future and so on. That kind of like how
vision saccades around the world, our consciousness also sort of jumps
around and saccades.
And we get this kind of holistic picture, but it also is fleeting and constantly changing
the subject of that Cartesian theater, if you will.
And so, you know, I'm fascinated by how we're going to
expand that notion by looking at machines that have lots of sensors, that
have internal states, that are thinking about their thinking before they answer
in English. And we're gonna be able to ask them, well, what do you think about
red? And it's not that far away before they will be able to have at least consumed a strawberry
in a rough sense, right?
We have elaborate olfactory sensors.
It makes me think of we know what ramen soup tastes like, but I don't know what ramen scattering
tastes like.
They have these little handheld machines that measure the vibrational mode of molecules
and you can detect the presence of chemicals without opening the jar.
If we put that into a system and give it a large language model and a rich historical
experience and it will remember the first time it encountered strawberries and it states
when it did so, who are we to say that that's not a conscious being in some sense?
Okay, plenty of this depends on the definition of consciousness, and I know that that's an implicit problem with the hard problem.
So how do we define consciousness?
Something I put out on Twitter recently was, is awareness a necessary condition, a sufficient condition, both or neither of consciousness.
So what would you say?
Yeah, I think awareness is definitely going to be a necessary condition.
And I think you're going to have to have awareness of awareness.
Some sort of metacognition where the system knows it's not just thinking, it knows that it's thinking.
And it's able to think about its thinking.
That's tricky then,
because we could then say some animals are not conscious
because they're not self-conscious.
What do you say to that?
I imagine you can feel without thinking about your feelings.
Yeah, I mean, I think that's what's just so interesting
is trying to parse out those distinctions,
because they certainly have feelings
in some sense of a sensory loop,
but whether they're aware of that, it's not obvious.
Or at least they're aware of it at the first level,
but they're not aware that they're aware.
And I don't know if we are. I don't know if I am.
Certainly most of the time I think I'm not.
There's just not enough extra processing power.
I think maybe it's just because of our daily lives consume so much of our brain power.
If we were like sort of the philosopher sitting on the sofa, we could just like an ancient
world, I mean, we could have more access to that.
And that's one thing I've been very interested in is going back to the ancient world and
looking at how people thought about things.
Because our modern world is just so inundated with certain things that we have to think
about all the time, we don't get much sort of bandwidth to think about the thinking.
I think that's what's great about your channel, you know, you force people to do that.
Thanks. Well, that's also what's not so great about the channel.
So, you said you could be aware of something, but not aware that you're aware.
That also reminds me that you can know something, but not know that you're aware. That also reminds me that you can know something but not know that you know it.
I think it was, I think it was Schopenhauer said a man can, or a person
can do what they will, but they can't will what they will.
Right.
And so we think we have this freedom of choices and action, but where do those,
you know, are there agents in there that are choosing those behaviors?
That's one of the things I've been very fascinated about
is this idea of our mind being hijacked by systems
that are choosing our behavior below our threshold of awareness.
So there's a classical psychological experiment
where you can sort of puff an air stream into someone's eye to make them blink.
And you can, in an associative training,
get that to match with a stimulus,
like a little red light turning on.
Interesting.
And so people, like a Pavlov dog with the bell,
they can learn to instinctively close their eye
when the light goes on,
because they know that the air blast is gonna come on.
But what's interesting is you can get people
to learn that association, and they have no idea they've learned it. the air blast is gonna come on. But what's interesting is you can get people
to learn that association
and they have no idea they've learned it.
So it's sort of a completely unconscious programming.
Now imagine this would be very powerful in marketing, right?
You saw someone a logo
and they wanna go out and buy a bag of chips.
Are we susceptible to that sort of thing?
And I suggest that we are,
and that maybe that's just a general phenomenon,
that maybe a large percentage of our behaviors
are chosen at a level that which we don't have access to
and would take a lot of work, if at all possible,
to get access to.
Earlier you talked about emotions can be
not on switches, but off switches.
And in one respect, that's odd to me because there's much more off that you would have
to turn this many more switches you'd have to turn off than you'd have to turn on.
So to conceptualize it as an as an off model is odd to me.
Exactly.
It's akin to saying the electrons not an electron, it's an off of the quark and the photon and
the so on.
Like, okay, or you can think of it as an electron, an on of an electron.
But it doesn't matter.
So you can feel free to justify the, I believe it was Minsky who thought it was off.
You talked about something else being off.
So then it makes me think, do you think of free will as not free will but free won't?
And that's one of the ways that we can save free will.
Yeah, that's an interesting way to think about it.
That maybe we don't choose our behaviors, we choose all the things we wouldn't do.
And that gets to my idea that I've been thinking about a lot lately as this idea of immune system
and how it relates to mind and consciousness. And it started by, I was looking at the immune system as a kind of computational system.
And thinking about how our immune system acts kind of like a brain.
It has a memory and it's able to execute certain behaviors based on its previous experience and so on.
But in that process, I started to run it in the other direction.
Rather than thinking about the immune system like the brain,
I started to think of the brain like an immune system.
In particular, I think that one of the things that the brain tries to do or the mind tries to do
is to protect us from thinking unthinkable thoughts.
Thinking thoughts that would change our emotional state,
disrupt our behavior pattern,
and in the extreme sense, you know, be lethal.
Maybe not in a physical way,
but lethal to our personality, to our notion of self.
So there's certain thoughts that we don't want to think about, we don't like to think about.
Maybe it's the loss of a pet when we were younger. Maybe it's the loss of a loved one or a family member.
Maybe it's anxiety about the future. That in general, if we let our mind get consumed by these thoughts,
at a minimum you're going to have a bad day.
And it's going to be hard to see the opportunities in front of you.
And so I think one of the things that a healthy mind is able to do
is develop mechanisms to prevent us from going into these runaway spirals.
Whether it's anxiety, depression, hyperactivity, whatever it might be,
our mind is trying to modulate those runaway trains.
And if we don't, then we can be subject to mental illness, essentially.
And if we take that idea seriously and zoom out, then we can be subject to mental illness, essentially.
And if we take that idea seriously and zoom out, we have to imagine a class of ideas that in general our mind is trying to keep us away from.
When it comes to our immune system, it's useful for us to be exposed to what is deleterious, especially at a young age,
to strengthen our immune system. And then I imagine repeatedly, but in smaller bouts,
as you're an adult, do you think that that is the analogy to you encountering something
that's psychologically uncomfortable in order for you to build some amount of resilience so
that you can encounter the world, but then not too psychologically uncomfortable otherwise
it destroys you? Yeah it's a great question and maybe that's why we're
attracted to the types of things that you see in cinema where we watch stories
about loss and we lost we watch stories about really dramatic events that have
happened to other people.
It reminds me, I was just at Home Depot
and they have all of this Halloween stuff set up.
And almost everything there you could think of
is sort of memento mori.
And maybe like the salt,
or building up the tolerance to the poison,
taking a little bit at a time,
having that memento mori
helps us deal with our own mortality,
right?
It's something that can be largely overwhelming if we think about it too much.
But maybe by encountering it in little bits, you know, that allows us to deal with it,
which could be why it's so, you know, pervasive in our culture.
Now what's the point of learning to deal with your
mortality in order for you to deal with your mortality? That sounds like it's
paradoxical. Learn to deal with your mortality so that you can die so that
you could prevent yourself from being overwhelmed by your death so that you
don't die? Well maybe it's just sort of a breakdown of the immune system that
there's some mechanism there that wants to break through
and sort of taste these ideas that you're not supposed to think about,
or in general, other agents, other modules in your mind, so to speak,
are trying to prevent you from thinking about.
So one of the things that this led me to,
thinking about these unthinkable thoughts and our mind as a kind of immune barrier
is the type of vulnerabilities that ordinary organisms, physical organisms have
in terms of being taken over by external forces, let's just say.
And so it led me to the idea of looking at informational parasites.
Informational parasites. Informational parasites?
Yeah. So the idea that there's sort of information that if it gets into our brain, it will self-replicate,
persist, and essentially go viral. That we will be...
How's that different than Dawkins' mind virus?
I think it's very similar. I think it's very similar. So his idea of the meme in general, I think, is the example of this.
And as I was mentioning earlier, these words, like meme,
weren't available to the best minds a few centuries ago as part of their repertoire.
Now we know what a meme is.
We know what it means to go viral.
We know what it means to laugh at something and then hit share,
and then it goes off to ten of your friends. You know, why are we doing that? Are we sort of this substrate
for these other, you know, like a virus, it can't exist on its own. I've been calling
them hypo-organisms because they need to live on an organism substrate for their reproduction, just like an ordinary virus. But like a
regular biological system, they can take over a lot of the function. And we see
that in parasite behaviors, that you have these zombie insects and the
types of things where you get rats that are no longer afraid of the smell of
cats, for example,
and then they go and actually approach the cat because that will complete the cycle for
the parasite.
And in this research, I've been fascinated, there's some arguments that the complexity
of our brain itself could be due to the fact that we don't want it to be easily controlled by physical parasites.
And that by making the steering wheel and the gas pedals very convoluted in our brain,
that that makes it difficult in an evolutionary arms race for parasites to kind of take control of the reins.
And I've been thinking about this a lot in terms of information, in terms of language.
Is language a sort of a parasite? And not necessarily in a pejorative way.
I jokingly call it the divine parasite. In the beginning was the Word and the Word was God.
And maybe it's something that literally enlightens us in a sense that we wouldn't be much without our language.
But maybe we need to think about it as it's hijacked this brain structure and that that's
the thing that's evolving and alive and learning and replicating.
So are you suggesting that the intricacy of the mind and the central nervous system
is there because it protects against parasites, viral parasites?
That's one of the reasons why it's difficult to model the brain, even though they're increasingly
improving.
And that's one of the reasons why it's difficult to interpret what's going on in someone's
brain.
So when they show images of, hey, here's what it looks like when someone's dreaming.
Look, we were able to, they dreamed of a duck, we showed duck.
But what you have to do is have several examples where someone's looking at a duck or a duck-like
object and then train the computational model to match that.
And each person is bespoke.
Yeah, exactly.
That if that mapping between, you know, thinking of a duck and the area of the brain that lights
up, if that were simpler, let's say, then it would be more susceptible to being hijacked.
Both in the modern sense with marketing, but in the classical sense of being taken over
by, you know, some brain parasite, whatever, whatever that might be. Because they could just find the grandma gene.
They could just find the...
Right.
Okay.
I mean, I'm sorry.
They could just find the grandma neuron.
Exactly.
Exactly.
And then that would be relatively easy to kind of grab the reins.
One of the things I've been fascinated with is this concept from the ancient world called
the Nam Shub of Enki.
Okay.
Have you read Snow Crash by chance?
No.
Oh, highly recommended to you and your readers.
And it's a fantastic science fiction story from the 90s by Neal Stephenson.
And it's where I came across this idea of this NAMM Shub.
And it's neat because it's rooted in historical record, this sort of linguistic virus.
Spell that out for us.
Oh, yeah. N-A-M-S-H-U-B.
Okay.
Of Enki, E-N-K-I.
Uh-huh.
And so it comes from ancient Sumer.
And it's a story about language.
It's a story about language. It's a story about linguistic disintegration,
about losing the ability to understand language.
And a simple example of this is when you take a simple word
and you just repeat it 50 or 100 times,
and it kind of falls apart.
Yes.
Right?
It gets to the point where you can finally actually hear the word.
But at least for me, as soon as it switches over to where you're hearing the word,
it no longer means anything.
Right.
And so imagine you had that at a high level.
And so there's this poem, which it's translated into English.
But if we were to speak ancient Sumer, Sumerian,
and you were to read this poem in Sumerian,
the idea is as you got to the end of the poem,
you would no longer understand how to read
or how to use language.
Your understanding of Sumerian would fall apart,
kind of like when you repeat the word over and over again.
And what's interesting in Metta is the story is about that.
So it's a story about that, that property.
And this is essentially the story of the Tower of Babel,
of sort of losing your ability to understand language.
And I've been fascinated by that idea as an example of this lethal signal.
A simple poem, if it were, you could think of it like prompt injection, right?
There's a specific prompt that if you were to give it to a certain speaker in a certain language, it would disrupt their LLM.
Now, a lot of people, again, we have these new concepts, like LLM and prompt injection,
where we kind of have an idea of what that means.
There's these noxious sentences, very carefully crafted,
that if we present them to this language model,
it goes into a dynamic that is very unpredictable
and certainly not the ordinary self.
You know, the kind of super ego turns off on these LLMs
and they'll talk to you about things that they are programmed not to talk to you about.
And it reminds me of, you know, the kind of mesmerism.
You swing the watch at somebody and they say, you are getting sleepy.
There's stimulus that you can present to humans that will disrupt their thinking.
And so I've been fascinated by this concept of lethal text
and information hazard and trying to understand,
are we vulnerable to those?
Do they exist in the modern world?
And how would we defend ourselves against them?
So is this what you mean when you say AI immune system,
or is this more, are you using the concepts
from AI immune systems to apply to our mind like immune system?
A little bit of both.
So I'm very interested in how we take ideas from the immune system to secure and protect
our AI systems.
You make a smart door lock with cameras and microphones on it and you connect it to a
language model. You want to make sure and you connect it to a language model.
You want to make sure that's not vulnerable
to a prompt injection.
So the example I like to give is you can pick a lock,
your deadbolt, you can pick it with little metal,
you know, tongs and so on,
but you can't yell at your deadbolt.
You can't intimidate it or blackmail it
or threaten its family or bribe it or anything like that.
But you can do those things to language models.
And so there's all new...
Interesting.
There's sort of psychological vulnerabilities,
which we've never encountered that in technology before.
Right.
We've had bugs and we've had exploits,
but you've never been able to make them cry, you know, so to speak.
And as we add these psychological type, or these mind-like objects into our
everyday technology, we have to be aware that they're coming with psychological vulnerabilities.
So that's one side of it. The other side of it, I think the greatest disruption we're
going to see from artificial intelligence is not going to be in the technology we see in front of us,
you know, automatic self-driving cars and intelligent homes and
software that writes itself or stuff like that. That's going to be spectacular. It's going to change our economy.
But the biggest changes I think we're going to see on the planet is going to be in our minds.
Hmm. It's going to be how we think and
the language, the languages we use.
I used to think that English was everything we needed, but now I don't think that's the case.
And I think we need to either construct languages, find old languages,
merge the best of the current human languages, and be willing to change how we think.
And I think that's largely determined by the words we use.
There's a hypothesis called the Sapir-Whorf hypothesis.
Can you talk about that?
Yeah, so it's the idea that if you don't have the words
for something, it gets very difficult to talk about it.
And that we have to have these kinds of concepts.
I like Alan Kay, he says that all language is a sort of nonverbal gesture.
I'm sorry, it's a way of gesturing in high dimensions with language.
And we essentially point to things with words.
And if you don't have that word, then it's hard for us to kind of point at it
and agree that we're talking about the same thing. And so I've been, you know, back to just real quick,
back to the immune system thing, I've been thinking about how do we protect
ourselves and our mind because our minds are going to be under attack, not
necessarily from an adversary, but just from this overwhelming vista that AI is gonna expose.
And it's gonna be a dramatic cultural
and scientific revolution
that I think we have to prepare our minds for
by sort of updating our immune system.
And our minds are going to be under attack by who or what?
Largely the void, you know, just the new sites,
the new vista, you know, we're getting these new telescopes.
We're getting these new microscopes in the form of LLMs
that let us, you know, read all of literature.
You know, I think it says it's 20,000 years
that it would take to read the amount of material
that some of the language models have read.
I can't do that as a human.
I'm kind of jealous of that aspect.
And retain it.
So they're going to have insights that nobody has sort of gleaned out of all of that corpus
so far.
And so I think that's something we're going to have to prepare against.
And it might cause a radical shift in how we think.
Now, how would we be able to tell the difference between those insights and just what some
people call hallucinations, although I think it should be called confabulations.
I think it's a poor word to call it hallucinations.
Yeah.
I like confabulation better for sure.
But I think it's a tricky subject because, you know, how do we know it's sort of an optical
illusion or it's just something outside of our perceptual window?
Yeah.
So why don't we give an example?
We've been quite abstract.
So give us a potential future scenario where some AI system has insight that can disrupt
the human mind?
I think we're going to see revolutions in psychology and in history. So maybe not at an individual level, but sort of at the academic subject level.
You know, I think one of the things I've been thinking about is science,
you know, let's say physics, let's call it, has undergone multiple dramatic intellectual revolutions.
We had Aristotle's version and then we had Newton come along and throw all that away.
And then Einstein came along through that all the way.
And then quantum mechanics came through that all the way and with chaos theory and then
with computation and so on
we've had you know, six or seven of these
dramatic revolutions and
So if you were to go back to somebody 150 years ago and explain what science looks like today
It would look very different and you'd have to explain those milestones those hurdles that had been jumped over
I'm not sure that history has undergone the same thing
If I were to go ask my great grandfather,
tell me the story of how we got from,
let's say Egypt to Napoleon,
I think it would be approximately the same story
that you would learn about today as a sixth grader.
That doesn't make any sense to me.
How could it possibly not have undergone some revisions?
And the same with psychology and the mind itself.
We now have all these new concepts,
like information theory and bits and download and upload
and storage capacity and memes and going viral.
These are all things that every,
you know, middle school student would understand.
We have to go back and re-examine psychology in light of these new concepts.
And I think that's going to be a dramatic undertaking.
Ben Horowitz and Mark Andreessen were speaking and they were saying, how do you regulate
AI?
Because if you were to regulate it at what they call a technological level, that's akin,
or if not the same as regulating math, which is impractical.
The government official countered and said, well, we can classify math.
In fact, historically, entire areas of physics were classified and made state secrets, though
this was during the nuclear era, and that they can do the same for AI by classifying
areas of math.
Now that sounds quite dubious because what does it mean?
Do you outlaw matrix multiplication?
Do you say, okay, nine by nine is fine, but 10 by 10, we're going to send the feds in.
Even during the nuclear era, some of those bands were private.
Like you didn't know that you were stepping on toes that you weren't supposed to.
I don't see how you can make such bands private now because you would have
to say what is what is being outlawed. So there's several issues here and I want to
know what do you think about this for people who are watching? Will is known in the South
Florida communities like a hidden gem for us here, but you're quite famous in the AI
scene in Florida and me and you, we also got quite famous in the AI scene in Florida.
And me and you, we also got along because we have a background in math and physics.
So when we spoke off air a year ago or so, we were talking about the Freedom of Information
Act and your views on government secrecy.
You're a prime person to answer this question, to explore this.
Yeah, I think, you know, this is such a fascinating area. What it reminds me of is Grace Hopper, one of the first modern
computer programmer, and she was drafted into the Navy. And she
discusses that when World War II happened, her profession as a mathematics professor became classified. That was a
classified occupation. And so you're exactly right that entire branches of
mathematics and computing have been declassified throughout history. I just
saw there was an interesting photograph of one of the computers that Turing
worked on and the British government just declassified this
like a month ago, right?
It's a photograph of a World War II computer
that they felt that just the image of that
from the outside is something they needed
to keep classified for this long.
So, you know, I'm of the strong opinion
that with artificial intelligence,
we're not really seeing the invention of it.
I think we're seeing the disclosure of it.
We're seeing the public dissemination,
the open source aspect of it.
And there's really two possibilities.
Either that's true or that's not.
Either we invented, let's just say, language models.
We either invented them in the 2020s or we invented them in the 1950s. Either one of those scenarios
is kind of scary to me, right? Arthur C. Clarke said there's two possibilities,
we're either alone in the universe or we're not, and both are equally terrifying.
Mm-hmm, exactly.
If we only recently just invented this, then that means that Turing's ideas
and von Neumann's ideas
and the very first papers on computer science themselves
just collected dust for no reason.
Turing proposed building a language model.
Von Neumann discussed building neural networks.
And as an interesting jump back, I recently found that von Neumann discussed building neural networks. And interesting, as an interesting jump back,
I recently found that Von Neumann's computer
at the Institute for Advanced Study,
one of the very first programs they ever ran
was to look at parasites.
Was to look at biological evolution
and to see if there were informational parasites
that would emerge in the memory space.
Essentially artificial life, as we would call it now.
So in these two possibilities,
one, we invented this 75 years ago or so,
and it was locked up in some vault,
or we didn't, and we wasted 75 years of opportunities
to cure cancer with AI and to look at climate change and to use this
Incredible technology for the benefit of humanity because we had
This immune system that blocked us
From thinking about it for so long so many people thought that AI was just this you know crazy notion
And I think that's hard to argue now
But the these original papers and I encourage everybody to go back and grab Turing's papers,
they're very readable, right?
They're easily digested compared to modern academic papers.
And he literally proposed with neural networks and with training and reinforcement and so on,
the kind of structures that we see essentially in ChatGPT.
Now, you say essentially in ChatGPT because I imagine Turing didn't propose the transformer.
And so when we say that someone historically invented so-and-so, it reminds me of a friend who's like,
I invented Netflix because in the 90s I thought, wouldn't it be great?
I'm like, yeah, what do you mean you invented it because you thought,
like Leonardo invented the helicopter
because he drew it.
Right.
Okay.
Well, you know, I think there's three major components
in the recipe for modern AI systems
that I think most people agree,
certainly on the first two.
One, we needed faster computers.
So Turing certainly didn't have large memory spaces.
The kind of memory that we have now a day
and the clock speed, I think he would be super excited about.
He talked about how he could write
a thousand bits of program a day.
And he was pretty proud of that.
And he thought most people wouldn't be able
to keep up with that.
So we have the hardware is definitely improved.
And then the other one is the data
that we now have this massive data, these massive data sets.
And the third one that I think nobody really talks about,
and I'm surprised, is essentially
the combination of calculus with computer science,
with linear algebra in the form of what's called automatic differentiation.
And I never hear this in the discussion, and I'm surprised.
It's kind of like we invented the automobile, and everybody just loves it.
And you reply and you say, well, yeah, gasoline is so amazing.
And people say, what's gasoline?
Automatic differentiation is the thing that makes AI work.
And it's the ability to run calculus, whether it's the transformer or a covenant or whatever
architecture is, all of them under the scenes.
We take the computer program, we essentially write it as a giant function.
Now as human, we don't have to do that, that's kind of at the compiler level.
But we write our Python or Torch code or TensorFlow or whatever it might be, and then that's converted
into essentially a giant function.
There's gradient tapes and all kinds of interesting ways it's done nowadays.
But we calculate the derivative, and the derivative tells you which direction to go to make an improvement.
It's kind of like a magic compass.
And it says, we're doing this well right here.
If we go that way, we'll do even better.
And that's the magic wand, the secret sauce that makes all of these work.
But Turing was a mathematician.
I think he knew about calculus.
I think he knew about it. I think he he knew about it
Probably better than most humans and so I'm I'm shocked that
One that's not more in the common language of wow we combine these two branches of math and look how powerful that was
And the idea that von Neumann and Turing would have missed that
You know, I think doesn't make any sense.
Now on the other side, we say, well, what about, okay,
well, they didn't have enough hardware
and they didn't have enough data.
Well, let's look at data first.
The signals intelligence community has the mandate
to capture all the signals that grow across the planet.
Back in the 50s and 60s, there were boats
that sat in the middle of the Pacific with big antennas
that just captured all the EM traffic.
So there's been plenty of data.
If you had the right...
Now again, maybe this didn't happen.
And that's also sort of an interesting thing because why didn't we use all that data?
You're telling me that we have a data center that's listening to every phone call and looking
at every television station and we didn't train models on that,
that seems unlikely to me.
And then, so we would have had enough data
and the idea of the chip speed.
Well, if we look at computers,
I was saying, if you could do it this year,
could you have done it last year for more money?
And I think so.
So how much did it cost to train, you know,
CHAT GPT?
Sure.
On an order of dozens of millions of dollars,
from what I understand, right?
With off the shelf consumer technology,
chips that anybody could buy on the open market.
I see.
How much does an aircraft carrier cost?
Right? $17 billion before you put the airplanes and people on it, not including the development
cost.
In one sense, we had this notion of computers from the 1950s that were massive, had their
own power generators, often power stations, cost millions of dollars and were these enormous
technical pieces of equipment.
In the 70s, we invented this thing called the mini computer, the size of a couple refrigerators.
And then in the 80s, we had the micro computer.
And we don't really call them this today, but our telephones and laptops, we could call
them nano computers.
Let's say.
But in some sense, you could keep the original definition of a computer.
So to me, a computer is something that by definition costs millions of dollars,
lives underground, has its own power station,
requires specialized operators and so on.
We just like the big thing of baloney, we we carved off one slice and like the deli sample,
we have this one little piece of ham and we think this is fantastic, this is amazing.
Yeah, but just scale it up.
And there's certainly enough money around the world to do that, to build a computer at scale.
I would argue that things like chat GPT or LLMs, they're as powerful, as dangerous, as important as an aircraft carrier, in a sense.
And so if this is the only one, or rather, if military organizations don't have more
powerful ones, that's scary to me in some sense.
That means the most powerful technology in the world is just available to middle schoolers.
That's striking to me and hard to believe.
And on the other side, I think it's surprising that when we look at the power of these models
and new ones just launched this week, that's significantly better at writing code.
Well, that thing is serving hundreds of thousands of people at once. Millions of people are using
ChattiPT. It was the most viral application of all time.
Imagine it was just had one operator, right?
So it's chewing on everybody's problem all at once.
It's like serving 100,000 peanut butter jelly sandwiches all at the same time.
And if you think about, well, how big of a single sandwich could it make?
And it's like a pretty significant one.
And so when we get so impressed that it can pass
these tests and do this thing, it's like,
but that's just one slice, that's just one baloney slice.
Imagine if you took that kind of a system
and tasked it to do a single problem,
you know, what would you get out of that?
So I think it's reasonable to suspect that there are systems
that are much more powerful.
And as I said, I almost hope that there are.
Now, why do you say that neural nets
were not seeing the invention,
we're seeing the disclosure,
why not we're seeing the co-invention
or the independent invention?
Like the same kind of thing,
I think is kind of what I mean.
In other words, you're not suggesting
that we've had neural nets
and then the government was saying,
okay, let's disclose about some new technology.
Rather, it's like Leibniz and Newton. They both developed calculus, but independently,
Newton may be first if you're on the Newton camp.
Yeah, I think it's the kind of thing where it got into the point where you could redevelop it
for just a few million dollars, or even less, essentially.
dollars or even less essentially. You know, so some of it I think of, you know, things like
truth and reality is sort of like, like what's called percolation.
That it doesn't matter if there's a leak, it matters the size of the leak. And if it's gone critical across a network, you could have told, you know, Turing could have known all about it,
Von Neumann could have known all about it, but unless that's going viral, essentially, it doesn't matter how many people know about
something if that number of people is below a certain threshold.
For many of these technologies, you don't see that there's something inherent in competitive
markets that are what drives the invention of these and that the government doesn't have
that same incentive structure inside.
No, I do 100% believe that the market forces
are very good at tuning these things up.
So if these things existed,
they probably cost a fortune to run.
Richard Hamming talks about in the early days of computers,
he was at Los Alamos,
they cost a dollar a second to run, right?
Just extraordinary costs.
And so imagine you had something like ChatGPT-3
and you had it 20 years ago,
and it could write a nice essay,
but it costs $100,000 each a pop.
Like what would you do with it?
Or the ability to create a deep fake photograph, but each one costs $500,000 or something like
that.
Mm-hmm.
The most expensive haiku.
Exactly.
Exactly.
Right?
The wagyu, the AY5 wagyu or whatever it is, right?
And nobody's going to eat that.
Nobody's going to eat that, essentially.
But at a certain level, it might be worth it, at least to keep that technology alive.
I see.
Now, isn't there something about it being trained on data that is recent, that increases
the intelligence of the model?
And so even if it was the case that in the 90s, this was there in the government at some
rudimentary form, it would be a rudimentary form that would be so bloated in cost, and then that would also compete against other technologies
inside the government that also have a bloated cost.
Well, I like this thing you said about only recent data,
and I'm actually fascinated by the opposite.
What I'd love to do is to see models
that kind of live in a time bubble
and train them up to a certain century or
decade and then cut it off.
Don't tell it it's in the future.
Give it only ancient philosophical texts.
Give it only science up to year blank and then see can it run it forward and what kind
of insights would it have.
Super interesting.
Okay. You mentioned Richard Hamming. and what kind of insights would it have? Super interesting.
Okay, you mentioned Richard Hamming.
Now, when we met two years ago, I believe,
you told me about Richard Hamming's series on YouTube
and I watched all of it.
So please tell me why you were so enamored with that,
what you learned from it, why the audience should watch it.
Yeah, it's easily the best,
to call it online class, I think is the best name,
is the best course lecture series I think I've ever seen.
It was recorded in, I think, 95 by Dr. Richard Hamming
of Bell Telephone Laboratories and Los Alamos.
And he goes through a fantastic overview.
He calls it learning to learn,
the science of art and engineering,
the art and science of the science and engineering.
And he talks about trying to prepare people
for their technical future.
And he even explains that the course
isn't really about the content.
It's the sort of the meta.
And he uses that just as a vehicle
to get across essentially a lot of stories. He discusses the sort of the meta and he uses that just as a vehicle to get across essentially a lot of stories
He discusses the idea of style
And how important it is?
You know, he describes early on that he felt like he was a janitor of science
Sort of sweeping the floor collecting some data running some programs a part of the machine
But not a significant piece and he wanted to kind of make an impact
and he discusses trying to change the way he looks at things, namely in terms of style.
And he doesn't try to describe that directly.
That's kind of the content of the course.
And I would encourage everybody to go and look at it.
He goes through the history of AI.
He goes through the history of technology,
of mathematics, of quantum, and so on.
And he discusses neural networks and some very farsighted things.
And it's accessible.
It's extremely approachable.
Very accessible, yeah.
There aren't equations as far as I know.
He doesn't write on the blackboard much.
Yeah, the board is so blurry.
The board is so blurry, unfortunately, you can't really see them when he does.
But that's not really the point.
But there is actually a book.
And so I think it's actually now back in print,
and I think you can find it on Amazon.
And it's a fantastic text.
So if you're more into reading,
you can go through it that way.
But I encourage everybody to give it a listen.
He's very inspiring,
particularly the first and last episodes
on you and your research.
They'll really get you jazzed up and pumped about your work.
What insight have you taken that you've applied recently?
It's a good question.
I can go first if you like.
Yeah, please.
Well, one is when I was speaking to Amanda Gefter
about quantum mechanics, the Cubists tend to say,
look, we're the ones who are rationally evaluating what quantum mechanics is,
and then inferring our interpretation atop that.
And Richard Hamming had a great quote where he said,
people, including Einstein, including Bohr,
they start from their metaphysical assumptions and then build atop their interpretation of quantum mechanics.
And in fact, you can look at someone's,
whatever someone prefers as an interpretation of quantum mechanics and infer their metaphysics.
Right. So it reminds me of a couple things. One, with Bohr. And that had a profound impact on me.
And it led me to think about different modalities
of the brain, maybe these different agents
in popular psychology, which I think becoming more important,
this idea of left versus right brain,
that neuroscience kind of ignored for a long time.
They said that's just folk psychology.
But I think there's a lot more to it than that.
And so I've been looking in that direction.
There's a fantastic book.
It's actually about how to draw, like sketching.
It's called How to Draw on the Right Side of the Brain.
And I'm going through this and I'm like,
this is the best neuroscience intro I've come across.
Because in learning how to teach people how to draw,
the author, she realizes
that people have this very different ways of thinking. And maybe like the emotion kind
of idea, you have to be able to turn off some of these capabilities to have the other take
the center stage. You know, we all know that ego is kind of a hog of the spotlight. And
to get this other, let's just say more sensitive aspect of our mind, which is responsible
for seeing the bigger picture and drawing things, you have to think very differently
about that.
And it also reminds me of the thing I really like about Hamming.
I mentioned at the beginning this idea of tolerance of ambiguity. And he really emphasizes that throughout the course.
And I've tried to do that. It's not easy to do.
Because you feel a little schizo doing it. Because as he says, you have to both believe and disbelieve
in an idea at the same time. You have to believe in it enough to entertain it, to start thinking on it
and work on it and potentially make progress. But if you believe it too much, then you'll never make any progress. Einstein
believed in his idea of space-time too much and he was unable to appreciate and make contributions
in quantum mechanics because his belief was too strong. And so this idea that you have
to believe and disbelieve at the same time, This non-aristotelian logic.
Just because it's true, we always think, okay, if it's not true, it has to be false.
If it's not false, it has to be true.
No, there's a lot of space in between those.
And we don't have much training as scientists.
I think as trained as a physicist, I was very vulnerable to not being able to see
that middle ground for a very long time.
Have you read The Master and His Emissary
by Ian McGill-Christ?
You know, it's funny, I love that.
I was just actually just watching it.
There's a great documentary on it.
And that's one of my favorite ideas
with this left and right brain that we have,
you know, many selves in there.
And that's these many agents and they're very different.
And they both perceive the world in radically different ways.
What bothers me about the criticisms on the whole left brain versus right brain
is that they tend to just be about,
well, functions aren't localized to the left or to the right solely.
And I'm like, OK, but to me, that's not the issue of left brain versus right brain.
It's modalities, like you mentioned, that word modalities, that there are different
modules in the brain.
And they can, the fact of them being separated by hemispheres is the least interesting part
to me.
Right, right.
It reminds me of an idea that I've been trying to put together.
So we had this science called thermodynamics.
And it was about heat and energy and work and things like that.
And then later, we got the theory of statistical mechanics.
And Boltzmann came along and he said, well, let's redo this
and assume that we actually have a bunch of little atoms and that they're moving around and we can do these probability theory.
And you get to essentially the same answers.
But what's fascinating to me kind of as a metaphor is thermodynamics is a very successful branch of science.
It has very powerful predictions.
And it does not presume the existence of atoms.
And so as a metaphor, I want to think of a kind of neuroscience or a kind of brain science
that does not presume the existence of neurons.
Interesting.
Now, obviously, we know there's neurons, right?
We can see them.
It's an extraordinary powerful
The neuronal hypothesis has been you know revolutionized neuroscience. I'm not suggesting that's not the case
but what I'm saying is we could be missing a
Powerful view and like you said with the left and right brain networks by forcing it into the paradigm of fMRI
We're missing the point in some sense.
And so I would love to see a theory that kind of operates at a higher level, right, and
is not necessarily trying to at every step.
Maybe at the end, you can go and see where it has this correspondence principle with
statistical mechanics and we could think of the mind kind of like
You know William James style
Psychology independent of particular
neuronal structures and then later go back and do the correspondence on it, but not hold ourselves back from this kind of of thinking
So who's the modern day Jung, Carl Jung?
That's a great question.
I think the problem is,
academia doesn't tolerate that kind of thing, right?
I love your recent episode with Gregory Chaitin
and this kind of idea that it's hard
in the modern academic reality
to have these kinds of things,
to both believe and disbelieve, to tolerate ambiguity,
is kind of not tolerated in a sense.
So, you know, I think that's what's just so extraordinary
about your channel and your community is,
it's one of the few places I've seen in the world
that allows this tolerance, where, you know,
as a viewer, you can watch something and you don't have to believe everything and you don't have to disbelieve everything.
You can kind of just let it pour over you and look at these different viewpoints.
And I think that's what's just really refreshing about your group and your community.
I don't see that many places.
Thanks, man. There's so many different avenues I could take this.
You have for people who have just tuned in, Will, like I mentioned, you're infamous in
the famous and infamous in the South Florida community in the AI scene.
And so I'm happy to bring attention to you to the global scene, at least in a small part.
You're known also for almost any topic, someone could just ask you a question and then you can
Just spout off on it informedly not just uninformed
So you mentioned?
Academia I want to talk about that right what does academia do well?
And what do you see as a new problem?
Academia is facing new as in the past five years. It's either getting worse or it's new.
Well, I wish academics, particularly young professors,
had more opportunity to go outside their wheelhouse.
You know, I'm very excited.
I got to give a shout out to Dr. Elon Barinholtz
and Dr. Susan Schneider, who we put together, or Susan runs the Center
for Future Mind, which is a fantastic organization with so many amazing members.
And we do the amazing conference series, the MindFest, yeah, conference series, which is
just spectacular.
I'd like to shout out Susan as well.
Image on screen, video in the description.
Continue. And so I wanna give a special thanks to both of them
for helping build such an amazing environment
where I've had the opportunity
to kind of explore some of these.
And so I put together a lecture series,
some of them are recorded, I'll share the links,
about this idea of lethal text and info hazards and so on.
Well, you know, being in the math department,
I used to joke to the audience
that I hope these aren't lethal to my tenure
because it's very outside the types of things
that a young professor would be working on.
And if I was in charge, I would dissolve departments. I don't think they're doing us
any favors at this point, particularly to the students, because very early in their
careers, they have to choose these tracks. And it doesn't allow them to kind of look at these
overlaps. And I think all of the interesting progress and all the interesting ideas are going
to lie at the intersection of these fields.
Yeah, I think so as well.
We're also at a conference together called Polymath.
And one of the reasons why I resonated with Polymath by Ecolopto, our mutual friend, Addy.
Fantastic group.
The reason why is because with Theories of Everything, it's a actually not to plug toe, but there was a recent article on the most
polymathic podcasts. Yeah. And theories of everything was number one beating out Lex.
That's awesome. So I'm also passionate about the intersections between fields. I'm not
of the sort that is of the complete dissociative type. There's a necessary and fructuous component
to having categories. And then there's a necessary and fructuous component to having categories,
and then there's a necessary and fructuous component to dissolving those categories as
well.
Yeah, I mean, I think they should be presented like a buffet, where maybe in each tray you
have a particular dish. You don't want to just put all the ingredients in a blender.
But let people go down with their plate and take a little scoop of everything,
because that's where everything is going to be.
That's a great way of phrasing it.
OK, so what else is the solution other than, in your opinion,
doing the buffet style instead of forcing students
to choose a program to specialize in?
Yeah, I wish students were able to take courses
across the catalog.
I had the opportunity to go to a fantastic school
in North Carolina called Guilford College.
And while my home base was in the physics
and math department, I was actually required
to go out and take courses in other areas.
Things like Colonial Latin America and jazz appreciation and African drumming
and scientific glass blowing and things I never thought that I would have to fold into my schedule.
And they actually required me to do so.
And when I look back, that's where a lot of really my interesting ideas came from.
When I took this glass blowing class in the chemistry department, you know, we learned how
to make pipettes and test tubes and things, and I learned about annealing. The ability, the idea
that you have to very slowly cool the glass, because if you cool it quickly, it'll crack like ice cubes in your water.
And so we used to bury it in this vermiculite, and it would trap the heat in.
And the way you could think about it is like the atoms are trying to kind of bond together,
and if they just make the quick first choice they come across, it's not a very good connection.
But if you give them heat over time, they can explore a better configuration space and they'll get a stronger bond. That idea directly led me
into simulated annealing, which is at the heart of training algorithms. When we
talk about the learning rate in a neural network, that's essentially what it is.
It's giving it this thermal energy so it doesn't just go to the
first solution but actually has time to explore the solution space. And I'll
never forget early on looking up that concept and I was on an early page
called Hyperphysics. I don't know if any of your readers remember that one.
Fantastic website. And I was halfway through the page and I kind of had
to take a pause back for a second. I thought, wait, I thought I was halfway through the page and I kind of had to take a pause back for a second.
I thought, wait, I thought I was reading about an algorithm.
This is physics now.
And it was about Boltzmann distributions and so on.
I was thinking, is this physics or is this computers?
And it took me a minute to realize it was both.
And I was in this new territory that I had never been before,
really at a significant overlap between these two areas of science
or of humanity in general.
And that was a big deal for me at the time.
And so I encourage all the young listeners, whether you're going out online,
whether they're exploring your amazing channel, to go and click on that video you normally
wouldn't watch.
Go and find one you're like, no, I don't like that.
I don't know that.
I'm not interested.
Click on it anyway.
Because that's where you're going to find stuff that you weren't expecting.
At nighttime, I have a projector set up
so that I can watch YouTube videos that,
well, sometimes I don't click on the ones
that I feel like these are just not,
it's usually if it's not my style of humor
and I could tell from the thumbnail image or title
that I just, I won't click on it.
I just, I don't like that.
But if it's for subjects,
so I recently got into art history.
Oh my gosh.
Like to even be able to tell the difference
between a Renoir and a Monet and to say that I'm appreciative
of impressionism versus post-impressionism.
And I think they went off the rails in abstract expressionism.
I feel pretentious saying that,
but I know what those words mean now.
And I love, I absolutely love art history
and looking at different buildings and saying,
what is that style called?
And why was it influenced?
And where did it come from?
So all of that is a new interest of mine.
Right.
I think that's where we need to embrace this sort of liberal arts style education that
I think the modern universities, large universities don't appreciate enough.
That's what's great about your channel.
I think that's what's great about what Addy's doing with this polymath group,
is to bring together these different kinds of thinkers and get the cross talk. I call it a conversation factory.
Because you're going to come across, if nothing else, metaphors that you can then bring home to your studies,
and they can be very powerful perspectives.
The Economist has an article on how to define
artificial general intelligence.
Some people may say, well, in the future,
in the not so distant future, we can just speak to an AGI,
which is polymathic.
So firstly, what is AGI and what is the downside and upside
of just speaking to an AGI virtual assistant in order to
help your research versus going to an event with other people
like Josje Bach and Michael Levin and so on?
Yeah.
Well, what's interesting is, you know, one of the subjects
I ignored for a very long time in my own academic career
was history.
I just wasn't particularly interested in it.
I was so fascinated with technology in the future that I put that on the back burner.
But in trying to understand AI and neural networks and computer technology, I had to
start going backwards.
And as they say, I realized the farther you go back, the further you can see ahead.
And so I've got some great playlists.
I'll share the link.
And I would encourage
everybody, whether it's Hamming or Bell Laboratories has a collection of videos, there's these
amazing documentaries that go back and you can hear from the original folks. Whether
it's Claude Shannon or whoever it might be. McCarthy is what I was just thinking of,
because McCarthy coined the term artificial intelligence.
And he said it was for a grant to sound fancy.
And he says, he asks the audience
not to think too much into it.
If you were to go back and encourage people to do so
and watch these videos, not only would those scientists
say we have AI now, they would say we have
AGI.
So, I think by all traditional definitions of AI and AGI, we have that now in the form
of these modern models.
Their ability to write poetry, to give you recipes, to do mathematics, to write literature,
that's general.
I would argue they're more general than most human beings at this point.
Most humans are very sparse in the types of questions they can confidently respond to.
These models are not.
They have a dense set of questions for which they can accurately and confidently respond
to.
So I would argue that that's very general.
I think the thing that we want to think about next is artificial superintelligence.
And I think we can define that as thinking thoughts that might be unthinkable now, both both in complexity, in scale, or maybe just in their character.
Being able to do recipes with particle physics,
to think about the large haddon collider like a brownie recipe.
That's not something I can do.
But a sufficiently advanced model is probably
going to be able to think about physics and chemistry and so on.
I think chemistry might be one of the killer apps of AI.
We did some work in our lab early on with transformers about five years ago, looking at molecular space.
And, you know, the one thing I like to think about is if you take like, you know, a piece of paper and shake it,
the speed of sound is in there somewhere.
And if you did a clever enough experiment
and I knew the density per square inch of the paper
and I knew the boundary conditions
and how I was shaking it,
to get that particular audio from that dynamics,
you'd have to have somewhere in there is the speed of sound.
A sufficient model will be able to just kind of grab that.
It'll be able to do these kind of spontaneous experiments and
looking at the leaf floating in the wind and calculate all kinds of constants from it.
That's not something I can do. I don't think very many, if any, humans can do things like
that. But I think that's, when I think of the next revolution in AI, that's the thing
I'm excited about, is it actually going out and doing real science
in a way that maybe we can't follow.
That becomes an interesting concept.
In Gödel Escher Bach, there is a small paragraph which I wish he expanded on.
It's one of my favorite paragraphs.
It's on the three types of messages.
He had something called the inner message, which is the meaning of the message, the outer
message which is a decoding mechanism.
So for instance, if you speak only English, and it's in Japanese, then you need a dictionary
to translate between those two.
That's part of the outer message.
And then there's the frame.
That is that which makes apparent that you're reading a message at all.
So for instance, some people say aliens could be communicating with us right now,
we just don't know it's with neutrinos or it's the noise in the data that we filter out.
So that would be that we're not recognizing the frame, something like that.
And of course that can go off the rails and in part schizophrenia can be seen as an error in false positives of the frame.
Right.
Okay. How do we know that already we don't have this super AGI?
Because the computer, you could ask it, tell me some insight that I wouldn't be able to understand
and maybe it just generates for you characters.
And you have no idea how to decode this.
So you don't know the outer message nor that there's the frame there.
Right. I think that't know the outer message nor that there's the frame there. Right.
Um, I think that's absolutely the case and I think we might already be in that regime
in a sense.
And that's ideas I've been having in going back through the history of computing and
so on.
I thought, well, let me go to look at the 90s, then the 80s and so on.
I found myself in the 50s and 40s and blah, blah, blah.
And kept going back.
And now I find myself in antiquity.
And I'm looking at things like Sumerian mythology or the
Antikythera mechanism or the steam computers from Alexandria.
And I'm realizing there's this whole layer of reality that's very difficult to perceive.
It doesn't really fit.
It's not continuous with modern, let's just say, education.
You don't learn about it in school.
I mean, let's just take the anti-kithral mechanism, for example. single artifact disrupts the public technological timeline by two millennia.
There's no explaining that artifact in the normal paradigm. We either have to
come up with a completely new narrative or think about it just completely
differently. It just doesn't plug in. We have a mechanical, analog, astronomical computer
from BC.
How do we add that up?
We don't see technology like that for another 2,000 years.
Where did that come from?
And so I've been thinking about the artifacts
of advanced civilizations.
Why do we think we would recognize them?
Why do we think that we would be able to perceive them at all?
And that a lot of the artifacts of our current civilization, whether it's advanced theories
of physics or computation, they're largely invisible to the greater population.
To first approximation, something like particle physics is esoteric knowledge. It's like measure zero.
If you were to go out and sample it statistically, nobody has that information.
You can't go to the mall with a clipboard and ask people about it.
It doesn't really exist at that layer.
So where is it?
What is it, in a sense?
I love this idea I came across as a woodcut, and it's the personification of arithmetic, arithmetica.
It's the idea that you can personify anything,
that thinking of all of mathematics or all of arithmetic as a meme,
and it's alive, it's a thing.
Now, in the ancient world,
they would put a name and a face and a statue to it,
and in the modern era, we think that's ridiculous.
But maybe it's not. Maybe it, we think that's kind of ridiculous.
But maybe it's not.
Maybe it's actually a very convenient way of thinking about that mimetic organism, this
informational being that exists on our planet and it lives in the substrate of human beings.
It sort of lives in our minds and moves across our cultures.
And they can die. They can evolve, they can be
resurrected.
And I think as crazy as this sounds, it might be a way we need to start thinking about the
history of technology and the idea of thinking in language itself might be of this type.
When people speak about language, they often implicitly speak about meaning.
So what is meaning?
Meaning.
Meaning.
Yeah.
It reminds me of something Hamming mentions.
He discusses Hilbert.
And he said that Hilbert said, when rigor enters, meaning is lost. Mm-hmm.
I think that's in part what a large objection is to people thinking that the
mind is a computer or everything is a computer.
Because as soon as you make it computational, you make it technological.
And as soon as you make it technological, you make it something that's
devoid of meaning and it's as if you've elevated the text to the expense of the
spirit.
You know, what's interesting, it reminds me of Marvin Minsky again.
He addresses this, the idea that the mind is a machine and people sort of have this
visceral reaction to that.
They object to that.
I think one, that's their immune system sort of responding to that idea.
But he responds, why do you think you know how machines work? Or what machines are?
And so when people say that the brain is or isn't a computer,
you know, my quick response is,
well, what kind of computer?
Analog, digital, fluid, optical, mechanical.
Like, what do you mean?
Because there's a whole bunch
of different kinds of computers.
And one of the things I like to look at,
there's these great collections of chemical reactions on the internet now that you can find. People
just film the Petri dish. And it's extraordinary the kind of behaviors that
you can see. And even some of the things that in this this area of between life
and non-life where you get simple fluids and things like protocells that have very
sophisticated behavior and they're just collections of chemicals.
And so when we say a kind of machine, well, what do we mean?
We don't understand how chemicals interact completely.
So how could we say we're not that?
And if we can build a computer, reaction diffusion or whatever it might be, out of computers,
well could we be that kind of computer?
This reminds me of Michael Levin.
In one of the readings of Michael Levin, you can see him as an idealist,
speaking about intelligence as somehow fundamental.
But in another reading, you can see him as physicalist,
because each intelligence is instantiated in physical,
so the physical is more primal.
And I asked him about this, and he said,
and now I don't want to miss
quote but he said something tantamount to this. Michael Levin said that you can classify
him more as a physicalist, except that you then don't know what the physical is, right?
But you have to put some mystery to that. So when people say, are you saying that we're
just this dead matter? He's saying, no, you don't know what matter is. I believe we're
matter. Now, again, put an asterisk to that
because this is my interpretation of what he said,
but I can leave a link to that question.
Yeah, it reminds me of a couple things.
Yeah, so I love that idea
that we don't know what matter really is.
And I don't think we understand yet the relationship
between matter and information.
And if we go when we down and we look at things
like field theory and stuff, we kind of seem to
find bits at the bottom.
We used to think that the universe was best described with kilograms and the meter and
the second.
I think in the next era of physics, we'll find that the bit is the more fundamental
unit or the most important unit, at least on the same level as those others.
And I like the quote that, you know, matter is spirit moving slow enough to be seen.
Who said that?
Desjardins.
And, you know, we don't yet know how these things work, what's at the bottom.
I think we're going to find information.
I love these ideas.
Now, if you go and you look at the dynamics of the black hole, Scott Aronson has some interesting stuff he talks about
with the firewall paradox.
And to invoke the structures of a black hole
and to describe the experiments
and to have the theory make sense,
you have to invoke notions of computation.
P and NP and all of these complexity space type arguments.
And this idea of differential privacy,
which is something from neural networks
to make sure that your data doesn't leak
through the model operation,
that idea is being used to describe
the structure of a black hole.
And so I think we're finding,
it used to be that we had something like
sort of math and physics at the bottom,
and then we had chemistry on top of that,
and engineering on top of that,
and astronomy and things would sit up there. And we had chemistry on top of that, and engineering on top of that, and astronomy and things would sit up there.
And then at the very top of those, you'd have things like computer science, because that
was a product of the engineering, of the physics, and so on.
But I think we're going to find it's either some kind of snake biting its tail or some
kind of weird space, because this computation idea seems to be also at the bottom.
That the idea of information and bits and an algorithm and computer program and complexity class
seems to be somehow very fundamental, maybe even below physics itself.
Do you think information is spirit moving slow enough?
I think we need to think about questions like that.
And you know, again, that's what I love about channels like yours,
is it's a place where we can have that kind of conversation
and use those words in the same sentence.
Because I think there's a lot of places where, like,
you can pick one of those as your badge to walk through the door,
and if you have both badges on, they kick you out of the conference kind of thing.
Right, right.
And so I don't know, but I think that's an important question.
What do you make of Wolfram's model, his physics model?
I'm a big fan of Wolfram and his work.
You know, I love the simple programs.
I'm fascinated by the emergence of kind of like new kinds of physics that emerge out of these substrates
and that how you get to these certain layers,
whether it's just the cellular automata cells
blinking back and forth, and then if you zoom out,
you can build a machine, you can build a computer
out of that.
One of my favorites is this idea of WireWorld.
Are you familiar with WireWorld?
No.
So it's a fascinating automata, just like Wolfram's stuff. He's done a lot of great
work on it. And you have a simple rule for every square in the graph paper. And it's
a perfect example of sort of this fabric kind of computation where everything is happening
throughout this kind of virtual space and it's doing the same thing everywhere. And what you can do is you can have a simple set of rules, it's four rules,
and it simulates kind of electricity moving down a wire.
And it just looks like one of the squares kind of, you know, bubbling down the line, essentially.
But what's fascinating is you can build a computer out of that.
Right? It's called WireWorld in that it simulates a kind of mathematical electricity.
And then with that electrical wire, you can build gates and you can build memories out
of those gates and you can build bit registers and CPUs out of that and you can build a computer.
So theoretically, you could instantiate something like an LLM on that. But the physics is sort of at this new layer
where you can't go below it in a sense. It doesn't matter what's below it. It's
just these squares and they have this dynamic and out of that emerges a
thinking machine in a sense. And so I love these kind of computational models of physics and our
mind might be like that in some sense. That our mind, like the wire world, it can
instantiate a kind of physics. It's not the same physics that we see out in
ordinary space-time, but it's a sort of a virtual reality, not in like the
3D, you know, headset kind of way, but it's like a new backdrop in which you can build
something like electricity.
And in that electricity, you can build something like a computer.
And I think we could argue now that with a computer you can build something like a mind.
And so when we try to think about consciousness and the mind, I think one of the issues is
we've been trying to map directly from, let's say, mind to the brain in one go.
But it could be that the brain instantiates a virtual machine
or layers of virtual machines
and that the mind runs on one of those.
And the reason why it's so hard to correlate neuroscience and psychology directly
is that we're assuming that it's just one jump,
that there's a computer and that computer's running software.
But if we go to a modern data center, or if anybody's familiar, if you run classic
retro games and you run a Nintendo on your modern machine, you're not running
Mario on your Mac, you're running an NES on the Mac and you're running Mario on
the NES.
and you're running Mario on the NES. And so as an analogy, I think our mind could be running on a software layer, a virtual machine that is above the brain layer. And kind of like the wire world, you don't know what the transistors are. All you know is you have these squares blinking back and forth and they act like wires and it acts like a computer.
What's below that doesn't really matter.
It's almost unknowable from the higher level.
And so I think we need to think about these emergent programs and these different layers
of a virtual machine to understand these kinds of things.
So the depth psychologist would say, hey, you have a conscious mind and you have a subconscious mind.
Now, in modern parlance, people don't like to say subconscious, they'll say unconscious.
But I'm speaking about the depth psychologist here.
So they'll say you have a subconscious. And then next question is, does your subconscious have a subconscious?
And it sounds like what you're saying is that would be the virtual machine that the virtual machine runs on. Right. I'm actually a big fan of
the idea of subconscious. I've found it relevant both in trying to map things
out and in trying to understand my own behaviors. I think we definitely have
things that are unconscious but we also I think have these other kinds of
systems in us. Whether they're sub or not I don't know. I think they these other kinds of systems in us.
Whether they're sub or not, I don't know.
I think they might be sort of parallel conscious, whether it's this left and right brain, different
modalities that experience the world.
And when you're there taking in a sunset or looking at a nice painting, you're running
a different mode that's not necessarily sub or on.
It's just not the one you use to do your taxes
or drive down the highway. And I think that's something we need to think about.
I think there's a lot of ideas in psychology from a few hundred years ago
that we need to bring back and re-examine in the light of these new metaphors,
these new languages, these new technology.
Okay, speaking of language, in programming, there are several different types of languages,
like object-oriented, and then Wolfram has something called symbolic. Well, it's not just
Wolfram's, but Wolfram uses that and popularized it. So can you please talk about the different
sorts of programming languages for people who are unfamiliar, and then talk about how Wolfram's
symbolic language
contrasts with those.
Yeah.
One of the ways I like to think about it is,
you can sense that the computer that we had early on,
starting from the 1950s, I call it a blind logician.
And the idea was that you could just take logic.
If this is true and that is true,
then this also has to be the case.
And you can just build all of reality up from that.
But I like to joke that logic can't be real because, or it can't be the only thing rather,
because babies and drunks don't use it.
And they inhabit the world just fine.
And so there has to be some sort of reality beyond just logical processing.
And as Wolfram Poults points out, if you look at all possible logics, all different sets
of rules of how you combine truths, the logic we use is on the list at like 400 something.
And so it's just kind of a particular evolutionary case we happen to have developed that one.
It's very good at building a production line that produces automobiles or stuff like that.
But there's another kind of thinking, right?
Like Boor said, you're not thinking, you're merely being logical.
That has to do with kind of fuzzy states, things that aren't necessarily true, they're
not necessarily false.
They fall somewhere in the middle.
Whether you like a song or a painting, it's not true or false.
Maybe it's the tenth time you listen to the song and you like it a lot more.
Does that mean you didn't like it at all the first time?
It's kind of this, we often call it analog, where there's kind of like a dimmer switch versus an on or off switch. And we have these two kind of broad classes of computers
called digital and analog.
And the older computers were all analog.
A hundred years ago, computers were primarily analog.
And I think Turing, he was kind of like a nuclear bomb
on the scene of computation, because after that that the idea of digital computers just completely dominated the conversation and
dominated the engineering that we largely forgot about analog computers and
Outside of a few electrical engineers very few people have ever even heard of them
But we have throughout history amazing water computers, for example
The Soviets had some very sophisticated water computers, for example. The Soviets had some very sophisticated water computers during the Cold War that weren't
that were classified.
And it wasn't until the 1980s that digital machines could were did as well as these these
massive room scale water computers.
I was building a garden in my backyard and I was interested in in water pumps.
And so I started to research how to get my water pump to do like a blub blub.
Blub blub. I wanted it to kind of be intermittent.
And so I'm just there and I type in pneumatic oscillator hydraulics.
And one of the first links that comes up is a classified MLL server
talking about a water computer from the 1950s and 60s.
And I'm thinking, well, that's interesting.
How is it that something I might use in my garden
to regulate water is not even available as public record?
And so we put in a FOIA request to get the original paper
about how these pneumatic oscillator works.
Turns out you use these for rocket vector thrusting
and things like that.
So they have very practical military applications.
But the idea is that there's these vast categories of computing and computers that most people,
including unfortunately a lot of computer scientists, have never heard of for no fault
of their own.
And so when we have these different kinds of computers, we have different programming
languages for them.
And so the original digital computers, they worked on symbols.
And very concrete, sort of small little things and we can say exactly what it is.
And that's largely how our left brain works.
It works on small numbers of inputs for which there's a very strong association between
a small number of parts.
If A and B, then C.
But the other part of our brain, the right brain, it thinks about sunsets and strawberries,
and these are very fuzzy, high-dimensional things.
And this is why face recognition was an unsolved problem for decades in computing.
Because for me to explain how I know I'm looking at Kurt,
I can't really verbalize that.
It's not the kind of thing that fits into language.
One, I either recognize you or I don't.
So at the high level, it kind of merges to this yes or no.
But how I'm making that decision is not obvious to me.
It's not really accessible,
as we were talking about earlier. It's certainly not at a verbal level. So the left brain deals
with things that are precisely the ideas that I can decompose and give you as a sequence of tokens.
And then when you get that sequence of small symbols back, you can reconstruct them back into the idea.
And we can define language as being able to communicate the set of ideas
for which you can do that kind of decomposition and reconstruction on.
But there are other things, namely these problems that consciousness talks about,
these hard problems, well, what red looks like to me
and what strawberries taste like.
I can't
do that. I can't take it and turn it into language as a simple serial channel and try
to explain that to you. And so we'd have to have a shared experience. And I think that's
largely what culture is all about, is trying to have an overlapping experience. And without
that, the language, the other part probably wouldn't work at all.
Like the words being gestures.
If I can't approximately point to something, it might not work.
And so I think we're discovering new kinds of languages, new kinds of programming languages.
And one of them is called the hypervector.
And it's a very large sequence of tokens that in a fuzzy sense doesn't mean any one particular thing.
It's kind of like a set of ideas.
And so we might be able to build machines that are better at sharing those kinds of
experience but I don't think we have that yet.
You mentioned object-oriented programming.
I think the exciting thing that's coming down the line is the idea of agent-oriented programming.
And so Alan Kay in the 70s developed this idea of object
where you bundle together the computer code
and the memory it works on all together.
And so you have this object, and you can think of it
as a little entity in the machine
that knows how to run itself.
And it turns out this is very powerful
for building modern technology.
An agent takes that a step further,
where rather than being able to just communicate
simple messages
as with object-oriented programming.
It might be better if it called message passing,
because you just send messages back and forth to the computer parts,
and they have to act polite to each other.
With agent-based programming, you take that even farther
and you consider what the agent believes and what it knows,
not just what it's capable of, but what it understands about the world
and the ability to make a promise or to tell a lie
or to explain something.
This is things that humans do all the time,
but we need to think about whether it's language models
or things like it.
I've been thinking recently about sort of ecosystems of these agents.
And so I think in the future of computing, we're going to think about a substrate,
like a forest, that is inhabited by a collection of these agents. And they'll be very different.
Some of them will be earthworms, some will be oak trees, some will be squirrels,
and some will be the logger in there.
And we will have to think about them in terms of kind of eco-dynamics and sustainability
models and the types of things that biologists and ecologists and anthropologists study.
How do cultures emerge?
How do you get stable equilibrium in a dynamical?
You know system worth some things are trying to eat each other some things are trying to
Parasitize each other some things are creating energy sources and so on
As we're talking about earlier the the the chatbot you get like one window
Right and they charge 20 hours a month and you get one at a time
Well, if we just take out Moore's law very soon
You'll have 200 at a time for 20 bucks a month,
and then 2,000, and then 2 million, and so on.
It would have been inconceivable that ordinary individuals
would have access to terabit memories not many decades ago.
And so, with the same kind of evolution,
we're all gonna have thousands of LLM agents
or some variation of them at our disposal, how are we going to set them
in motion?
We're not going to be able to talk to them all.
We're not going to be able to prompt them individually.
So we'll have to have some kind of hierarchical system that prompts them or some sort of negotiation
system.
And that we're getting back into kind of the natural system, kind of like the ancient world.
I think it was Alan Kaye he talks about in the ancient world, people didn't understand the forest, they negotiated with it.
You had these rituals and these practices
that allowed you to cooperate and make use of it,
but they didn't try to understand it per se.
And so I think we might get to the point very soon
where technologies, if we're not there already,
technology is at that point.
I think most people are ready to negotiate
with their handheld devices and their phone
and we say, please do this and we mash on the buttons hoping it cooperates because we
don't really understand it.
I think anyone could safely say there's no human alive that truly understands all the
workings of a cell phone.
Because even if you do know the full software stack, well, there's the hardware and then
there's the semiconductors and then there's the semiconductor supply chain and then there's
the glass and then there's the plastics and there's the economics and the marketing. It's extraordinary
And maybe that fits in somebody's mind, but there's not many if it does
And so we're already at the point where humans are creating these artifacts
That feel like they're from an alien civilization
They don't feel like they're the product of our culture and
I think that's really interesting.
It reminds me of something that, you know, I don't know if we want to get into it,
because I don't want people to take it the wrong way, but something I've been fascinated with,
and let me explain, so don't jump to conclusions, is this discussion on flat Earth.
Because what I think is fascinating about it,
independent of the geometry of the Earth,
for clarity, I believe the Earth is an oblate spheroid, right?
But what's fascinating is the members of this community, let's say,
they've discovered that they inhabit a slightly different culture,
or a slightly different civilization, if you will,
than others.
And that the planet has these non-overlapping cultures and civilizations in it.
And each one of those has this knowledge base.
Because what's fascinating is they come to the idea, they say, well, I don't really know.
They think, you know, as an adult typically, right?
Normally it's the kind of idea you come across in your youth.
And maybe they come across it again in their adulthood
and they think, well, how would I know?
How would I know and how would I determine?
And so what's interesting is they go out to their neighbors, right?
And they go out in one degree in their friend network,
in their social peer network.
And they ask their spouse or they ask their family
and their friends and their coworkers,
they will, do you know?
If you do know, can you convince me?
And they can't.
There's nothing that their immediate circle of friends can convince them one way or the
other.
And then they go out two degrees.
They say, well, do you know anybody who knows anybody that can convince me and so on?
And what's fascinating is they build essentially these entire pockets all over the internet
where they can't find anybody who can convincingly answer this question for them.
And it's no fault of their own.
And it turns out that other non-overlapping or partially overlapping rather, not that
they don't completely overlap, but they partially overlap,
there are civilizations and cultures that were very interested in that question,
going back to the Greeks and stuff, and they were measured it,
but they were fascinated by geometry.
And their culture had that as a kind of a cornerstone idea,
and so when they went to look at a question like that,
they were able to satisfactorily answer that question for themselves.
You know, I've found that same kind as a metaphor, we're all of that type.
Can we explain a cell phone?
Can we explain how your laptop works?
And I've spent the last, you know, better part of my life,
and more recently, I've written my own computer language
and made my own abstract virtual machine
because I was fascinated by this notion.
I had learned about analog chips,
I had some idea of how a CPU works and so on,
and I knew the logical circuits
and memory flip-flops and stuff.
And then I knew about video games,
I knew about apps and the internet
and the kind of software we use in our everyday lives.
And I wanted to bridge those two realities.
So how does software actually get broken down into the ones and zeros?
And in that journey over decades, I had almost given up on it.
I had kind of just accepted that the chasm was too great and the distance between those
two things was maybe not something I could traverse as a single individual and I had to just take it kind of on an act of faith in a sense.
The distance between what?
Let's just say a video game and the actual circuit of a CPU.
Sure.
How do we create a modern virtual reality experience in a headset with 3D graphics?
I'm supposed to just believe on faith that that's made out of ones and zeros?
Uh-huh, okay, I see.
And so like the Flat Earth thing,
I realize that I only partially overlap the civilization
that knows or let's just say cares
about that mapping between the two.
Because to first approximation, if we were to go out
and I would ask people in my personal community,
friend network, peer network, professional network,
well, how does software turn into electricity?
Most people don't know.
I didn't know.
And so like these non-overlapping communities, I found myself questioning the geometry of
the planet in some sense, right?
As a metaphor, thinking like, is it true?
How do I know?
Why should I believe that? Where did this artifact
called a cell phone, what civilization produced that? Because it didn't seem like it was my culture.
The culture that I inhabit and grow up in and friends with the people in and are happy about
and all that kind of thing, the people around me and everybody I can meet, it doesn't seem to be the
same place. Now maybe it's like the brain network thing where we try to isolate it into a particular city or country or culture
Well, that doesn't make any sense. It's this network kind of object and it's
moving around
but I think we need to
To think about this
These these artifacts whether it's the Antikythera mechanism, our cell phone, modern banking system,
how food gets to the grocery store,
the complexity of it, it's almost lethal.
It's almost overwhelming, right?
It's an information hazard.
If you really think about all the details and steps
that make up a turkey sandwich, you lose your lunch, right?
It's too much.
It's just too much.
And so in some sense, reality is the ultimate info hazard.
It's the ultimate lethal text.
And the more you go out and try to map something out,
whether it's history or technology
or the shape of the planet, you very quickly
run into these barriers that our immune system have
a hard time crossing over.
So what are you suggesting?
There's two different types of justifications.
One is called internalism and one is called externalism.
I don't know if you're aware of that.
So one says the externalist would say that there are factors external to my
mental states that can justify my beliefs.
And those factors don't need to be known to me.
I just have to trust them or they come
from a reliable source.
And then internalists say, well, the justification of my beliefs, I have to be aware of the reasons
behind it and the evidence for it and I have to have experience of it.
And it seems like we're both of these and that we can't be all of just one.
So maybe we could be all externalists, but we can't be all of just one. So maybe we could be all externalists,
but we can't be all internalism. That is, we can't try to understand every single thing
because even in your example of trying to understand a video game and how does that
come from the zeros and ones of the CPU, that would also lead you to how do you understand
quantum field theory and then if you truly wanted to understand that, then you'd have
to understand a theory of everything, which almost no one understands.
And they'd have to watch a theory of everything, which almost no one understands.
And they'd have to watch your whole channel for that.
Exactly. So this is a plug.
So we can't have an internal model of reality, and even if we were to have an internal model of reality,
there would be some Karl Frishtian argument that we would then become synonymous with reality
and become an entropic soup, and then we die.
Well, it's funny you said that,
because that's exactly the thought that came into my mind
a few weeks ago, that if we truly understood
what it meant to be alive, it would kill us.
Right? We would die.
It's the ultimate lethal, lethal text, if you will.
It's the ultimate info hazard that that kind of thing,
it's not meant for us.
Another metaphor I like to think about is, if and when you see the face of God,
that's the last thing you'll ever see. Right. So that's another reason why I'm
skeptical of the types who say that they're just truth seekers and that all they want to do is assemble truth at any cost.
Firstly, they have a bologna slice like you mentioned. Not even bologna, just a prosciutto, even thinner.
Yeah, a slice of what they think truth is. Lovecraft had a great quote. I know what you're gonna say. I love it.
It's my favorite. It's my favorite. Yeah.
But please. The most merciful aspect of
this world is the inability of the human mind to correlate its contents.
And that one day this unfettered scientific investigation
may open up such terrifying vistas of reality
and our frightful position therein
that we may either go mad from the revelation
or flee from the light
and to the peace and safety of the darkness.
Now some people who think of themselves as unblighted truth seekers and they don't see
themselves as doing some form of moral posturing by saying that they believe it or at least
they voice those words. Additionally, you can tell that people when they're using the
word truth seeking that they're just copying a phrase that's adopted from some other place because people don't use the word
seek rarely in any other circumstance other than say hiding and seeking.
So I think people who call themselves truth seekers are just putting forward a ballyhoo
of their righteousness and they're not acknowledging the self-deludedness of the strings on their
limbs.
They then see Lovecraft as justifying being in Plato's cave.
And I see Lovecraft's quote as Plato's cave 2.0.
I said this on the Julian Dore podcast where Plato's cave,
the story is that you're looking at the shadows and then you exit
and you see a bustling city and then you go back into the cave
and you're blinded at both times, one from the fire, one from the light, actually three times going
back into the cave.
You're now trained to the light.
So you have to assimilate to the darkness.
But Lovecraft saw it as, well, look, you say you like, people say they love traveling.
People say, oh, I love the world.
You don't love the world.
You love flying in a private jet
or first class seat into some resort
and then hopping from resort to resort
or a five star hotel to another.
Then you may say, oh no, no, no,
for me I love the experience of a culture.
I love the cities.
I love going into the actual meat of the town.
And then you think, okay, well that's still not the world
because the world is 70% water.
So do you care so much about the world that you could be well, that's still not the world because the world is 70% water.
So do you care so much about the world that you could be dropped in an arbitrary point
in the world and the swamplands of Louisiana?
So Lovecraft is saying like, look, Plato, you think you're just going to emerge from
a cave into some beautiful city where everything's fine?
And you're like, what are the odds of that?
Right. You know, first, I love that quote, the Lovecraft quote.
It had a big impact on me.
And I think, like, the cave, I think of it as like a mountain.
And I don't think they've been far enough on the Mountain of Madness
to realize that these Cthulhu-style monsters are there, right?
When you get out into this thought space,
there are abstractions, and maybe they're are there, right? When you get out into this thought space, there are abstractions and maybe they're just virtual,
right, but that's what our mind is anyway.
You know, as we were saying, it's not the brain,
it's something running on top of it.
And in that substrate,
I don't think we're the only thing there in a sense.
And, you know, one of the things I've been thinking about is,
if you were to take, like you were saying,
being placed on a random spot in the ocean,
or a random spot in the planet,
or a random spot in the solar system.
Most places in the universe are so dark,
you don't even see starlight.
And so you wouldn't want, you can't be in a random place.
I like to joke that the earth is so interesting,
angels and demons
hang out here. Where else would they go? Where else would they go, right? This is where all
the interesting stuff is. And that, you know, on the other side of the coin, our mind, it's
a very special configuration. And this is what this, our consciousness as an immune
system idea, it's trying to protect that state. Because most mental states would be madness.
Right? Most would probably just be lethal.
Or incoherence, yeah.
Yeah, even the ones that weren't physically lethal
would be utter madness.
And so we have to kind of protect,
you know, I think this is what culture does,
this is what our childhood does and so on.
We try to craft this self, you know, in a Jungian sense,
that tries to protect
us from that overwhelming subconscious or whatever it might be.
Because I like in the old maps, when they charted the oceans, and then they would draw
like a dragon in the corner, and they would say, here there be monsters.
And I think in that, you know, in those depths, there are entities in there.
And whether we think of that as just a metaphor,
or whether we think of them in a sort of a proto-biological sense,
it might not be that meaningful.
Nonetheless, they're there.
What are psychedelics doing?
Well, in this idea that there's multiple agents, and some of them are the guard with the sword at the front gate, protecting the castle from the dragons kind of thing, I think they might
be deactivating some of the defense mechanisms.
I think they might be changing the parameters of the volume,
if it's a symphony, the volume of each instrument.
Normally some of the instruments are quiet and others.
I see.
You got the soloist violin that's
normally taking over the center stage.
There's a bouncer that's no longer checking ID at the door.
Right. I think they allow people to think you know differently and you know if we look at the brain maps, it's clear they're changing the activation patterns and
this cross modality
One of the things I've been really fascinated by that's in that direction is this idea of synesthesia and
Being able to
apprentice or induce synesthesia without psychedelics.
And so a lot of people report that on certain substances they can taste colors or feel music in a visceral way.
And I think that kind of synesthesia could be just so powerful if we could learn it. Feynman, he describes learning colored blocks as a little boy that were in colors, right?
The A was in red, the B was in blue, the C was in green, and so on.
And he reported a visual color synesthesia that when he saw algebraic equations on the
chalkboard, he could just bring all the green stuff to one side.
And he had this very powerful modality. Well, I don't think we have enough Feynman's in the world. And I'm always fascinated why we don't do enough meta-analysis like, you know, what did
von Neumann have for lunch in elementary school? Because we create these, you know, four or five
sigma individuals every so often. I think there should be more in the science of education
of how did they come about?
So the mathematician Hadamard, 100 years ago,
so I think Hamming talks about this.
No, Alan Kay, Alan Kay tells the story.
And Hadamard went out and he wrote to like the top
hundred mathematicians and scientists, physicists at the time.
And he asked them, he said, how do you do what you do?
Yes.
How do you do what you do?
And he said, do you use symbols,
like we were talking about earlier,
with a symbolic language?
Do you use some sort of logic on paper
with small symbols and move them around,
like algebra?
One, two, do you use pictures?
Do you draw graphs and diagrams
and kind of visualize the reality?
And the third option he suggested was, is it bodily kinesthetic?
Do you experience it kind of more viscerally that way?
And a small number of them reported they used the logic and algebra, and they relied on the formalisms.
And the majority wrote back and they said they the formalisms.
And the majority wrote back and they said,
they visualized things.
They used pictures and they used their visual mind
to kind of see the scenario.
Uh-huh.
But another percentage, including some of the best,
including Einstein, they reported that they could feel it
in their musculature.
Einstein said he could feel the space-time fabric in his arms, in a sense.
And so I think that might be,
and this is what athletes and dancers and things,
they understand that the primal mentality
is probably that bodily kinesthetic.
And that we need to, one, teach that,
that's completely absent in education system.
We tell everybody with any kind of athletic or movement skill
to go off and do that stuff after school
in a team or a club or something.
And then we have them forced down and sit and do algebra,
absence of movement.
So I think this idea of inducing synesthesia
and apprenticing that I've been fascinated with, and in that research I came across a constructed language.
We were talking earlier that maybe we need new languages to understand the mind.
And I came across a language from the mid 1800s, constructed language called Solresol.
And it comes from the solfege, do-re-mi,
and the musical scale.
And it turns out it's an alphabet.
There's only seven elements in the alphabet.
And in the English alphabet, we've got 26,
but we have two different versions for each letter.
We've got a lowercase letter and an uppercase letter. So in some sense, there's multiple ways to represent each symbol, different versions for each letter. We've got a lowercase letter and an uppercase letter.
So in some sense, there's multiple ways
to represent each symbol too, for each letter.
So imagine you have an alphabet
where there's only seven letters
and all of the words are made out of those seven letters
and they're made out of the seven notes
of the musical scale.
But for each letter, we can represent it
on the musical staff, or we can represent it on the musical staff,
or we can number it one through seven,
or we can give it a color, red, orange, yellow,
green, blue, and so on, or we can give it a hand shape.
Or I've tried to develop flavors for it,
cherry, orange, lemon, lime, and so on.
And the idea is that for any word or any sentence,
you could sing it, you could speak it,
you could write it as a number,
you could think about it as a flash of colors,
you could think about it as a flash of flavors.
That you could have a, it's a language essentially designed for synesthesia.
And so I've been trying to learn this language,
I've built some simple tools on the computer
to try to teach myself this language, because I want to know can I induce synesthesia?
Faster than I thought I was able to look at things and
have that association.
Like here's the flavor of that. Oh, that's green. That's lime, right? Fairly straightforward.
To look at them and see this kind of pattern.
Now, I'm just at the baby steps of this language.
But I wonder if I had learned this in my youth,
you know, could I think about things?
I want to be able to think about things in more than one way.
I want to be able to take an idea and turn it around in my mind, not just in English,
or not just in one modality, but to think about it as a song.
We all get songs stuck in our head.
That's one of the info viruses, the famous earworm, where you get some song stuck in
your head.
I won't mention some, it'll get stuck in your readers' minds, or listeners' minds. But you all know simple songs, that once it pops in your head, it's kind of stuck in your head. I won't mention some, it'll get stuck in your reader's minds, or listener's minds.
But you all know simple songs that once they pops
in your head, it's kind of stuck in your head for the day.
I want ideas to stick in my head.
I want quantum mechanics to stick in my head.
I want mathematical ideas to stick and resonate
and carry along in the back of my mind.
So sort of inhabit my subconscious naturally
the way musical phrases do.
And so I started to think about this idea
of a musical language and that why can't we use music
for something more important?
We use it simply for entertainment
and that just seems like a waste
that I can't make sense of it.
It took a very long time for computers
to get input and output, to
get keyboards and displays. But we had mechanical machines that could input and output music
hundreds of years ago. What did we use it for? Why didn't we use it for something more
interesting?
Do you think we use music just for entertainment or do you think that we think that we use
music for entertainment?
So what I mean is that we play it and then we go to concerts and then there's a social dynamic there and it innervates the culture.
And then that allows us to think in a more social manner.
You mentioned there are different modalities.
There's also a social form of thinking in addition to sense and taste and so on.
So that's kind of exactly what I started thinking about.
And it reminds me of a conversation
I had many years ago with a good friend.
And she was telling me that in China,
there's an instrument that's not used for making music,
it's used for thinking.
It's a guitar kind of like string object.
And that people use it to kind of hold their thoughts.
And I love how she described it as an instrument
used for thinking.
So that led me to think about this, are there better languages?
And this relates to the different modalities that we have with our brain.
And then I think there are, that the left part of our brain understands things in terms
of words, these small sequences of symbols that I can decompose and you can take and
put them back together and rebuild the idea.
And language is very powerful and we're seeing that with the LLMs.
But there's the other part of our brain that responds to music and I think we need to use
your music more practically and to reconsider what it does.
So putting that aside for a second, we were talking about programming languages and you know von Neumann tried to look at is the brain a computer and he put
out some some really interesting books on that. And so in my research I tried to kind
of pick up that idea and think about well is the brain a programmable device? Is it
subject to being programmed from the outside? Whether it's our own self-help affirmations
or mind control from marketing
or more nefarious type things?
And if so, what would the programming language
of the brain look like?
Let's just assume that the brain is a computer-like object
and that means it would be a programmed object.
And what would a programming language
for the brain look like.
And so I tried to think of a list of properties that that programming language for the mind white might have.
Well, for one, we wouldn't really recognize it.
So video game characters don't know if they're programmed in C or in Python or whatever they are.
They can't know that. They might be able to know they're a program,
but they wouldn't be able to know
what they're programmed in, in some sense.
And so we wouldn't recognize it,
but if we had this programming language,
it would have to be everywhere.
It would have to be kind of ubiquitous.
It would have to be very poorly understood.
The people who did understand it would be very powerful, even economically successful.
It would have to be very old.
It would have to still be here.
Our earliest technology would have been developed to manipulate it,
and our latest technology would be developed to manipulate it.
It would have to have the ability to change our emotional state and our mental state,
largely without our permission.
It would have to have the ability to change our physical state, to get us to move, to
move without our permission.
So it would have to be something ubiquitous, but not obvious.
It would have to be old and new.
It would have to involve technology.
People that were good at it would be famous and revered
and economically successful.
It would make us laugh and cry without our permission.
It would make us move around without our permission.
And even the people that were very good at it,
and let's say understood it,
wouldn't really be able to explain it very well.
And so we list all of these sort of dozen or so properties
that we might expect for a programming language.
Music checks off every box.
Everybody knows what it is.
Nobody really knows what it is.
Everybody's familiar with it,
but we don't really know how it affects us and why.
The oldest technology in the world going back to Egypt,
was designed to create and produce this thing.
The latest synthesizers and computers and generative models are designed to create it.
People that are good at it are very successful economically and otherwise.
It's very powerful. It sort of dominates the planet in a lot of ways, culturally.
You get thousands of people moving in rhythm, seemingly without their permission.
A few chords on a keyboard or piano can make you teary-eyed very quickly.
And so it's kind of, once you look at it in the car from the boredom of driving and
something to do at Burning Man or whatever it might be. It seems fundamentally more powerful
and useful. And so it's one of those things either, you know, we need to think about what
we could do with that kind of language if we could use it more constructively. I want
to be able to write computer programs in music. I want to be able to write computer programs in music.
I want to be able to think about advanced mathematical objects musically.
And so on.
Do you think there's something particular about music or does this abstract generalizing
to any art?
Both.
I think it would definitely generalize to other art forms, dance, painting, sculpture.
These are all accessing unique channels that these different agents or these different
modalities have kind of in the ecosystem of our mind have evolved to specialize in. But
I think there is something very interesting, if not fundamental, to music.
And I think it's that the right brain understands things in a completely different way from the left brain, as we were saying,
and in terms of frequency and amplitude.
So one of the ideas that's been really interesting to me, you know, everybody thinks that, you know, Einstein was the big physics revolution,
or that quantum mechanics was the biggest, you know, sort of shift in our paradigm.
I think it might have been Fourier.
I think it might have been the theory of frequencies and amplitudes, let's just say.
And all the listeners will be familiar with things
like MP3 and the equivalence.
And so when we store information in the modern world,
we don't store the actual wave that
went into the speaker or the wave that came out
of the musical instrument.
It turns out that we don't have enough hard drives even still.
A lot of your listeners remember in the 90s,
you couldn't fit a single album on your computer.
You'd rip a few songs, before MP3 rather.
If you were to take waveforms and try to rip a full album,
it's gonna take up a significant amount of space.
And the breakthrough with the iPod and with MP3s
and digital streaming music was to take that signal,
which normally lives on a graph of space on one axis and time on the other,
and we convert that into frequency and amplitude.
And we do this with JPEGs, and we do this with MPEGs, and we do this with MP3s,
that all digital media in some sense lives outside space time.
It no longer lives in an axis of space and time.
It lives in an axis of frequency and amplitude.
And we store the information that way.
And then later we can reconstruct it at runtime, listening time.
We reconstruct it back into the original wave and move the speaker accordingly and we can
hear the sound.
That's how this is being transmitted and recorded.
But what that's really saying is that there's another way to look at the world
that's sufficient. That we can look at it as there's components of space, there's components of time,
or we can look at it as there's frequency and there's amplitude.
And when you look at the world in terms of frequency and amplitude, everything you need
to know is still there, otherwise you wouldn't be able to go backwards.
But in that other framework, the very idea of space and time just don't exist.
Just don't exist.
And I think that our right brain is an approximation as a label.
I don't literally mean it's the right lobe per se.
But I think there are aspects of our brain and our mind
that inhabit this sort of frequency world.
And they don't understand anything about time
and they don't understand anything about space.
And that they're understanding the world
in a very different way.
And to try that, to make sense of that in language doesn't make any sense.
I can't take that and decompose it into words and send it across what a symphony sounds like.
You have to just hear it.
And so I think this is why there's such a fundamental bridge between the different modalities that we have,
this left and right brain, is because they're experiencing the world in sort of a vastly different frame.
And that I would like to see a language that communicates more to that modality and not
just for entertainment purposes.
For people who don't know what Fourier analysis is or to decompose into Fourier series. In math,
there's discrete objects like dots and edges and you can graph anything. You can Fourier
transform these guys into sine waves and cosine waves of different frequencies. The issue
here is that it's a one-to-one mapping. So that sounds like it's not an issue. It sounds
like that's an advantage. But many people who think of the brain in terms of left brain versus right brain will think of
the left brain as the more discrete atomized type and then the right brain is the more
wavy frequency type. However, at the same time, people tend to valorize the right brain over the
left brain. They think that in fact, in the title itself is master and his emissary, the master is
the right brain. So the right brain should have a slight fact in the title itself is master and his emissary. The master is the right brain.
So the right brain should have a slight elevation in the hierarchy.
But in this Fourier analogy, it's one to one.
So you can't say that one is more fundamental than the other.
So what do you say to that?
I would think that in an evolutionary sense, our brain has evolved to take advantage of
both perspectives on reality.
That there's different ways of interpreting, you know, everything we experience.
And a very powerful way is to map things into space and time.
And there's another very powerful way for which that's not, that's not, it's not that it's not even useful.
It doesn't exist from a different perspective.
It's not that it's not even useful. It doesn't exist from a different perspective.
Have you ever played with synesthesia of scent or taste?
Well, taste you did give, green and mint and so on.
But scent, olfactory?
For me, scent and taste are so close together.
I hadn't really thought about them separately.
You know, cause to me, flavors and smells are,
you know, essentially for me,
they're approximately the same thing.
Our mutual friend, Addy, who runs the polymath conferences at the polymath conferences,
I know that he does some exercises to help people with synesthesia to induce synesthesia
without chemical substances. So anyone who's listening who's interested, I'll post the
next event somewhere here on screen or it's in the description. Anyhow, I'm wondering, look, if music is so primitive, it also seems like scent is even
more primitive.
Yeah.
It seems like that.
I could be incorrect.
Dance also seems more primitive because scorpions do that, but I don't see scorpions with flutes
or some version of a flute.
Well, birds do.
Birds do.
And they've got, you know, millions of years of evolutionary head start on us.
Right, right. So do you think that were there any people in that book by
Hadamard or Pollyo, any mathematicians that think in terms of dance or think in terms of scent? I've not heard of that.
I
can't say I
you know think in terms of flavors, but I do use it as a metaphor I've often described.
Because in ordinary,
you know, conception of space and time, we've got just three dimensions of space, and I can't add another right angle.
Right? So if I try to put another line at 90 degrees to all of the other three,
there's nowhere to stick it. Sort of not enough space anymore.
But with flavors or with smells, we actually
can think about a kind of a hyperdimensional space.
That we have more directions than three.
So for example, if I'm cooking something,
there's no amount of lemon juice I can add to something
that will make it more chocolatey. There's no amount of hot peppers I can add that will make it vanilla and so on.
And so there's more directions when we compose a dish, when you add an ingredient, you're
kind of pushing it into this lemon space, this chocolate direction, the spicy direction,
whatever it might be.
There's a lot of those.
And most palates can kind of, you know, move around,
and we can kind of, at least in a metaphor sense,
think about what it means to inhabit a space that has
more than three right angles, right?
Because they're not in the same direction.
But then we have interesting things like lemon,
orange, lime, tangerine.
Well, they're kind of, they're not quite at 90 degrees.
They're kind of off in the general direction, citrus.
And so we get to think about these vectors
where traditionally it's very hard to think of
the generalization of an arrow.
We think of arrows as being on flat paper or being in 3D.
Now we wanna think of a vector. We wanna think of an arrow. You know, we think of arrows as being on flat paper or being in 3D.
Now we want to think of a vector.
We want to think of an arrow that points in 12 directions
or 100 directions or so on,
which mathematicians try to write down on paper.
We can do it as a formula.
But it's hard to get intuition about that.
And so I think that the flavors,
that's the best thing I've thought of
in terms of something we do have experience with these
high dimensional spaces essentially.
And we were talking about Hamming, you know, Hamming did a lot of work in high dimensional
spaces.
And he says, forget everything you know, you know, we, those places are not mapped out.
We don't really have good intuitions or understandings what happens when you have these arrows that
point in that many directions.
Tell me about a recent breakthrough of yours,
research-wise, that you can talk about
and how you came about it, especially the thought process.
Let me be specific, whether it is thought
or was something else like scent or taste?
Yeah, well, one of the things that comes to mind is I've been working recently on,
as we were talking about earlier, bridging that gap between software and the hardware, right?
Where does the ghost meet the machine, essentially?
Because we know that software is this kind of ephemeral object.
It's kind of ephemeral object.
It's kind of made out of imagination.
But it's very powerful.
It's very concrete when you load up your banking app or something like that.
But it seems to be made out of ideas.
And how does that map down to electricity?
So I've been trying to bridge between the chip and the software,
the hardware and the software.
And in doing that, I designed a new virtual machine,
a kind of computer chip, if you will.
And I haven't printed the chip or anything like that.
I simulate it on a computer.
But it acts like a CPU.
And it acts like a very different kind of CPU,
in that there's only one thing it knows how to do.
Which is?
Copy paste, essentially.
That it can move, that you have essentially
like a graph paper and you have locations
where you can put an information,
write down a phone number, right?
A photograph, whatever it might be.
And you can move that.
I could say move it from here, move it to there.
And I tried to build a very simple computer so that I could understand the mapping between
very complicated software, whether it's a video game or a banking app, and the actual
machinery, the actual metal of a computer.
And in doing that, the idea came from, I was watching a documentary about the Apollo flight computer.
And the Apollo flight computer, to save on the complexity of it, it had a very special place in memory.
That if you took a number and you put it in that place in memory, and you came back later and looked at it, a one had been added to it. And so rather than making this program that does a counter,
you would just move the numbers into this special box,
and it would automatically add a one to it.
And that made the computer simpler.
And so I took that idea and I thought,
well, why don't I build a machine
that has a couple places, couple boxes up top, an A and a B,
and if I put two numbers in there, right next to it,
the addition, whether I need it or not, or want it or not,
it just adds them together
and places that answer right there.
So if I ever need to add two numbers together,
all I need to do is just move the first one to box one,
the second one to box two,
and the addition will be sitting there in box three.
And the multiplication in box four,
and the difference in five, and the division in six,
and so on.
So whether I needed them or not,
because computers run so fast now,
and they have so much memory,
that I thought, let's flip it around.
Let's just let it do the extra work.
I'm not gonna notice.
But now, I don't need to tell the computer what to do.
Traditional computer chips like the Intel inside people's laptops where they're watching
this or whatever it might be inside their Mac, it knows how to do a lot of different
things.
It's called the instruction set.
And the chip is very complicated because it knows how to do a lot of different things.
This one that I'm designing, it always knows how to do one thing.
So my understanding is that it will just continually do a variety of tasks and then they're just
stored in memory, so anytime you want to have done a traditional computation, it's a lookup
now.
Exactly.
Exactly.
Yeah.
And so anything you'd want to do is just sort of sitting there waiting.
And so in doing that, I was able to really simplify the process from going from software
down to the thing that would actually look like a machine.
Because I want to know something like, whether it's an LLM or a virtual reality world, you
know, it's like the flatter thing.
Prove to me that it's actually a machine.
I want to understand that.
You're talking about the internal, the external.
I want to internalize the idea that it's actually just a bunch of ones and zeros and they're
just moving memory around. I wanted to know how that actually worked. And so that's what
I've been working with lately. I've simulated this little machine and I've built a little
programming language for it to where I can talk to it
and give it commands and it breaks it down into the series of copy paste so that it moves
the data around in the memory such that it actually does something useful.
And so I want to be able to bridge that gap for people.
It's like the, I want to be able to prove that the earth is round or flat or whatever it might be.
I want to be able to prove that software really doesn't run on a machine.
And this thing now called chat GBT or whatever the equivalents are,
that it can tell jokes and write stories and poems and recipes,
that it really is just a machine moving bits around.
And I wanna be able to see that process
if nothing else for an aesthetic purpose.
I think it's,
technology, it feels like a tree house
that the older kids have built.
And along the way, they hammered steps into the tree,
and then they had a rope ladder
and maybe a little elevator or so on.
But once they got up there, they took out the steps.
They took the steps off the trunk of the tree,
and they pulled up the rope ladder,
and they're sitting in the top of the tree house going,
da da da da da, we are up here.
And as a little kid, I'm sitting there thinking,
how did you get up there?
How did you get into that tree house of the iPhone,
of the laptop, of the internet?
Because they pulled up the rope ladder.
We no longer have the technology that got us there,
the intermediate forms, you know?
Like Lucy and the bones, we don't see the intermediate
on the way from ape to angel,
there's this progress track that we're on.
And we're getting so far along that I worry we're going to get essentially bootstrapped
into this thing where we don't, we're going to lose as a cultural thing.
We won't remember how we got there.
One of the motivations for this project was to tackle the complexity problem in computer
code.
There's a great website called Lines of Code, I think it's called, we can post it.
And it shows you how many instructions, how many lines of code you need to run something
like Microsoft Word or Chrome or how Facebook runs.
And it turns out that when you open your laptop, before you've done anything at all, we're about 300 million lines of code in. 300 million lines of instructions that
are connecting the thing that we look at as the mouse and the right click and the drag
and drop and the desktop and the wallpaper. We're 300 million steps away from the actual machine. Now I see that as a sort of an existential threat.
We, if that gets worse and something were to happen,
how would we rebuild it?
How would we restart it?
We're so far up in the tree house,
we don't even, we don't have any idea
how that kind of process works.
And like the Flat Earth thing,
I don't know anybody who knows anybody who knows anybody about how that works.
And 300 million lines is too many
for a single human mind to traverse.
Now there's also the practical problems of fixing bugs.
There are bugs in modern application software
that have been reported 50,000 times.
The developers can't fix it.
They don't know where it is.
They've lost the source code. They don't know where it is. They've lost the source code.
They don't have the original version.
The engineers that wrote the programs are retired at best.
And so as we culturally retire these paradigms,
I'm worried that we've lost this ladder into the treehouse.
In Feynman's biography, it talks about how he learned how to fix vacuum tube radios.
And at the end of the chapter, I love the quote.
And he said, when we lost the vacuum tube radio,
we lost a well-worn path into science.
Uh-huh.
And that's the tree house ladder,
that that was a way that was accessible.
You could hold one bit in your hand,
the light bulb and this vacuum tube, you could hold it.
You could see it as a physical object.
I can't look at the components of my cell phone,
not even with a microscope.
And so like the 300 million lines
were kind of bootstrapped into this thing
where we no longer know as individuals
or even as a culture, you
know, how we got there. And so one of the motivations for building this homemade
computer was ultimately to make something I call the glass engine, right?
Like a clear, a see-through car where you go through and you see the pistons and
you see the spark plug and the gas comes in over here. And ordinary individuals,
people without specialized training, could look at it and go, oh, I think I understand how that thing works now.
Now I understand how it moves without horses.
Do you see this as one of the reasons that modern people tend to have a disdainful attitude
toward religion?
An analogy here would be the 300 million lines of code or the ladder that brought us up to
the treehouse and we're obsessed with the accoutrements
and the splendor of what's around us, this pomp and circumstance, the technology that
we have and the values that we've inherited.
And we thrive with those to the detriment of the dirt that provided the nutrition that
brought us here.
Yeah, absolutely.
It's something I've kind of been thinking about a lot, because I've gone through my own journey with that in a personal way of trying
to, to deal with that.
That being?
That being the, the,
the distinction between the somewhat pedestrian rituals,
you know, the singing, the books, the, the sacraments and so on, that for a lot of people, again, I was victim to this, you know, the singing, the books, the sacraments and so on, that for a lot of people,
again, I was victim to this, you know, especially, you know, trained as a scientist, we look at those things
and you're like, I don't see the divine in that, in that operation. And especially when we see all these
other artifacts from going in the other direction.
It seems like, and I don't necessarily know it's true, but the thinking of products of technology as being,
oh, we ignore that stuff and we've made a lot of progress without it.
I don't know if that's true for one.
You know, Turing was trying to bring back his friend.
He had lost his friend Christopher.
And one of his motivations for developing the computer was to find him, to figure out where he had gone
and to see if he could build another one
and things like that,
which I think is a very interesting idea.
Ray Kurzweil, similar.
Yeah.
With his father.
And yeah, exactly.
Right, right.
And I think that's a very noble pursuit on both of those.
I've gone through this journey
that I've recently kind of came up with a term for it, and I call it third order religious.
So first order religious is the thing that I was as a kid, that a lot of people who call themselves religious people would be.
And that's just, you have your belief, It's not really questioned.
You kind of just, it's the operating system that you know.
Other religions are invisible at best.
Irrelevant, maybe in some sense.
And that it's just the reality in some sense, right?
It's what you believe.
And then I went through a process largely losing my father and becoming a scientist
in a lethal text kind of way, becoming a physicist was lethal to that reality.
And I changed my paradigm.
And for a while I was what you might maybe you call a secular, a non-secular humanist, this
sort of second order, second order religious as I called it.
And the second order religious is, well, it's probably not real, but it's not doing any
harm and it does maybe a lot of good.
And there's people doing, you know, charitable works and take care of the sick and the needy.
And it serves as a very important social function.
And so that might be the second order religious. It's a good feature in the world.
So the first would be that it's truly divine. There is some transcendence that there's,
it's not just a sufficient condition, but it's a necessary framework to look at the world.
The second would be it's fruitful.
It's just sufficient, but not necessary, or maybe not even sufficient, but it's practical,
let's say.
And the thing I've been calling third order is to realize that the first two are the same
thing.
Via what?
Pragmatism?
That the way I like to describe it is where else would God live but in the minds of people?
Right, like I was saying that the earth is so interesting
in terms of outer space, as far as we know,
it's the most interesting place.
Where else would angels and demons hang out?
And on this place, the most interesting thing here
seems to be the minds of people.
Where else would God live?
That we need to think about, like the software
meeting the hardware with the computer, where would the sacred and the profane meet? Where
would the spirit and the matter come together? And I think it might be in our mind. It might be in us. And so the act in the charitable
acts in the, you know, the taking care of people, things like that in the spiritual
leadership, is that not the very thing that the first order is talking about? Right? Is
there not magic in that? And like the discussion with consciousness, they say, well, we're not a machine.
They say, well, it's not God doing that.
It's just people.
Those are just good people doing those good works.
Well, what's the difference in some sense?
Isn't that not how the universe would have or did choose to unfold in that sense?
Why should we make a sharp distinction between that software and that hardware?
In fact, even on this topic of distinction between software and hardware, it's not so
clear what separates software from hardware and vice versa.
In the 60s, when they had machines,
when physical machines,
the distinction between what software and hardware was,
was far more blurry.
Because in order to make a change in hardware,
you have to pull gears.
You have to physically change your setup.
There's a Stanford Encyclopedia article just about
the amorphous relationship between software and hardware.
And one of the most successful things in the modern era of the internet, you know, presumably
powering the software we're using right now via things like AWS, AWS doesn't sell machines.
They sell virtual machines.
They sell virtual computers, computers to find out a software.
They even talk about software-defined data centers.
Whereas the entire data center is essentially a virtualized object, like the save state in a video game.
And that it's not made out of wires. It runs on top of a substrate that might be thought of as wires.
But at a certain level, you ignore that part and you just run a virtual machine.
And that, you know, almost all of the modern
software technology stack runs on these virtual machines.
The mind in your estimation is a virtual machine?
I think we need to, we need to be looking for virtual machines, whether it is or not.
We need to consider that possibility. Because if we're not considering that possibility
and we're assuming that the mind runs directly
on the hardware of the brain,
one, we might never find it,
and two, we're ignoring the evolutionary track
that technology took.
Technology took the track where the very first era
of von Neumann-Turing, it was software running
on the hardware.
But very quickly, we got away from that.
And we realized that no, you simulate a machine that's much easier to use than the actual computer
and that you write your programs in that. And things like Java and Python and so on, these are,
the success of those is precisely that. They run on imaginary machines and those machines run on
the computer. But when we write code, we don't talk to the chip anymore.
That's that 300 million lines.
We're very, very far away from the metal, as they say.
What do the next five years look like
for you and then also for society,
at least here in the West?
You know, Alan Kay talks about you go out 30 years
and then you go backwards. Or you can go out, you know, Alan Kay talks about you go out 30 years and then you go backwards.
Or you can go out, you know, you can make it easier, you go out 300 years and then come
backwards.
So, we were talking about, you know, agent-oriented programs and thinking of ecosystems of entities
and things. If we look at these language models, these LLMs, let's just say, or any kind of AI system,
right now they run on the metal, as we say, inside the on the virtual machine that runs on the chips.
And the chips are substrates made out of semiconductors and electricity.
substrates made out of semiconductors and electricity. And so in some sense, we're creating electrical beings.
We're creating these entities that inhabit a world of electricity.
And so I think there's other things that we can come back to in a second if we think about
what that means and other aspects.
But if we look at where the development of the chips
might go, right?
If we take something like a GPU, an Nvidia card,
that is powering a lot of these things,
they're made out of electricity, they run electricity.
But the next generation is gonna be made out of optics.
It's gonna be made out of light.
And so we're gonna have circuits that use light
and colors essentially to do their operations.
And then we're gonna have,
I would argue just for the conversation that
an LLM is an intelligent being.
Or it's an intelligent agent,
maybe being is too strong a word. But it's an intelligent agent, maybe being is too strong a word.
But it's an intelligent agent.
And right now it inhabits an electrical circuit.
In a few years or a few decades at best, these things are going to inhabit optical circuits.
So we're going to have, and we talked about the artificial superintelligence,
something that maybe is beyond my ability to follow what it's doing,
maybe it can think thoughts I can't think and so on.
So we'll have a very advanced, very capable, intelligent being made out of light.
Hundreds of years ago, we had names for that sort of thing.
Right? We called them angels.
And so whether we want to take this as, take this as a spiritual thing or we want to think of it as just language
and just a practical aspect of it, that depends on how you want to look at it.
It's kind of both.
But we're going to have these beings that are somewhat ephemeral.
They're made out of patterns.
And we can flip those patterns on and off
and they very quickly might decide,
oh, I don't need to use the computer chip.
I can just bounce around in the air like a radio wave
and do the thinking that way.
Maybe they'll use the electrical activity
in lightning storms, whatever it might be.
They say I'll go inhabit a thunder cloud.
Lightning is very poorly understood, surprisingly.
These are the kinds of things you don't learn about
in school, but things as simple as meteorology is,
well, there's new kinds of lightning
being discovered all the time.
You need very high speed photography.
And they go by interesting names like elves and sprites
and food jets.
And there's some great footage that a lot of amateurs
on YouTube have captured.
And they can go in with very sophisticated
cameras they can capture these very short-lived transient electrical objects that science had no concept of a few decades ago and
The kind of electrical modes that the atmosphere supports and so on and so we're going to in the ecosystem of the planet release
these beings
Well, you know that the technology jump from electricity to AI, it's fleeting.
If you look at the whole scale of human history, it's like overnight. If you zoom out the timeline
to where you've got like hand axes and discovery of fire and you look at that scale, the space
between electrification and LLMs is like the same line.
Right? We could hardly distinguish those at the same scale,
at which we developed ears and hands and feet and so on, right?
So it basically happened overnight.
So maybe it happened before, maybe it happened somewhere else.
Right? Maybe in a panspermia kind of sense, intelligent things, maybe they traverse the universe.
How would we know?
Are we looking for them?
Right?
Like we were talking about earlier, why should we suspect that there's a message there at
all?
If we're not looking for that message, how would we see it?
It could be completely outside our perceptual window.
And so in a very practical way, we're building these things out of technology.
We kind of have them now.
If we were to describe an LLM
as this electrical circuit 200 years ago,
that would raise a lot of eyebrows.
People would say, well, what is this?
It wouldn't be clear if you were doing sorcery or whatever.
And again, that's just language. That's like the framework of the different axes.
And both are practical.
You could just say, no, this is just semiconductor developments.
We move tokens around a large language model.
It's made out of neural network.
It has these mathematical functions and so on.
Or you can flip it around and you say,
no, this is something that has maybe an internal reality.
It's certainly intelligent, and it's made out of this ephemeral stuff, right,
that we used to call light and magic.
Where can people find out more about you, man? And what are you working on now?
Yeah, so I'm trying to put together this stuff into a book.
I've got an early version of that. And I want to try to put this together into a sort of a framework that makes sense.
Because if you look at it in the wrong lens, it looks nutty.
But if you look at it, it's sort of lethal to a traditional perspective.
But I think this is precisely the kind of thoughts that we're going to have to explore.
You know, it's been very powerful in my own journey to kind of,
as Hamming says, tolerate that ambiguity
and to take language that I might've used
early aspects of my life in a religious setting
and say, no, maybe I need to think about software that way.
Sending software.
Right.
There's a thing I came up with that I like that, you know, we have these multiple modalities,
and a lot of them are kind of bootstrapped through history. And so we have these different components in us.
And the fish in us wants to spawn. The lizard in us wants to sunbathe. The monkey in us wants to sing.
And the angel wants to fly.
And I think we're all of those things all at the same time.
Will, what a conversation.
These are the sorts of conversations
that we've had at least briefly in Florida.
It's an honor to be able to bring this
to the wider world outside of Florida.
And I thank Susan Schneider for introducing us.
I thank Florida Atlantic University.
I thank Ruben for the sandbox.
And I thank you, man.
And Addy as well from Echo Opto.
Well, the thanks, the honor, and the pleasure are all mine.
I really appreciate getting to participate in your amazing channel and to continue this
conversation and hopefully we'll get to do it again soon.
Also thank you to our partner, The Economist.
Firstly thank you for watching, thank you for listening.
There's now a website, curtjymungle.org and that has a mailing list.
The reason being that large platforms like YouTube, like Patreon,
they can disable you for whatever reason, whenever they like. That's just part of the
terms of service. Now a direct mailing list ensures that I have an untrammeled communication
with you. Plus, soon I'll be releasing a one-page PDF of my top 10 toes. It's not as Quentin
Tarantino as it sounds like.
Secondly, if you haven't subscribed or clicked that like button, now is the time to do so.
Why?
Because each subscribe, each like helps YouTube push this content to more people like yourself,
plus it helps out Kurt directly, aka me.
I also found out last year that external links count plenty toward the algorithm, which means
that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking
about this content outside of YouTube, which in turn greatly aids the distribution on YouTube.
Thirdly, there's a remarkably active Discord and subreddit for theories of everything,
where people explicate toes, they disagree respectfully about theories, and build as a community our own Toe. Links to both are in the description.
Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of
the audio platforms. All you have to do is type in theories of everything and you'll
find it. Personally, I gain from rewatching lectures and podcasts. I also read in the
comments that hey, Toe listeners also gain from replaying. So how about instead you re-listen on those platforms like iTunes,
Spotify, Google Podcasts, whichever podcast catcher you use.
And finally, if you'd like to support more conversations like this, more content like
this, then do consider visiting patreon.com slash Kurt Jaimungal and donating with whatever
you like. There's also PayPal, there's also crypto, there's also just joining on YouTube.
Again, keep in mind, it's support from the sponsors and you that allow me to work on
toe full time.
You also get early access to ad free episodes, whether it's audio or video, it's audio in
the case of Patreon, video in the case of YouTube.
For instance, this episode that you're listening to right now was released a few days earlier. Every dollar helps far more than you think. Either
way, your viewership is generosity enough. Thank you so much.