Theories of Everything with Curt Jaimungal - Top AI Scientist Unifies Wolfram, Leibniz, & Consciousness | William Hahn

Episode Date: February 11, 2025

As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe

Rubin Gruber Sandbox (referenced by Will): https://www.fau.ed...u/sandbox
➡️Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
➡️Listen on Spotify: https://tinyurl.com/SpotifyTOE
➡️Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join

Links Mentioned:
* William Hahn's first appearance on TOE: https://www.youtube.com/watch?v=xr4R7eh5f_M
* William Hahn's Website: https://hahn.ai/
* Jacob Barandes's first appearance on TOE: https://www.youtube.com/watch?v=7oWip00iXbo
* Lilian Dindo on TOE: https://www.youtube.com/watch?v=L_hI7JNsbt0
* Stephen Wolfram on TOE: https://www.youtube.com/watch?v=0YRlQQw0d-4
* Stephen Wolfram's Mindfest presentation: https://www.youtube.com/watch?v=xHPQ_oSsJgg
* Curt's Substack article on Hahn: https://curtjaimungal.substack.com/p/the-hahn-jaimungal-conjecture-the
* Michael Levin and Anna Ciaunica on TOE: https://www.youtube.com/watch?v=2aLhkm6QUgA&t=81s
* What Is It Like to Be a Bat? (paper): https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf
* TOE's Consciousness Iceberg: https://www.youtube.com/watch?v=GDjnEiys98o&t=21s&ab_channel=CurtJaimungal
* Karl Friston on TOE: https://www.youtube.com/watch?v=2v7LBABwZKA&ab_channel=CurtJaimungal

If you're struggling with your mental health or experiencing a crisis, please reach out. You're not alone. Here are free, confidential support hotlines:

Mental Health Support Hotlines
* US: 988 (Crisis Lifeline) - https://988lifeline.org
* Canada: 988 or 1-833-456-4566 (Talk Suicide Canada) - https://talksuicide.ca
* UK: 116 123 (Samaritans) - https://www.samaritans.org
* Australia: 13 11 14 (Lifeline) - https://www.lifeline.org.au
* Germany: 0800 1110 111 / 0800 1110 222 (Telefonseelsorge) - https://www.telefonseelsorge.de
* India: 1800-599-0019 (Kiran) - https://pib.gov.in/PressReleasePage.aspx?PRID=1651963
* France: 3114 (Crisis Support) - https://3114.fr
* Netherlands: 113 or 0800-0113 (113 Crisis Support) - https://www.113.nl
* Sweden: 90101 (Mind Helpline) - https://mind.se/hitta-hjalp/sjalvmordslinjen/
* China: 800-810-1117 / 010-8295-1332 (Beijing Crisis Intervention Center) - http://www.crisis.org.cn
* Japan: 0120-783-556 (Inochi no Denwa) - https://www.inochinodenwa.org/
* New Zealand: 1737 (National Helpline) - https://1737.org.nz
* Spain: 024 (Crisis Line)
* Brazil: 188 (CVV) - https://www.cvv.org.br/
* Ireland: 116 123 (Samaritans Ireland) - https://www.samaritans.org/ireland/
* South Africa: 0800 567 567 / SMS 31393 (SADAG) - https://www.sadag.org

In an emergency, always call your local emergency number (e.g., 911 in the US, 112 in Europe).

Support TOE on Patreon: https://patreon.com/curtjaimungal
Twitter: https://twitter.com/TOEwithCurt
Discord Invite: https://discord.com/invite/kBcnfNVwqs

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 You hear that? Ugh, paid. And done. That's the sound of bills being paid on time. But with the BMO Eclipse Rise Visa Card, paying your bills could sound like this. Earn rewards for paying your bill in full and on time each month. Rise to rewards with the BMO Eclipse Rise Visa Card. Terms and conditions apply.
Starting point is 00:00:24 We're here at the MIT Media Lab. I'm here with Professor William Hahn. The links to his work are on screen and in the description, as well as the previous time that we spoke, which was a fantastic podcast, and I think people should check it out. Thank you so much. It's really great to speak with you again.
Starting point is 00:00:39 I've been looking forward to this for some time now. Thank you. We spoke off air yesterday and the day before. You were actually there for the Jacob Barandes podcast, which is also on screen. You had some ideas as to how to unify Leibniz's ideas with Stephen Wolfram's work. Please talk about that. Yeah. So really the idea is bringing together ideas from consciousness and computation.
Starting point is 00:01:04 And so typically when we think of consciousness, we have this kind of light switch view that it's either on or off. And there's an idea attributed to Leibniz where it's really a hierarchy. It's kind of a ladder of different capabilities. And it revolves around the ability to represent your ideas, to think about your thinking,
Starting point is 00:01:27 and then to think about that thinking and so on, as this sort of hierarchy that goes up. And if we think about the simplest animals, they have sort of the first-order things like sensation and response to stimulus, but they don't really have any way of representing that response. And it seems that by thinking in terms of representation, and then representing your representation, we can have things like language, and then we can build philosophies on top of that language; it's a way of modeling the model and a way of talking about the talking and things.
Starting point is 00:02:06 And so we get this more analog or at least a stepwise approach to consciousness where it's not just on or off, but it's kind of this continuum. And I think this is relevant now because we're in this age where we're building intelligent machines. And I would argue that these machines that we have in the form of these modern AIs are quite intelligent. And the question now is, do they instantiate something like consciousness? And if so, is it like human consciousness?
Starting point is 00:02:41 Is it something more like a lower animal where it's just knee-jerk reaction you put in the prompt and it just spits out the tokens? Or does it have some way of knowing what it knows and thinking about what it's thinking about? And maybe the current versions that we see right now in early 2025 don't, but it seems to me almost inevitable that if we keep going in this direction, that we will kind of close the loop on this thing. That the snake is going to bite its tail and we will bootstrap this thing we call consciousness inside these machines.
Starting point is 00:03:17 But we might need a kind of graded scale to try to understand that. And then hopefully this will help us understand things like animal consciousness or even higher levels that human consciousness might achieve. So when someone says that consciousness is a continuum and it's not a binary on or off, I think you can always translate a continuum to an on or off by saying, is it zero or non-zero? So for instance, we say that a photon is charge-less; we don't say it has a little charge. Or take the electron: we say it's a charged particle even though its charge is minuscule. So when you say that consciousness is a continuum and it's representations of representations, do you believe there are
Starting point is 00:03:59 entities or things or objects or what have you that are not conscious or is it just all conscious? Well, I think that's a great question. I think that's at the heart of it is trying to figure out does this have sort of that threshold? Do we want to think about things like rocks, for example, as being conscious? And classically that would have been the pinnacle example of, no, clearly it's not. But I would argue that things like a modern GPU are rock-like objects, right? They're these semiconductor layers, very delicate, intricate patterns. But essentially what the classical world would say is a fancy crystal of some kind. And now we're instantiating this intelligent, possibly ultimately conscious behavior out
Starting point is 00:04:49 of this rock-like object. So the situation's getting more interesting than it was previously. How does this tie into Wolfram? So Wolfram has this fantastic idea of computational equivalence. The idea is that the natural world stumbles upon computers quite often. The idea of a universal machine that Turing established, which is a machine that can simulate any other machine. It's a machine that runs software. And in that software programming, you can get this universal machine to simulate
Starting point is 00:05:30 the behavior of any other machine. And in the early days of computing, it seems like computers were very delicate human artifacts that took a lot of engineering work to put together. But what Wolfram showed with his cellular automata research is that it's relatively easy to stumble upon a system that's complex enough to support universal computation. He has this breakdown of four different classes; he uses cellular automata as the case study. But he argues that any system that's not obviously simple, any system that's sort of interesting in its dynamics, the majority of the time is going to support universal computation.
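To make that concrete, here is a minimal sketch of an elementary cellular automaton (an editor's illustration in Python, not code from the episode). Rule 110, one of the "interesting" rules Wolfram catalogued, was later proven Turing-universal, and the entire machine is an 8-entry lookup table applied over and over:

```python
# Elementary cellular automaton: each cell updates from its 3-cell neighborhood
# by consulting the 8-entry lookup table encoded in the bits of the rule number.
# Rule 110 has been proven Turing-universal -- this is all the machinery it needs.

def step(cells, rule=110):
    """Apply one synchronous update with wraparound boundaries."""
    n = len(cells)
    return [
        (rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31  # start from a single "on" cell
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The point of the sketch is how little engineering is involved: a system this simple already clears the bar for universality, which is why "stumbling upon" such systems in nature is plausible.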
Starting point is 00:06:22 So in other words that there's these natural computers all throughout nature and previously we didn't recognize them or understand them as computational objects. I think that's what's just so interesting about this new era of maybe what we might call philosophy is that we have this new language. We can invoke the metaphor of a computer and software and program and universality where, as we spoke last time, the best minds on the planet didn't have access to those thinking tools. And so if that's really true that there's these computer-like objects all over the place,
Starting point is 00:06:58 and now we have computer-like objects that might be instantiating consciousness, namely a GPU running a large language model, which obviously would be highly debated, whether or not that's conscious. But if we take that in the abstract sense and extrapolate that out a decade or so, to me it's clear that we will have a hard time, that it might be fruitless, to argue whether they are or not, because they'll be so similar to the kinds of things that we see in humans, that maybe they're actually all throughout nature.
Starting point is 00:07:34 Maybe throughout the universe, we should expect to find objects that are universal machines, sort of spontaneous in nature. And we wanna think about now what's the chance essentially that they have a software program that might be somewhere on this ladder of consciousness that's attributed to Leibniz. Are you referring to computational equivalence? Exactly. Yeah. Okay. So in the Leibniz case, you were referencing representations of representations and you have a hierarchy.
Starting point is 00:08:08 Right. In the Wolfram case, it sounds like you have what is not a computer, and then what exhibits computer-like qualities, and then you hit universal computer. Right. But there's nothing above that. Exactly. And that's kind of the point, is that you smash into this sort of upper ceiling of computation relatively quickly, right? It's sort of a low speed limit, and whatever vehicle you're in quickly hits that speed limit.
Starting point is 00:08:33 That's what's so profound about universality: that's the ceiling. There's nothing more powerful than that. And so if that is relatively easy to create spontaneously, if in some sense by chance we get a computer-like object, a universal machine, then we want to think about, well, what's the analogous part? It's the software that would go with it, right? So Wolfram suggests that it's relatively easy to find computers in the wild. Well, what's the chance they have any interesting programming to go with that? So, is there a similar ceiling with representations? Like, is it the case that when you hit the fourth representation, you've hit them all?
Starting point is 00:09:13 And is there a universal representation at level six? So, I think you hit the nail on the head with that one because that's really kind of what I've been thinking about, I like how you phrase it. Are humans already at that universality? Are our intellectual sphere, our language, our culture, have we maxed out? Or are we going to transcend this era, reach another level of consciousness, maybe called spirituality, whatever that might be, another layer of evolution?
Starting point is 00:09:42 Or is this it, right? That we have the ability to represent our representations and we can think about our thinking and then we can think about that and so on. And I don't know. I would like to hope that we're not at that universal level yet. I think we mentioned last time on the channel, this sort of path from ape to angel, right?
Starting point is 00:10:07 Or from rocks to light, you know, kind of thing. Where are we on that sort of evolutionary trend, in a sense? And I think that we're not at the angel stage, right? We're not quite there yet. And so it reminds me of a quote, I think they said we're, I forget who says it, we'll have to look it up, but we're angels with assholes, right? That we're still these corporeal beings
Starting point is 00:10:38 that eat sandwiches and have to go through our daily lives. We're not this ephemeral being that's just pure thought and energy and things. But it seems like the AI is on that trajectory. And I wonder if maybe there's just a discontinuity and, as others have suggested, maybe the biological sphere just bootstraps the AI and then that's the next stage of evolution. Because, for lack of a better term, they don't have assholes.
Starting point is 00:11:08 Is the question of have we hit the upper limit equivalent to the question of are there thoughts that we can't think of? Now, what I mean by equivalent is: we can imagine, let's look at this room, that there are thoughts everywhere and we have access to them. Okay, cool. Then there are thoughts that are outside this room. We don't have access to them. Then we'd say, okay, we have not hit the upper ceiling. And then we can say that, okay, so in that case it was, if there exists a thought that's outside this room, then we have not hit the upper ceiling.
Starting point is 00:11:36 We can also say, if we have not hit the upper ceiling, it implies there is a thought that's outside this room. So I'm not sure if that's the case, because then you would make an equivalence between those two. Right. Well, I do think it's the case there's unthinkable thoughts. And the question is, are we able to think about them in the abstract sense? Maybe not the actual thought, but do we really understand or can we even appreciate that
Starting point is 00:11:59 there's things outside of our conceptual and perceptual window? And that's what I've been thinking about a lot recently, or trying to. And you know, we mentioned last time Richard Hamming, and he talks about this idea. He says there's smells that dogs can smell that we can't smell, and they can hear things we can't hear. And there's lots of things in the animal kingdom that have a larger perceptual window. And he argues, why shouldn't there be thoughts we can't think? Right.
Starting point is 00:12:29 It seems quite clear. And also even just arithmetically, there are numbers that we can never conceive of. At least we can write them down like Graham's number, but we can say, Graham's number to the power of Graham's number. We can't actually conceive of it. We actually don't understand what it means for a number to be greater than 150 or so. Right. Right. We can't fit these things into our mind.
Starting point is 00:12:49 We can kind of only approximate them. So to someone listening, they're like, well, what are you talking about? It's quite clear. The philosophical term, I believe, is umwelt. Our umwelt is bounded and has some overlap with other animals. I mean, that's quite clear, but it's not obvious. Okay. So then they're thinking, okay, so what's so profound about this?
Starting point is 00:13:10 Well, so thinking about these sort of unthinkable thoughts has led me in a few interesting directions. You know, why can we not get to that space, right? I think I mentioned last time the future is not what we think it is. Because if we knew exactly which direction to go, we could just go in that direction. And so there's some kind of barrier. And the question is, is this like a natural barrier? We just don't have enough neurons. We don't have enough cortical layers to represent the representations and so on.
Starting point is 00:13:43 And then there could just be another hierarchy. Or something I've been thinking about more recently is that there's a more practical reason why we can't think these thoughts, and it's kind of a sort of an immune system that we mentioned before. That our mind itself is trying to protect us from a variety of thought patterns that would essentially destabilize both our mental patterns and then ultimately our physical self if the thoughts go into a terrible spiral. So from an evolutionary point of view, there might be this kind of filter,
Starting point is 00:14:26 this barrier that evolution has instantiated to prevent us from going nutty, essentially. And, from a Darwinian evolution perspective, if we just wanna kind of populate the planet the best we can, you don't need this kind of intelligence. Intelligence could be kind of a Fermi-style filter that's actually preventing a lot of progress. If we think about, like, you know,
Starting point is 00:14:53 the reign the dinosaurs had on the planet, it was a hundred million years, 200 million years, and humans haven't had that kind of longevity on the planet yet. And it's hard to imagine us continuing the way we're doing for that much time. But imagine, like, whales and dolphins: you could easily imagine them doing the same thing they're doing for millions of years into the future and being happy to do it.
Starting point is 00:15:18 But this also leads me to think about what thoughts our language supports. And I used to think that English was good enough, but now I'm not so sure. And I think what's really interesting is we've taught these language models not just English, but all of the human languages that we could get our hands on. And I think something that's particularly relevant is the computer languages and the mathematical languages that we've also put into that system. So it has representations that it can map into English, but its mind, so to speak, doesn't really operate in English. It operates in these thought vectors, these word vectors that are independent of any particular language.
Starting point is 00:15:56 Okay, this is extremely interesting. So I have two thoughts, and you can take both of them if you like or choose a direction. So when you said that you thought English was enough, are you saying that you thought there was something like the universal language and English is one of them, Chinese maybe another, there may be some languages where they only have a hundred words and maybe those are insufficient to represent. So in other words, some people say there are some words in some languages which you cannot translate, like untranslatable
Starting point is 00:16:25 words. And technically, they should say they're untranslatable into another single word. And let's pick one untranslatable word. So in Turkish, there's some word for the feeling that you have after you drink coffee with friends, something like that. Let's say that. I just described it there with a set of 10 words or so. And if it was German, you would just smush them all together, right, and make one new word out of it, right? Right. Okay.
Starting point is 00:16:49 Which I think Turkish does actually as well. Okay. So is that what you meant when you were saying you thought English was enough? Right. Okay. That it was this, like you mentioned, a universal software framework. That we have this idea of universal machine, but, you know, this idea of a Turing tar pit that just because everything's possible doesn't mean anything's practical with say like a Turing machine or some other abstract computer.
Starting point is 00:17:09 And so the idea is, what computer languages would be sufficient, right? So as an analogy, we could say BASIC is not a sufficiently powerful computer language to instantiate the internet, or something like that. Or it might be, but it would require orders of magnitude more complexity in the coding. Whereas something like C ended up being sufficient to kind of create the modern software stack. And so I suspect now that English is not quite as universal as I previously thought it was. This is why I've been looking at things like musical languages and conlangs and esoteric languages, to get some ideas about where we could put these unthinkable thoughts.
Starting point is 00:17:51 Because when you learn those new languages, you can suddenly think in new, maybe more compact ways. You don't have to have a whole sentence. You can have just sort of one neuron. But like you said, I would argue that there are no untranslatable words. And if there were, you couldn't talk about them, like, at all. You wouldn't even be able to mention that they were untranslatable in some sense. And so that would be in the space of sort of maybe not unthinkable thoughts, but uncommunicable
Starting point is 00:18:14 thoughts. And one of the ideas I've been thinking about this year is the set of ideas that you can take apart, like Legos, right? They have the Lego Lab upstairs. And Legos you could take apart into the little bricks, and, like language, I have an idea in my head now, and I'm decomposing it into a serial sequence of symbols,
Starting point is 00:18:37 and then you're taking those and you're putting them back together, and I hope that you get the same kind of mental representation. But you could define a set of ideas for which precisely you cannot do that, right? Where if you take any one piece, it's not the same thing anymore. That it doesn't go down into a sequence of things. And that might be, you know, really, it reminds me of things like gnosis, right?
Starting point is 00:19:02 Knowing from the inside where you can't explain, you know, things like faith and God and all that. And I think a lot in the modern world, people were like, well, explain it to me, you know, explain what you mean by faith and God. And it's like, well, that's not how it works. It's precisely the kind of idea that I can't decompose into a sequence and you can put it back together. And we try and we have religious texts and, you know, services and things that approximate
Starting point is 00:19:24 it. But really the ultimate experience is of that kind where you can't do that. Let me have fun with this immune analogy. Is it the case then that there would be autoimmune disorders of the mind? The mind sees something that's actually benign or even salubrious, so nourishing, and thinks that it's not, and so it tries to remove it, and perhaps it attacks itself in the process. It's a fantastic idea. I'd have to think about that a little more. But I think in some sense, absolutely, that there's going to be, and that might be kind of the basis for mental diversity and mental disorders, if we might call them that. Because now we're better understanding language than
Starting point is 00:20:07 ever before. By building these machines that instantiate language, it gives us this petri dish, this microscope. And I think there's a lot of mystery around how these language models operate, and I think the real mystery is in the words, in the language. We didn't understand what words were. We didn't really understand what language was. And we thought it was this communication kind of thing. And I've been thinking about it more with my colleague, Dr. Berenholtz, as an organism, right?
Starting point is 00:20:43 Something that's alive, and that we need to, you know, invoke the framework from biology to think about. Not just as communication, but as, you know, I've referred to it as a parasite in some sense, and not necessarily pejoratively; I refer to it as a divine parasite. But it's something that lives on the brain, but it's not the brain. And one of the ways I've been thinking about this is with things like multiple personality syndrome and split personality syndrome. I think those are evidence that when I'm talking to you now, I'm not talking to your brain,
Starting point is 00:21:24 right? I'm talking to this entity that lives in your brain. And when we see people with multiple personalities, it's clear that they're instantiating more than one operating system, software, stack on the same hardware, and they're multiplex. Like we run more than one app on our phone at a time. That the brain is rich enough of a substrate, of a backdrop, to have multiple plays unfolding at the same time. And then this leads into kind of the other side of unthinkable thoughts and this immune system concept in areas of things like mind control.
Starting point is 00:22:02 And is our brain a programmable object? And if so, is language, is natural language, a programming language? Hi everyone. Hope you're enjoying today's episode. If you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with my personal reflections, you'll find it all on my Substack. Subscribers get first access to new episodes, new posts as well, behind-the-scenes insights, and the chance to be a part of a thriving community of like-minded pilgrims. By joining, you'll directly be supporting my work and helping keep these conversations at the cutting edge.
Starting point is 00:22:37 So click the link on screen here. Hit subscribe and let's keep pushing the boundaries of knowledge together. Thank you and enjoy the show. Just so you know, if you're listening, it's C-U-R-T-J-A-I-M-U-N-G-A-L dot org, CurtJaimungal dot org. This is extremely interesting because it relates to the difference between hardware and software, which there may not be as much of a difference as we thought. So when we speak about the brain, and even psychological disorders as you alluded to, or directly referenced actually, some of them are neurologically induced. So
Starting point is 00:23:11 there's a physical basis to some of them, or at least correlations. Okay. So when we speak about the brain, are we just speaking about the folds and the neuronal connections? Are we also speaking about the particular firing patterns at this moment? Because there's you as you would be if you were dead and someone froze you. But there's also you as we look at you, and somehow we can quantum clone your brain, even though there's a no-cloning theorem, but let's just imagine you can clone every single thing. But that would assume the brain's quantum, which we're not clear on. Sure.
Starting point is 00:23:44 Okay, well, whatever. It doesn't matter. We get down to the electrical level. We see patterns of firings. So I imagine the patterns of firings along with your brain are the personality, or could be. I don't know. I would say that's at least an ingredient. So let me know what you think about the distinction
Starting point is 00:23:58 between hardware and software as it relates to the brain. Yeah. So one of the things we spoke about last time is this idea of a virtual machine. And as I mentioned a minute ago, if we go back into neuroscience 150 years ago or so, they didn't have the computer metaphor and they didn't have the software metaphor to understand it. Well, now we do, and that's what caused the cognitive revolution: to think, okay, now we know
Starting point is 00:24:20 that there's this thing called program and so on. But I think we need to take it a step further and think about machines that themselves are made out of software. So the example we gave last time that I like to think about is this idea of an emulator, which the listeners who play video games might be aware of. But this is a piece of software that you run on a modern computer that simulates an older computer. So if you want to play Super Mario on your MacBook,
Starting point is 00:24:49 you can get this program that doesn't run Mario directly on your Mac chip. The Mac chip simulates, it pretends to be, a Nintendo chip. And then the Nintendo chip naturally can run the Mario software. And what's really interesting is you can do that again.
Starting point is 00:25:07 You can have an emulator running on an emulator. Interesting. Like you could somehow run an Atari game on the Nintendo. Yeah, I've never thought about that. And so it could be virtual machines all the way down in some sense. And so I think when we try to understand language and the brain and the self and personality and ego and all that kind of idea, we need to consider that there could be multiple layers between the thing doing the talking and the neurons doing the firing. And traditional neuroscience would suggest that there's kind of a one-to-one mapping, right? And I think the reason why we haven't made as much progress in those spaces is because
Starting point is 00:25:48 there's multiple representations. So when one references these turtles, the turtles all the way down, I take it extremely seriously. So let me say it's virtual machines all the way down. Do you actually mean that, or do you mean that it's multiple levels of virtual machines, but there's a finite N? Well, that's a good question. So, in the brain, I think there could be multiple layers of this virtual machine, more than we think.
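To give a toy picture of what stacked emulation means (an editor's sketch, not anything from the episode), note that a "machine" can be just a program that interprets other programs, and nothing stops an interpreted program from being an interpreter itself:

```python
# A tiny stack machine: a "computer" realized as an ordinary Python function.
# Programs are lists of instructions; the machine only sees this vocabulary.

def run(program):
    stack = []
    for instr in program:
        if instr[0] == "push":
            stack.append(instr[1])
        elif instr[0] == "add":
            stack.append(stack.pop() + stack.pop())
        elif instr[0] == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

prog = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]  # (2 + 3) * 4
print(run(prog))  # 20

# 'run' is itself just data to the Python interpreter, which may itself be
# running inside a VM or an emulator. Each layer only ever talks to the
# virtual machine directly beneath it, never to the bottom hardware.
```

Nothing in the program reveals how many interpretation layers sit beneath it, which is the sense in which the thing doing the talking need not map one-to-one onto the thing doing the firing.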
Starting point is 00:26:10 But it makes me think of a much broader idea in terms of matter, right? That quantum information theory would suggest that matter itself is an information-like process. And that when we get down below the atoms, there might be software again. And it's sort of this interesting loop where we have information at the bottom, which somehow instantiates the atomic reality. And then on that, we build computers out of the atoms, and then we get software again. It would sort of complete the circle. So both in our mind and in the universe it might be sort of these virtual turtles, right?
Starting point is 00:26:54 Which I think is an amazing reference. We're in the MIT Media Lab, where Seymour Papert developed the Logo programming language with its turtle graphics. Earlier, when you were talking about representations, you're saying, okay, we can represent representations, etc. At some point, at the first representation, you're representing something that was presented in order for you to represent it. Do you believe that's the case, or do you believe that it goes infinitely downward, or do you believe there's some non-represented substrate
Starting point is 00:27:26 that gets represented? I think in the case of humans, there's gonna be something like a bottom in some sense, right? We see that with cells and things. But it's not clear where that happens. I think there's all this great work, which you've been following, with Michael Levin, looking at the cell and the electrical activity, and there's a lot more interesting there that classical biology kind of just glossed
Starting point is 00:27:53 over. And now we need to go back and realize, no, there's really interesting software at that layer. And throughout the engineering world, we always build a virtual machine. And a lot of times those are just what we call programming languages. So if we think about the electrical activity of the cell and it has some kind of software, it also will have evolved, it seems likely, some kind of other abstraction to make it operate more efficiently. Just as an analogy, when we know about NVIDIA GPUs and we know about these large language models,
Starting point is 00:28:28 well, we don't really work directly with the GPU. We create abstractions like Python and PyTorch and intermediates. PyTorch is an imaginary machine that has things like vector operators and matrix multiply that then get mapped to the low level instructions on the GPU. So in the modern world, we never actually talk to the hardware directly. We build a virtual hardware that's much easier to use. And I would imagine that nature and evolution and biology would have done the same thing.
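As a concrete sketch of that layering (a standard PyTorch usage example supplied by the editor, not code from the episode), the following is written entirely against PyTorch's imaginary machine of tensors and matrix multiplies; whether it executes as CPU kernels or as CUDA kernels on an NVIDIA GPU is decided layers below:

```python
import torch

# We program against the abstract machine: tensors, matmul, autograd.
x = torch.randn(4, 8)
W = torch.randn(8, 3, requires_grad=True)

y = x @ W         # "matrix multiply" in the virtual machine's vocabulary
loss = y.sum()
loss.backward()   # gradients -- another abstraction the hardware never sees

# Nothing above addresses registers, warps, or memory hierarchies directly;
# the dispatch to low-level instructions happens in layers beneath PyTorch.
print(W.grad.shape)  # torch.Size([8, 3])
```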
Starting point is 00:28:59 So we have an audience here of Professor Berenholtz, and I'm sure he's champing at the bit and has some questions. Yeah, I can't wait for you to have him on the show. And I feel that also; I would like to talk to you at some point. But at least for now, the audience can hear a disembodied voice at least, or perhaps just my voice reiterating the question. Before we get to the audience questions, tell me more about these unthinkable thoughts. Well, one of the directions I've been thinking about in that is the thoughts that other beings
Starting point is 00:29:29 think about, right? And the classic example is, what is it like to be a bat? Classic paper from 74, I think. And you know, I wonder if it's anything at all to be a bat. And I know Elon has some interesting ideas about sort of the animal layer and things in the- Elon Berenholtz, not Elon Musk. Elon Berenholtz, my colleague. We were speaking in the car ride over about this.
Starting point is 00:29:58 Yeah. But I don't know. I think I take a strong approach to this, that maybe it's not like anything to be a bat. So there's this idea in psychology of what's called infantile amnesia. And it's that babies don't really have any episodic memory. And I actually encountered this as a child. I remember I was, I don't know, seven or eight or something like that. And I asked my mom about her oldest memory.
Starting point is 00:30:26 And I didn't understand at the time how she could remember decades of her childhood. But as an eight-year-old or whatever it was, I couldn't remember being two. Right? And I was thinking, well, that's only six years. How is it that after only six years, that's too far back for me to remember, but my mother could remember back multiple decades? How would that sort of thing work? And I think one of the explanations of infantile amnesia might be this idea of representation.
Starting point is 00:30:51 That without language, right, we install language into children between the ages of one and two and so on. And without that framework, it would be nearly impossible to have an episodic memory system without having a labeling system, right? And so when I can say, oh, that nice dinner we had yesterday with Jacob, I have a label, right? And with that label, you can reference this experience and we can say, oh, that was an episode and I remember that.
Starting point is 00:31:23 But how would you refer to it? In the space of all possible mental states, without a language system, which is like the Dewey Decimal catalog, without having this card where I can go, that's the one I mean, there would be no meaning to that experience. And so when we think about what is it like to be a bat, I think they might have this fleeting, ephemeral sensory loop that would involve things like pain and pleasure, but it can't represent those things. It can't recall them. It has
Starting point is 00:31:55 no way of knowing that it went out and got fruit that morning because how would it recall that mental state without a language type thing? Now, maybe we need to do more research and find out, well, they do have a language type system and they're tokenizing the sonar and things like that. I don't know. But it would seem to me that we should be skeptical that it's like anything to be those systems. And I take it to another extreme and say that it's, you know, what is it like to be a human
Starting point is 00:32:27 being? And do we really know, or is it just this approximation that our language model can reference different episodes and different states and so on? Because the example I like to give is the vast majority of our so-called experience is outside this thought window. We don't even know that we have a gallbladder, let alone what it does, right? Most people can't name the organs. They don't even know what organs they have, right?
Starting point is 00:32:54 So if we don't even have words for them, we don't have representations for them, they don't exist, right? I remember a few years ago, a student was saying they wanted to be hollow, and I didn't quite know what that meant at first. Not in, like, the shallow sense; in the sense that they didn't want to have to have all of that complexity, because from their experience point of view, they don't, right? The thing behind your eyes that's talking doesn't, most of the time. Unless something goes wrong, we're completely unaware that we even have all those systems operating in
Starting point is 00:33:23 parallel. We're not thinking about our toes. We're not thinking about our ears. And strangely enough, we're not usually thinking about thinking. And this might again be too strong a statement, but it seems like most people, most of the time, including myself, are not thinking about thinking. We might be thinking, but we're not thinking about the thinking. And then we take it a step further and think about that, but that's when you start to get
Starting point is 00:33:43 nutty. All right, that's when your mental immune system kicks in and says, it's time for lunch. Do you think that getting nutty, quote unquote, is reaching that universal language model? That's an interesting thing. That touches on this lethal experience, lethal concept idea. As we said last time, if you see the face of God, it'll be the last thing you've ever seen. Can we reach that, right? Or is that transcendent? You know, will you just simply die or maybe not in the physical
Starting point is 00:34:16 sense, but you're not a human anymore. And in thinking these unthinkable thoughts, you have to be willing to go crazy. And if you're afraid of going nutty or afraid of going crazy, you can't get past these barriers that your mind has built. Well, what's the point? Why do you have to be willing to lose your mind? Why would you want to at all? I think one way of defining losing your mind is sort of leaving everyone else behind. It's going off in an abstract vector-space direction of thought where there's no one there.
Starting point is 00:34:52 There's no one else there. Or you might be in very rare company. And maybe you could define being human as that overlap. The reason why we can communicate is because we have this shared experience and sort of a common set of beliefs and ideas and languages. And so when you escape that, you're not in the herd anymore. So, there's a difference between having thoughts that other people have not had, so you're in uncharted territory, and also just having a wrong representation of reality. Mm-hmm. Okay, so Gödel, as we talked about in the car ride.
Starting point is 00:35:28 It's not as if he was in uncharted territory where you may have some gold that you can then bring back, and you're like Steve Jobs and you've invented the iPhone, supposedly, even though those engineers also helped with that. It's more like, in Gödel's case, he believed the government was poisoning him, or someone was poisoning him. Mm-hmm. But I think that's a perfect example of somebody who went out into the mines, brought back a gold nugget with his theorems, and lost his, maybe his humanity. I don't want to say lost his mind, but became too far away, to where other humans could no longer, you know, say, hey, just sit down, have a bowl of soup, let's not stress about this. He couldn't take a break for lunch. Literally.
Starting point is 00:36:08 Okay, I would like to talk to you for so much longer, but I know there's some questions here. I'm curious what you think about the, whether you were discussing the software hardware dichotomy and in particular, language and the case of humans and other species that do not have language, is there a completely different level of abstraction that took place at the symbolic level? Because language is fundamentally symbolic.
Starting point is 00:36:36 Right, okay, before you answer the question, can you summarize the question? Yeah, so Dr. Berenholtz is suggesting that when we get this symbolic language layer, did that create a new layer to reality, right? A new expansion of the universe into this new dimension. And it seems like it did, because I think that might explain this extreme chasm between humans, human behavior, and then even other obviously intelligent animals, whales and primates and things like that.
Starting point is 00:37:12 It seems like we are of a different category. And especially if we go back to the ancient world, it was just sort of patently obvious that humans were on this spiritual realm that was just miles above, you know, the plants and animals and the backdrop. And so maybe language did get this universality that we didn't have before, that brains, the birds and dolphins and things have one kind of universality on the hierarchy, but this language instantiates a whole other. But maybe it's just a virtual machine that's much more efficient
Starting point is 00:37:48 and that brains are really powerful, but even better is this software brain, via language, that rides on top of brains. And maybe it's still universal, but like a Mac that comes out today versus the Apple II: they're both universal machines, but one can accomplish tasks that seem impossible from the point of view
Starting point is 00:38:10 of, like, an Apple II. So do you see that as continuous in some ways, or completely discontinuous? Well, that's a great question. In complex systems, there's this famous thing, I think it was Anderson who said, more is different. And the idea is that you get like a phase transition. And the classic example is you have a pot of water on the stove with a digital temperature
Starting point is 00:38:33 setting. And if you just click up one degree every few minutes, nothing happens. Nothing changes about the system. And then suddenly you hit this inflection point where you get boiling. Right? So it's clear that we see that from a practical point of view, that while there is this universality, we do get phase transitions and kind of capabilities.
Starting point is 00:38:52 So we had cell phones in the 90s, but they didn't do any of the things that our cell phones can do, even though they had universal processors inside them. But they hadn't gone through this phase transition of capability. And so it could be that languages like that, that it gives us a whole new set of apps that we can run.
Starting point is 00:39:16 But it also reminds me of this idea that Marvin Minsky talks about, the founder of the media lab that we're in now. And he talks about a car and when a car is running. And we all know what it means when the car is running, right? And it is sort of this digital thing, right? The car is either running or it's not. Maybe you could get into the weird, you know, kind of states and stuff. But the first approximation, it's sort of this digital.
Starting point is 00:39:43 And he argues that consciousness is like that. That it's this thing, but he argues that it's not mysterious, right? It somehow comes out of this dynamic of all of the parts, and all of those things together instantiate this thing called a running car. And as we know from auto mechanics and maintenance and stuff, it's subtle, or rather it's fragile. And if one system in the thing fails, the car won't start, or the car, you know. So it does seem kind of like this,
Starting point is 00:40:14 and I think we need new language and we need to explore this analog digital dichotomy, because it seems like the boiling, that as we add more complexity, you get new behaviors emerging. And this is exactly what we've seen with these large language models, with the so-called scaling laws, that we took systems that a few generations ago couldn't very well predict the next word.
Starting point is 00:40:41 And if they did, it was just word salad, right? And then, just scaling that system up, just training on a larger data set, suddenly it can do arithmetic. And then you scale it a little more and it can do algebra. Scale it a little more, it can do theoretical physics. And what's the limit to that? I think that we're not that far away, and we're already approaching not just artificial general intelligence, but artificial superintelligence. And I think that when we start training these things on video, for example, like, so the large language model like ChatGPT was trained on Wikipedia and the historical archive of
Starting point is 00:41:15 books and Reddit and things like that. But it hasn't watched YouTube yet, right? It hasn't been embodied in a robot where it actually gets to play in a sandbox for three years in a row like a little kid does, right? Little kids will slosh water around for hours on end, just learning how fluids work and what a container does and all that kind of stuff. They don't have any of those tokens, let's call them, right? And so I think we don't even need a new recipe, right? I don't think it's going to be mysterious how we're going to get to machines that can make real, genuine advancements in chemistry and physics and astronomy.
Starting point is 00:41:51 Just give them a telescope, right? Give them more data and give them experience, and I think we'll get those phase transitions to where we have really capable machines. When you talked about animals, you said that they do experience suffering, because at first it sounded like what you were saying was more of the Descartes view: well, animals do not have consciousness, that's purely a human phenomenon, and thus you can torture your animals and don't listen to their screaming, because that's just the clanking of a machine. Okay, so you're not saying that, but you were saying that animals don't have a self-model. But then do we have a self-model that persists, that actually persists, or is it more of the
Starting point is 00:42:30 Buddhist notion where there's some transient self? I think it's clear that they would experience things like suffering, but I don't know, it's not clear that they have the ability to represent it. And so they don't know that's what's happening to them. And while they'll probably take actions to try to move into a different environment to reduce that, they can't really lament about it, right? They're not going to write the poetry about it and song and things like that to try to express those internal states because they're sort of farther down on that ladder.
Starting point is 00:43:09 Do you believe that having a self-model is also this binary, having a self-model, or is it also somehow continuous? That's a great question. And the first thing that comes to mind is, do we have one self or do we have many selves? And I'm of the opinion that we all have sort of multiple selves and that when we get to break down in that delicate dance, then we can get multiple personalities that suddenly emerge and things. But I think in our subconscious, we have all of those things. We represent other people in our mind. Something we spoke about last time is I carry people in me in some sense.
Starting point is 00:43:53 And the idea that, and again, this is related to software, you know, there's the classic expression, the ghost in the machine, but I think we are machines made out of ghosts, essentially. And are they like selves? We kind of have this one primary self. But I have a copy of you in me, right? It's how I'm able to communicate with you and think about you when you're not in my visual field and things like that.
Starting point is 00:44:17 It's not as rich as the self that you have, but it's kind of like a hologram. It's a lower resolution, but it somehow captures the whole thing in some weird sense. That's interesting. So, when I speak to people on this channel, people ask me, how do I prepare? I prepare for weeks in advance until I get to the point where I can emulate the person. And that's when I say, okay, I've sufficiently prepared: when I can answer almost any question that I can imagine
Starting point is 00:44:45 from the point of view of the other person and be correct. And I can test myself by saying, okay, what questions did this person get asked in an interview? You can put that into an LLM and say, don't give me the answers. And then I can say, what would they say? What are they likely to say? So, you know, like in the large language models, in the neural networks, we have this weight pruning, and we can do a reduced bit depth for each of the weights.
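For listeners who want the pruning and bit-depth idea spelled out, here is a minimal sketch of both operations on a weight matrix (an editor's illustration; real pruning and quantization schemes are more careful about scales, calibration, and retraining):

```python
import torch

W = torch.randn(6, 6)

# Pruning: zero out the smallest-magnitude weights (here, roughly the bottom half).
threshold = W.abs().flatten().kthvalue(W.numel() // 2).values
W_pruned = torch.where(W.abs() > threshold, W, torch.zeros_like(W))

# Reduced bit depth: crude uniform quantization to 4 bits (16 levels).
levels = 2 ** 4
scale = W.abs().max() / (levels // 2)
W_quant = torch.clamp((W / scale).round(), -levels // 2, levels // 2 - 1) * scale

print((W_pruned == 0).float().mean())  # about half the weights removed
print((W - W_quant).abs().max())       # error bounded by roughly the scale
```

A "low-bit-depth copy" of a person, in this metaphor, is exactly that kind of compressed representation: coarser and cheaper, but preserving the overall shape.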
Starting point is 00:45:07 You have a low bit depth representation of me, essentially. But then we need to think of it like an ecosystem, because how many people do you have in your mind? How many people have you interviewed? You have all of those ghosts in your machine, essentially. Yeah, speaking of unthinkable thoughts, well, I had these experiences. I've had an experience, let's just say that, and I'll be somewhat vague about it. We talked about it off air, so I'll just briefly speak about it, where I felt like I was losing
Starting point is 00:45:45 my mind and it was one of the most terrifying experiences, if not the most terrifying experience of my life. And as I was processing this, as you go through therapy and think about it and write about it and so on, one of the large contributing factors is that each week I'm interviewing someone who has an entirely different point of view as to someone else and I have to take them on, first emulate them, but also think that they could be correct because I can't be dismissive of you or contemptuous of you and think, well, my model is the right model and I'm only going to entertain your model as a theoretical fantasy but not actually treat you as you
Starting point is 00:46:26 have an element of, possibly have an element of, reality. So it was as if I'm running into, every other week: this is how the world works. No, this is how reality is. No, this is what reality is. And it just, it sent me into a spiral. How did you recover from that?
Starting point is 00:46:50 Well, in some ways I'm still recovering. It was quite, quite traumatic. And I do have to distance myself from the thoughts of other people and arguments. Well, I don't have to distance myself from arguments, but I can distance myself from the conclusions of other people, especially what they have averred, so what they say with confidence. And with many people, when they proclaim something without diffidence, we give it more credence than when someone's speaking meekly. And so I have to, almost as if they were just typing their speech, evaluate their questions as such and not say, okay, well, what are the accoutrements that come along with their speech?
Starting point is 00:47:32 Kind of like multiple-choice sentences, where they're just sort of sitting there and you haven't assigned "true" to any of them. And one of the, just if people are listening, I went through ACT therapy, acceptance and commitment therapy, I believe that's what it stands for. And I have an episode on it. I interviewed this lady named Lilian Dindo. And one of the pieces of advice is, if you're encountering
Starting point is 00:47:56 something that is triggering, you may actually recoil from it. People will say, no, no, no, don't recoil from it. You're supposed to face it as much as you can, and do so voluntarily. And one of the ways you can do so is, let's say someone said something that is triggering. This is just now a vague example. You can then look at the words and then just read the words and just say, these are just words on a paper. They don't influence me.
Starting point is 00:48:19 They don't have to influence me. I don't have to buy into it. This is one model of the world. So it strikes me as sort of reading the code but not running the program. That's a brilliant way of phrasing it. Yeah. Because you know, because I had the same and still have the same kind of challenges myself, particularly in this kind of research, in thinking about unthinkable thoughts and this concept of lethal text and
Starting point is 00:48:45 so on. When I first came across that, just the concept of an idea that would do harm, I was very hesitant for a long time to even share that idea in the abstract sense. Not even any particular idea, but just the meta idea of that there are harmful ideas. And I had to, you know, in this act of climbing the mountain of madness, I had to retreat because I felt like I was getting too far away from humanity, from myself, from my past. And when we get to sort of new layers of thought, they are lethal to our previous self. Right? Your adulthood is lethal to your childlike self.
Starting point is 00:49:34 Right? So, why were you afraid of sharing even the notion that there are lethal ideas? Because somewhere in me, that idea frightened me. And I was afraid of doing harm to other people. I worried that maybe I would encounter someone who didn't have the right constitution or wasn't in the right place. And that even that idea wouldn't... Because the idea itself, the meta idea is enough of a
Starting point is 00:50:06 seed to then either instantiate a lethal framework, or to open your perception where you start to find them, because I think they're everywhere. Something that we talked about over dinner with Jacob Barandes, so we just recorded an episode with Jacob Barandes, link on screen, obligatory remarks, I recommend you check it out. Is that I want to make sure that what I'm doing with this channel is good, or that it is not promulgating harm. And so it's extremely tricky, because even this, it sounds like what you're saying is it's necessary for the creative endeavor to go outside the norm and to allow yourself to indulge in some dosage of madness. But then at the same time, there is such a thing as madness.
Starting point is 00:50:58 Right. And that hasn't been separated in this conversation. So I would like you to make that distinction. What comes to mind is the real scary idea is that those that have gone mad are not wrong. We have this tendency, and I think it's an unfortunate framework, that those that suffer from mental illness are broken somehow. There's a chemical imbalance, their brain's wired wrong, or they have traumatic experience that sort of messed up their software. But maybe that's not the case at all.
Starting point is 00:51:28 Maybe they're sort of astronauts that have been to the moon, and the rest of us just haven't been there. And like you were saying before with your guests, you have to assume that they have this valid experience. And if we imagine that we are software, then all experiences are real, right? Because they're just virtual machines anyway. So all of our thoughts are made out of just patterns. And so if someone has that experience, it's genuine.
Starting point is 00:51:55 It's not a fallacy. It's what they experience, and by them instantiating it, it's real. And so this idea that the mentally ill aren't broken, but are just at the edge of evolution or the edge of these unthinkable thoughts is, I think, it's something that stirs me. There could also be misattributions. So for instance, they go to the moon and then they come back and say, I was on a balloon made of cheese. And so we then say there are no balloons made of cheese. However,
Starting point is 00:52:33 if they had said and correctly identified there's a rock that orbits the earth and we never noticed it, you understand what orbits are and so on, then we would be like, oh, that's interesting. Can we also, can we go look for that now and we find it? Now, in this example, it's quite foolish because we see the moon, but you get the idea. Yeah. I think that's probably the situation that we're in. When these people go to these places and they come back and they try to use ordinary word vectors,
Starting point is 00:53:02 they try to decompose it and serialize it, and they model in their mind how vectors. They try to decompose it and serialize it, and they model in their mind how your mind is going to respond to their experience, and they describe it, it doesn't match. And then we hear words like balloon and cheese, where in their mind they're thinking about something much richer, but that's sort of the token that they were able to get out. And then we say, oh, no, that person is nutty because they're talking about balloons of cheese
Starting point is 00:53:26 and that's not how we think about it. But if we were to go back millennia and describe the moon in modern terms to the wisest people we could find, they would say that we're nutty. They would say, no, no, no, this is the goddess and this is Luna and this is how it works and this is what it means and so on. And so when we describe it as a collection of rocks in an orbit in a gravitational well,
Starting point is 00:53:53 that would sound like balloons and cheese to them. So part of me wonders if maybe you felt the same. One of the reasons why I didn't talk about my experience, though I do more and more now, still rarely relative to how often I have these podcasts or talk to people, is that I'm ashamed of it. I also thought that it was much rarer than it is. And as I speak to people, I find there are some professors, including the prominent professor of math whom I can tell you about off air.
Starting point is 00:54:26 His name is a household name to mathematicians. He told me, I'm so glad you talked about this in the Karl Friston episode, because I was experiencing something like this myself. I think we'll find out it's a much more universal phenomenon. And this immune-system concept applies sort of at the cultural level, where we don't share those ideas out of fear, in some sense, because we know that they can be lethal to relationships, to our standing in society, to our financial well-being. So even though the experiences are genuine and valid, when it comes to really exposing that raw self,
Starting point is 00:55:09 we don't share that. And I think that's what we need to overcome as a culture. If we want to make it to this next era of evolution and humanity, we have to embrace that. We have to embrace that diversity of thought, and the people who previously were laughed off the stage.
Starting point is 00:55:29 That's precisely where the interesting ideas are going to come from. We mentioned the moon, but take anything out of the modern world, whether it's quantum physics or information theory or even just the idea of software, and go back, like we said, 100, 200, 300 years: these ideas were insane, right? The whole world is just this utterly unimaginable creation compared to what we thought about before. It reminds me: I've been traveling recently
Starting point is 00:56:02 and seeing modern cities up close. I think if you took someone from 200 years ago and brought them into a modern city like Boston, they would think we're a million years into the future, not 200 years. It would be overwhelming and unimaginable. They wouldn't really even be able to take it in: all the lights and little computers and the plethora of cars and all this kind of stuff that we have in the modern world. By the same token, if we think about where the human mental space is going to be in, say, 25 years, I think it's farther out than just the linear 25 years. Right?
Starting point is 00:56:53 The vistas, the Lovecraftian vistas we talked about last time, that are being opened up by these AI tools, are either going to drive us mad or open up a new Renaissance. And we have to get going. Before we go: the audience comprises a general audience, but also a large contingent of researchers. So now you're speaking to researchers, and also to people who want to go into the fields of physics, mathematics, philosophy, computer science. Yeah. Well, you've got such an amazing audience and community,
Starting point is 00:57:25 and I'd really love to know what unthinkable thoughts they're thinking about. You know, put them down in the comments and tell us the stories and experiences you're having off the map, where your GPS doesn't get any signal, so to speak, and you find yourself either in ecstasy or despair or discovery. And I'd be curious to mine the beautiful minds you have on your channel and see what they think.
Starting point is 00:57:55 And I'll also put links to help lines for the various countries, in case anyone requires help. I'll look at the top 10 countries and just put whatever the national hotline is, or what have you. Will, thank you so much. Thank you, sir. Always a pleasure. Looking forward to speaking with you again. New update: started a Substack. Writings on there are currently about language and ill-defined concepts, as well as some other mathematical details.
Starting point is 00:58:23 Much more is being written there. This is content that isn't anywhere else. It's not on Theories of Everything. It's not on Patreon. Also, full transcripts will be placed there at some point in the future. Several people ask me, hey Curt, you've spoken to so many people in the fields of theoretical physics, philosophy, and consciousness. What are your thoughts? While I remain impartial in interviews, this Substack is a way to peer into my present deliberations on these topics. Also, thank you to our partner, The Economist. Firstly, thank you for watching, thank you for listening. If you haven't subscribed or
Starting point is 00:59:01 clicked that like button, now is the time to do so. Why? Because each subscribe, each like helps YouTube push this content to more people like yourself. Plus it helps out Curt directly, aka me. I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say, or on Facebook or even on Reddit, etc., it shows YouTube, hey, people are talking about this content outside of YouTube, which in turn greatly aids the distribution on YouTube.
Starting point is 00:59:32 Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, they disagree respectfully about theories, and build as a community our own TOE. Links to both are in the description. Fourthly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the audio platforms. All you have to do is type in Theories of Everything and you'll find it. Personally, I gain from rewatching lectures and podcasts. I also read in the comments that, hey, TOE listeners also gain from replaying. So how about instead you re-listen on those platforms, like iTunes, Spotify, Google Podcasts, whichever podcast catcher you use.
Starting point is 01:00:09 And finally, if you'd like to support more conversations like this, more content like this, then do consider visiting patreon.com/curtjaimungal and donating with whatever you like. There's also PayPal, there's also crypto, there's also just joining on YouTube. Again, keep in mind, it's support from the sponsors and you that allows me to work on TOE full time. You also get early access to ad-free episodes, whether it's audio or video. It's audio in the case of Patreon, video in the case of YouTube.
Starting point is 01:00:37 For instance, this episode that you're listening to right now was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much.
