Decoding the Gurus - Sean Carroll: The Worst Guru Yet?!?
Episode Date: March 2, 2024

Controversial physics firebrand Sean Carroll has cut a swathe through the otherwise meek and mild podcasting industry over the last few years. Known in the biz as the "bad boy" of science communication, he offends as much as he educ... << Record scratch >> No, we can't back any of that up obviously, those are all actually lies. Let's start again.

Sean Carroll has worked as a research professor in theoretical physics and philosophy of science at Caltech and is presently an external professor at the Santa Fe Institute. He currently focuses on popular writing and public education on topics in physics and has appeared in several science documentaries. Since 2018 Sean has hosted his podcast Mindscape, which focuses not only on science but also on "society, philosophy, culture, arts and ideas". Now, that's a broad scope and firmly places Sean in the realm of "public intellectual", and potentially within the scope of a "secular guru" (in the broader non-pejorative sense - don't start mashing your keyboard with angry e-mails just yet). The fact is, Sean appears to have an excellent reputation for being responsible, reasonable and engaging, and his Mindscape podcast is wildly popular. But despite his mild-mannered presentation, Sean is quite happy to take on culture-war-adjacent topics such as promoting a naturalistic and physicalist atheist position against religious approaches. He's also prepared to stake out and defend non-orthodox positions, such as the many-worlds interpretation of quantum physics, and countenance somewhat out-there ideas such as the holographic principle.

But we won't be covering his deep physics ideas in this episode... possibly because we're not smart enough. Rather, we'll look at a recent episode where Sean stretched his polymathic wings, in the finest tradition of a secular guru, and weighed in on AI and large language models (LLMs). Is Sean getting over his skis, falling face-first into a mound of powdery pseudo-profound bullshit, or is he gliding gracefully down a black diamond with careful caveats and insightful reflections?

Also covered: the stoic nature of Western Buddhists, the dangers of giving bad people credit, and the unifying nature of the Ukraine conflict.

Links
YouTube 'Drama' channel covering all the Vaush stuff in excruciating detail
The Wikipedia entry on Buddhist Modernism
Sharf, R. (1995). Buddhist modernism and the rhetoric of meditative experience. Numen, 42(3), 228-283.
Radley Balko's Substack: The retconning of George Floyd: An Update and the original article
What The Controversial George Floyd Doc Didn't Tell Us | Glenn Loury & John McWhorter
Sean Carroll: Mindscape 258 | Solo: AI Thinks Different
Transcript
Hello and welcome to Decoding the Gurus, the podcast where an anthropologist and a psychologist
listen to the greatest minds the world has to offer and we try to understand what they're
talking about. I'm Matt Browne. With me is Chris Kavanagh, the Little Red Riding Hood to my big
bad wolf. That's who he is. That's pretty good. Yeah, I like that. You didn't see that one coming, did you? I didn't. What big teeth you have!
That's not what Little Red Riding Hood says. That's... you know, I do have big teeth, and in fact my teeth are too big for my jaw, because my parents neglected me as a child and they didn't get me braces when I should have had them, and they got progressively more crooked. And in my 30s, early 30s, I bit the bullet, I got braces, which involved, like, extracting four teeth to make room for all the remaining teeth. And since then they've gone crooked again, so they were getting squished up. There's not enough room even for the remaining teeth.
Well, that explains the teeth, Matt, but what explains you dressed up in grandma's clothes all the time? That's, that's just... don't, don't question, don't question my judgment. No kink shaming. I mean, big bad wolf kink, it's a minority.
Speaking of kinks, this is... it's not directly related to the gurus here, but actually I'm thinking we should do a season on streamers, because streamers was just such a weird collection of people. They're almost distilled guru-osity, in a way, because they're all about cultivating parasocial relationships through their incredibly long streams, right, where they just say and waffle shit to hundreds of people or thousands of people who are looking at them. I mean, we do that too, but the difference is it's asynchronous. You know, it's like a radio, it's not a proper video. We don't get the love back, just in the reviews, and there people are more often than not taking the piss.
Yeah, so I've never watched a stream. I've never watched one of these, Chris, not once. But I do know that there are lots of people, like, typing all the time, and they're interacting with their audience. They're doing like a stream of consciousness thing, aren't they? Like, they're just talking about whatever occurs to them as they play the game. Is that correct? Yes, this is correct. So there's a little thing I'd like to play for you. I'll provide the context.
There is this leftist streamer called Vaush.
And he got caught while streaming.
He opened up his to be sorted folder, right?
And in that to-be-sorted folder was various, like, porn stuff, right?
Not a good thing.
Got to be careful when you're live streaming your content. Happens to the best of us, Chris, happens to the best of us. But what usually doesn't occur is that that folder includes a substantial amount, or even any number, of, uh, lolicon-style porn, like Lolita anime, uh, young-presenting girls porn. And anime.
Anime? Yes, it was anime, I believe. And, uh, but the other one was horse porn, and I believe there were some crossovers. It might have been loli horse. You know, like, you're talking about an underage horse? That would be, in some ways, you know, maybe it's not bad, I'm not really sure. But in any case, I think there were some crossovers, maybe it was two separate things, I don't know. But in any case, it's probably not what you want to flash. But his defense was pretty good. One of them, one of the defenses, was like, he just kind of argued that it was known that he's into horse porn, because, yes, he wants to imagine himself as a powerful stallion.
It's literally a joke, man. The horse, the horse thing is not a joke. This is, we
can't let this we can't let this be smeared, okay? To whatever
extent people can say, oh, we place it off as this or that, okay? I'll make it clear.
You can write this down. I want to fuck a woman as a horse. None of this is a secret.
To be clear, many jokes have been made about this, but I stand by it. My moral principles are rock solid.
My feet are firmly planted in the ground.
I've got my boots up.
They're planted firmly.
You cannot move me from my position.
This isn't a secret.
Go talk to a therapist.
Well, why do you want to be the horse, Vaush?
Because then I'd have a giant dick.
Okay, couldn't you have a big dick the other way?
Well, yeah, I could, like, yeah, I could have a big dick hypothetically in any variety of scenarios, but then it wouldn't really be a horse. But you could be a human with a horse dick? Yes, but then I wouldn't have that powerful stallion energy using it. There you go, that's it, that's the whole thing.
Uh, like, that seems reasonable, I mean.
Yeah, so, so this is... that was his first defense.
Okay.
Something to behold. But his second defense is the one that I really want to focus on
because the second one is the bigger problem, right?
The kind of lolly content.
So how was he going to explain that?
And this is just, it's such a unique defense that I think few people would have anticipated this.
The other one is like a threesome with two chicks and a guy.
And in retrospect, looking at it, knowing now that that artist is a lolicon, yeah, I can see it.
When I looked at it, I think the vibe that I got was like short stack thick kind of thing.
You know what I mean?
Like the way like goblins get drawn
in porn. You'll have to entertain me for a moment on this presumed shared knowledge of how goblins get drawn in pornography. But you know how they're all, like, thick short stacks, right?
He should not presume there is a shared knowledge of goblin porn, at least not in my case, Chris.
Thick short stacks.
Apparently short stack is the lingo for midget porn.
So that's the PC term for that particular genre. But Goblin short stack, it's a bold defense.
You know, you've got me all wrong, copper.
I didn't know it was Lolicon. I thought it was Midget Trolls.
Yeah, it's so good. So that's just a taste of the streaming world, right? There's more to come in the coming weeks as we might dive in there. But yeah, so we do have a decoding episode today, Matt. Um, we got various feedback on the Sam Harris episode. I'm not going to really dwell on it, except to just say that people have different opinions. Some people feel that, you know, we acquitted ourselves fine and they weren't very happy with some things that Sam said. Other people, mainly on Sam's subreddit, think that we spoke far too much, interrupted him.
Oh yeah, interrupted him too much. Yeah, yeah, sorry, sorry about that, everyone. Yeah, yeah, there are people all around saying various things. Some people were saying about the dangers of platforming Sam Harris.
And there I would say, one, he has a much bigger platform
without us and a much more indulgent one
than plenty of other locations if he wants.
And two, that I think you're doing a disservice to the way that his arguments will have been received. I do not think that the people in our audience all will have been like, oh, that was completely convincing, just, you know, every argument that Sam made. I'm not saying they would find us convincing in the same respect.
I'm just saying, I don't think people are consuming things completely credulously.
I don't think we're introducing Sam Harris to a wider audience. No.
Well, yeah, and that does get me to just the last little intro bit, Matt.
You know, we've sometimes toyed with this, the grinding gears segment.
Let's grind our gears this week. And I do have two things to enter into the ledger of gear grinding.
And one is the sensitivity of Western Buddhists.
Oh yeah, right, the sensitivity of Western Buddhism. Now, Western Buddhists are a lovely people. They're, you know, they're an introspective folk. They've become interested, usually, in a religion or a philosophy, sorry, a philosophy, that is, you know, a little bit exotic from their culture. It's not their Christian background. Usually people come to it a bit later in life and approach it more as a secular philosophy than a religious system. And that's fine, everybody is free to do as they wish and enjoy,
and there are plenty of benefits to, you know,
engaging in introspection and becoming part of a community,
reading books about Buddhism and whatever.
But I do detect a touch of defensiveness
whenever I point to the fact that Western Buddhists might be,
in general, presenting a particular interpretation of Buddhism, one that was marketed to the West
and which became particularly popular in the West, which now goes under the term in scholarly
circles of Buddhist modernism. It's
kind of a modernist interpretation of Buddhism, the kind that you would see amongst like Robert
Wright or Sam Harris or various other people, but it goes back to, you know, D.T. Suzuki and various
other figures. And that again, that's fine. Religions change, philosophies change, things
travel around the world and come out in all
different ways. And Buddhism in the modern incarnation tends to be one which appeals
to a specific kind of secular modernized version. But when you mention that, and when you suggest
that there might be an attachment to a particular presentation, a particular interpretation of introspective
practices or interpretations of the self and whatnot. There is a very strong reaction amongst
Western Buddhists, a subset of them, where they are very ready to tell you how incredibly detached
they are from any ideology. They have no religious commitments.
They're not interested in the supernatural metaphysics.
They are purely doing introspective practices.
They don't care about any tradition.
They're not even really, you know, Buddhist.
What is that?
And I detect, Matt, that there actually might be a slight attachment
because out of all the different groups I interact with,
the group which is one of the most triggered by any comment about that, any suggestion that they
might have consumed or be interested in a particular perspective, or that it might be
associated with, like, metaphysics and religion. That's anathema. And so I get all these messages
from people that are like,
I have absolutely no attachment, but here's why you're completely wrong. And I just don't see the level of detachment, especially the emotional detachment that I'm so often being told that
those people display. It could be my mistaken interpretation, but they often seem like they might be emotionally responding to hearing somebody criticize a particular interpretation that they like.
So that grinds my gears, Matt.
I hear you.
I hear you.
I know that.
That can be frustrating.
Like I told you, I visited a Zen Buddhist collective to do meditation occasionally when I was a much, much younger person, and I eventually stopped going because it was a bit too culty. Not very culty, just a little bit, like, they wanted to hang out and have tea and biscuits and talk, and I didn't want to do that, I just wanted to do the thing. And I've had friends who have, you know, been into it and announced to me at some point that they were enlightened, and they just weren't, Chris, they just weren't. And so I guess what I'm saying is that in my own life I've detected a strong theme there. I think one of the big appealing things about, you know, this Western modernistic Buddhism is that it's extremely flattering to you. You know, like, if you embrace it, you can quite easily get yourself to a point, maybe a little bit like some of our previous guests and gurus that we've covered, where you've figured it out, you're on a higher plane, you're detached, you're calm, you're at peace, the rest of the world is in chaos, but you're a little bit special. And I can see the appeal, and I think I detect a little bit of a note of that.
There's a note of that, but there is a note of that, and it's coupled with the complete confidence that they have transcended that, that they are devoid of attachment, or at least they're on the road and you haven't even recognized that the road is there. Because obviously, if you did, you wouldn't make the mistake of assuming that Buddhist modernism is a thing. It's just scholars waffling in their ivory towers. The fact that many of those scholars are practitioners with decades of experience, never mind, never mind, they've got it wrong, they've got it wrong as well. So, yes, I'm just saying, for unattached people, they're a remarkably emotionally expressive people when criticized.
That's the way I would put it.
You don't usually need to announce to people that you've achieved that kind of thing.
You know, you just express it.
You know, it becomes known.
It's like me.
I never tell people that I'm, like, cool and at peace and whatever, you see. I just, I just exude it from every pore.
That's right. Everyone treads their own path. I got there through years of alcohol and drugs, just persistently, consistently. It was a long road, but, um, it got me there in the end. And, um, you know, I don't need to talk about it, though.
That's the cool thing about it.
Well, that's right.
That is the essence of cool.
That's what it is.
And the second group, Matt, and I'll make this one brief, very short,
but is just to say Glenn Loury, the commentator, pundit,
black intellectual in America who often takes a kind of slightly contrarian position
around political issues in the US.
And recently there's been a documentary that came out
that was basically a revisionist documentary saying
Derek Chauvin didn't murder George Floyd.
Like the evidence wasn't properly presented
and the jury were just going along
with the atmosphere at the time.
And actually, if you look,
he was doing what he was trained to do
and George Floyd probably died from an overdose,
not all the things, right?
Now, this particular documentary
got a lot of pickup in the contrarian heterodox space.
Coleman Hughes published an article
about it on the Free Press.
He was talking about it recently with Sam Harris on his podcast. And also John McWhorter and Glenn
Loury covered it on The Glenn Show, a bit dramatically. Oh, but I should say, because
there was an article that came out by Radley Balko, a journalist who covers crime and things.
And it's a very detailed technical rebuttal to the documentary and the Coleman Hughes article. It's a three-piece series, two of the pieces have come out, and they are, they're quite damning. They kind of present that actually, if you look into the evidence, you know, where you actually contact experts, when you go and look at trial transcripts, not just what's presented in the documentary, it actually isn't exonerating of Chauvin. It's damning. And there's lots of reasons that it looks justified that he was convicted of murder. It's a very thorough piece. And it's very cruel towards Coleman Hughes in its presentation, because it essentially says, you know, Coleman Hughes was taking victory laps about being thorough and critically minded, but in actual fact he just bought into a political narrative that fits.
Anyway, then Loury came out in response to reading this, admitted that the evidence was very strong, that he had got this wrong.
We should not have ignored it, but we should have been more skeptical about it, particularly about its technical claims,
which challenge the limits of our own expertise
in terms of being able to evaluate them.
So we're trusting filmmakers to a certain degree
when we do that.
I've been asking myself the question,
how could I have been so,
I almost want to say gullible?
How could I have been so credulous? How could I have not had my guard up? But he also reflected on the fact that the reason
that he got it wrong was likely because of his bias towards wanting to counter the dominant narrative and not liking the kind of social justice reaction to lionize George Floyd, as he saw it.
I think the answer is, well, I wanted a counter-narrative
to the dominant narrative about what happened to George Floyd
and the subsequent developments of the summer of 2020.
I didn't like that police station being allowed to be burned to the ground.
I thought that the lionization of Floyd, the elevation of him to a heroic status to the point that the president, then a candidate of the United States,
could say, and I'm talking about Biden,
in 2020, that Floyd's death resonated. I'm not quoting him, but this was the effect on a global scale. There were demonstrations all over the world, Black Lives Matter and all of that,
even more resonantly than did, even more profoundly than did the killing of Martin Luther King in
1968. I hope I don't misquote here, but I
definitely believe that President Biden said, then-candidate Biden said something to that
effect. It was a big deal. It was a big fucking deal, the killing of George Floyd.
The country seized up on something. And I want it not to be,
when the opportunity to question the narrative came along, I jumped at it,
and perhaps incautiously so, that's what I want to say, which raises a question in my
mind more broadly.
So he actually considered, you know, the incentives on his side of the aisle and how they'd impacted
his coverage.
And he said it left him chastened.
Being heterodox, being against the grain,
anti-woke, being the black guy who said the thing that black guys are not supposed to say,
you can inhabit that persona to such an extent that your judgment is undermined by it.
And I take that as a warning. I mean, I'll accept what you say.
No, we didn't, you know, do anything wrong.
But I'm still a little bit chastened by Radley Balko's, you know, I mean, and what he does to Coleman.
People can read this and see for themselves. Coleman, the youngish, upstart, conservative Black intellectual, is really disquieting.
I mean, he says he's way out over his skis.
He says he's a propagandist in so many words.
And Bari Weiss takes a hit indirectly from Bal, uh, Balko. What kind of outfit is she running over there? Is she subject to the same temptations that we are, too, as we inhabit this role of anti-wokeness, to quickly embrace something that we ought to think twice about before we jumped?
He said this publicly on the podcast, and John McWhorter disagrees and says, we weren't that bad, you know, it's reasonable to ask questions. But Glenn Loury's thing was very good, and I published just a tweet saying this is really nice to see somebody showing intellectual humility, and, you know, that this kind of thing should be applauded when people are willing to do it. And most people agreed, many people. But there was a particular subset.
Again, it's just a small subset of people who replied saying,
why would you praise this?
You're just encouraging him to have worse takes in the future.
And another thing they said was,
you could have sent this message privately.
You know, don't say it publicly
because you will encourage other people
to have bad takes with impunity because they'll know they can just apologize and think.
Yeah.
There's a stream of thought, Chris,
that you never ever should hand it to them.
And by them, I mean the bad people, generally people on the right. Glenn Loury is a right-wing guy, isn't he? And if you give them any credit, say this is slightly less worse than usual, right, well, this is a welcome change, or anything like that, then you are somehow undermining the cause, which is to, you know, fight, I guess, against those ideas. So admitting that Glenn Loury could ever show some self-awareness, could ever self-correct or whatever, I guess is seen as undermining that.
And so that's a point of view.
I don't agree personally.
I will hand it to anyone.
Like if I think of the people that we've covered on this podcast,
someone like Eric or Brett Weinstein,
if God forbid they ever do something that isn't terrible,
I will hand it to them.
And I think you should.
I think it's not useful to pretend. Like, if hypothetically, and I am speaking hypothetically, Brett said something good, um, then it would actually... we would undermine ourselves by pretending that that didn't happen. Yeah, that, you know, we would be shown that we just had blinkers on and we couldn't distinguish good from bad.
Yeah, well, one of the things is, you know, there's the Wint tweet,
which is like, you don't gotta hand it to ISIS, is the point,
or the Nazis, right?
And there is a truth to that, right?
In that, oh, some terrible, terrible person getting a small point, right?
It doesn't undo all the...
You know, Alex Jones could say something reasonable,
but it wouldn't undo all the stuff that he has said or done.
But that's fine, because you can make that point as well, right?
Like, simply saying that he is right on this point or whatever,
it doesn't mean, ergo, now go treat Alex Jones as a good source. And the comment that people made about that as well, like, I feel it doesn't represent human psychology correctly, right? Because, like, I think it's very, very unlikely
that anybody would see my tweet or anybody else, you know,
handing it to someone and be like, oh, I have all these terrible takes.
And now that this person said that, you know, like, it's nice to see intellectual humility,
it means I can say anything and I'll get off scot-free.
So I'm going to start throwing out more terrible takes because I can just apologize.
Because, like, actually most of the incentives go in the direction of: don't apologize, double down, triple down, appeal to your audience, never give a quarter to your critics, that kind of thing. So, like, if somebody's doing that, admitting they got something wrong publicly, it actually is rare. It's rare. So if you immediately
condemn everyone that does it, you're removing the incentive for people to be willing to admit
they're wrong. So I just, that view of psychology, it strikes me as wrong. And it strikes me that
the more common thing is that actually, if you say something nice about people who are on the opposing side,
or, you know, are seen as the enemy, the more likely thing is that you will attract criticism
from people who feel ambivalent about, oh, I like this person, but they're praising somebody who,
you know, is a baddie. So I don't like that, because I'm getting a reflection, you know, it's either falling outside of my tribal categories, or this looks bad on me if I like this person and they're saying that person is good, because, you know, that person is bad. So, yeah.
Yeah, I mean, I guess what you're describing
is partisan politics, right? Like, in a standard two-party system, even a normal one, not the current American one, people on one side of the aisle will not want to say that the other side had a good idea about this particular policy, whatever, generally. But, uh, yeah, I mean, in the end you decrease your own political capital by doing that, so I think it is self-sabotaging. So I agree, just my opinion.
I also think if you want to say something nice about people in private, you should probably be willing to say it in public. I'm not saying that always holds up, but I'd just be like, if you're willing to DM someone to say, I think that was really nice of you, and then you wouldn't do that publicly, there's a little bit of, I don't know, that seems to be a little bit hypocritical. But that's what it is, Matt. This is, you know, it was just, gears were ground. The Buddhists and the woke scolds got to me this week. They were tweaking my buttons. I was the little emotional puppet, you know? I wasn't the serene sky with the clouds passing over.
I was the little puppet being tweaked along by the people annoying me
in the discourse sphere.
So it is what it is.
It is what it is.
That's fair.
That's fair.
I was trying to think of things that had ground my gears.
But, you know, it's just-
You're at peace.
I am at peace. I'm cool. You know, like one of those Buddhist guys. But for real, no, you know, I occasionally tweet political opinions on Twitter about foreign events, like, say, the Ukraine thing, which I have strong opinions about. And, um, you know, people disagree with me for all the wrong reasons. You know, seeing conspiratorial leftists who believe it's, like, all a NATO plot and it's really an imperialistic kind of thing, we're somehow tricking the Ukrainians into defending themselves. Seeing them, like, link hands with the Mearsheimer-style amoral realists and the bullet-headed MAGA isolationist types who sort of adore strongmen like Putin. Like, seeing all these three groups kind of join hands in having a common view on this thing, that's upsetting to me. But that's just, that's just normal politics, perhaps. You know, Decoding the Gurus is not the place to air that grievance, so I would let it go. I'll let it pass through me and out the window. That's
not gurus' business, Chris. We've got that out of the way. Well, is it not? Is it not? Look, Matt, we're just dealing with annoying things, because we're about to get to a guru. Is he annoying? Is he good? Is he terrible? It's unclear. It's the physicist Sean Carroll, somebody that you suggested, and that some people listening
were like, how dare they? How very dare they
think that they... Oh, we dare. We dare.
They have the gall to even comment on his content. What are you thinking, you maniacs? You first come for Chomsky, and now Sean Carroll. What, what depths won't you plumb? And the answer is: no one, no one is above decoding, is the answer. No, even us, even us. You can decode us, just fire up your own podcast and do it. Or, yeah, look, look, you still don't get it. You still don't get it, you fools. We can cover anyone. There's
no guarantee that they will score high or not. We have developed a non-scientific tool with which we put people through their paces and see, do they fit the Secular Guru template? And that's it. That's it. So yes, we often cover people that fit close to the template, sometimes we don't. Carl Sagan was good, Mick West was fantastic. You know, just suck it up, bitches. Wait and see, wait and see. Will we stop? How many times do we have to say, if you like someone, like, our whole method doesn't go out the window just because you like them, okay? Right, just get it through your skulls. This is why people like me, Matt.
Just, um, really flattering the audience any chance you get. So, uh, yeah, Sean Carroll. I know of him, I recommended him, I've listened to more than a few episodes of his.
He's a physicist.
He does talk about physics a fair bit, but he produces a lot of content
and he's been doing this a while.
And I think, you know, he's kind of run out of physics things to talk about.
He's done physics.
He's done with that.
That's not true.
There's only so much physics, Chris.
There's this much and then it's done.
There's no more left.
And you have to go on and, you know, cover other topics and more power to him.
There's nothing wrong with that.
But it does put him into an arena of general purpose public intellectual.
He's not afraid to talk about other topics such as artificial intelligence,
the topic we'll be covering today.
I see. Yeah, that's right.
So he has a podcast series, Mindscape, where he is mostly, Matt, I think, talking about physics topics.
But occasionally, you know, goes further afield and I think has an interest in philosophy and topics.
He's involved in discussions about free will and determinism and all that.
Chris, he does heaps of stuff that's not physics.
The last two episodes was how information makes sense of biology.
And then before that, it was how the economy mistreats STEM workers.
He goes further, it feels.
I'm not saying he doesn't.
I'm just saying he mainly does physics, I think. He mainly does physics, I'd say. He's not Eric Weinstein. He's not Eric Weinstein, make that clear. This is a real physicist, right? And also a science popularizer, but someone with, like, actual research bona fides and, uh, that kind of thing, right? And he's got a really lovely voice. This is one thing I'll say for him right off the bat.
I have to admit, I do like to listen to him as I'm going to sleep.
This is one of my, in the special category of podcasts
where they just have a soothing voice.
And I know some people listen to Decoding the Gurus to go to sleep too.
I think Paul Bloom hinted that we do that for him.
Maybe he actually said that to our viewers.
But he had a post recently about podcasts he liked,
and he mentioned us, but he also mentioned
that some podcasts he uses to go to sleep,
but he didn't say who.
I struggle to imagine anyone going to sleep
to your voice, Chris.
That's true.
Well, okay, so Matt, this is a solo episode. He often has episodes with, like, other guests, but this time he's monologuing, a skill which you and I, I think, both lack. I mean, we do monologue at each other, but not to the extent of, like, an hour-plus episode. So I'm sometimes impressed when people can do this. I'm going to go for it, like, kind of chronologically.
Chronologically, you know, I got to go for it.
I didn't need to slow down as it was being pulled into a black hole.
I'm going to go for it chronologically.
Nice.
Because unlike most of our content, it actually matters which clips you play
because there's an argument built up. This is a feature, what I'm beginning to realize often distinguishes, I would spoil it, but, like, gurus who have more substantial things to say from those that do not: that, you know, you can't just pick and choose from random parts of their conversation. But, like, with the proper secular gurus, it doesn't matter, because, like, very few things actually coherently connect. They're just all, you know, like, thematically connected. But he makes a logical progression for things, as we'll see.
Yeah, and probably the only other little, um, throat-clearing thing to say is that, as you say, I think he does make an interesting, um, he builds an interesting thesis here, and I listened to this episode with interest.
And, you know, when we cover people, I think,
who are saying something that we find interesting,
then I think we've got two modes going on with the decoding going on.
Like there's the do they, don't they fit the guru template.
That's one thing we do.
But, you know, it's also fun to just engage with it as two normal people
with opinions. The way I would frame it is slightly different. This is something I sometimes explain to the students: when people are making arguments or presenting things, they can make substantive points and they can rely on rhetoric, and there's fuzzy lines between those, I grant. But if someone is relying heavily on rhetorical techniques and emotion-laden language and extended metaphors and so on, that is the content of what they're doing, right? Like a Konstantin Kisin speech, it's almost all, it's all rhetoric. Yeah, yeah, there's very little substance. So analyzing it is just talking about the overwhelming amount of rhetoric
it's all rhetoric yeah yeah there's very substance. So analyzing it is just talking about the overwhelming amount of rhetoric
which is there. On the other hand, people can
have substantive content where they're presenting ideas
and sometimes those ideas are bad, sometimes they're good. But in that case,
it often is more relevant to deal with the substance
of the content, right?
Because they are not relying so much on the rhetorical technique.
So I think that's why there's a bit of a distinction sometimes.
Yeah, yeah, I agree with that.
And I'll just try to flag up the distinction there.
Okay, so I like this little framing thing that he does about the episode
and what he's imagining.
Let me just play it.
Sometimes I like to imagine that there are people 500 years from now who are archaeologists,
historians whatever they call them at this far future point who have developed the technology
to decode these ancient recording technologies these uh different ways of encoding audio data, and
they're sitting there listening to the Mindscape podcast.
So for these reasons, for those people 500 years in the future, I do find it important
to provide some context because you and I, you the present listeners and I, know what
the news stories are and so forth, but maybe our future friends don't.
So hi, future friends. And you might be interested to hear that as I am recording this in November 2023, we are in the midst of a bit of change vis-a-vis the status of artificial intelligence,
AI. It's probably safe to say that we are in year two of the large language model revolution.
That is to say, these large language models, LLMs, which have been pursued for a while in AI circles, have suddenly become much better.
Nice positioning there. I hadn't thought about this till I just re-heard that, but there is a way in which that same kind of framing could
lend itself to a grandiosity that my recordings will be looked at in 500 years by archaeologists.
But in Sean Carroll's presentation, as opposed to like a Brett Weinstein presentation, I think it is
more a whimsical science fiction kind of trope, right, where it's not that he thinks that this document is, like, super important, it's just a framing to present where we are now. And you will see why as we go on, but I'm pretty sure that's the way. But it's just interesting, I was thinking, like, an Eric or Brett could attempt to do the same thing, but they would invariably do it where it's very important that the people in the future look back at their material, right? This conversation will go down and be carved into stone, go down in history.
No, no, I'm sure Carroll isn't at all giving that impression. That didn't even occur to me, but now you mention it... Yeah, I'm just being fair, Matt.
So a little bit more about the framing.
So Sam Altman got fired, and that was considered bad by many people, including major investors in the company like the Microsoft Corporation.
So there was a furious weekend of negotiations since the firing happened on a Friday.
And no more than two,
no fewer than two other people had the job title of CEO of OpenAI within three days until finally it emerged that Altman was back at the company. And everyone else who used to be on the board
is now gone and they're replacing the board. So some kind of power struggle, Altman won and the
board lost. I think it's still safe to say we don't exactly know why.
You know, the reasons given for making these moves in the first place were extremely vague
or we were not told the whole inside story.
But there is at least one plausible scenario that is worth keeping in mind.
Also, while keeping in mind that it might not be the right one, which is the following.
Yeah, so he's referring to that kerfuffle that happened there at OpenAI. It's now ancient history, it was late last year. Um, he was reinstated, wasn't he? But, um, you know, his presentation of that, um, was very even-handed, I thought. Yeah, again, it was a little bit whimsical, but he was basically describing what happened.
And then at the end there, he sort of, he emphasizes that there could be a little bit of speculation going on as to what could be going on.
Yeah.
Now, one thing that you'll notice there is that he said things like the reasons given
for making these moves in the first place were extremely vague, or we weren't told the
whole story.
Keeping in mind, like, this thing that he's going to say might not be the right one. So there's all these kinds of caveats where he's saying, you know, we don't really know what happened, but this is what we saw from the outside. And again, you know, this will be a good time to point out to people:
this is the opposite of strategic disclaimers. This is a real disclaimer about someone saying, I don't know, right?
Like I'm expressing uncertainty.
I think it's useful to imagine Eric Weinstein
describing the same scenario.
And can you just imagine the ominous tones?
And we don't know, Matt.
We just don't know.
We don't know.
Something is up.
But something is up.
You know, the powers that be, things are happening, and we just don't know, man. You know, it would be invested with
this paranoid suspicion, whereas the way he relates it, he describes it exactly how I understood it, which was, yeah, it's one of these things that happened and we legitimately don't know, because of course we don't know. It's a company, you know, they don't publish what goes on in their boardrooms.
Yeah.
Now, he flagged up one possibility that he was going to highlight.
And let's just see what that is and how he characterizes it.
Let's put it that way.
Artificial general intelligence is supposed to be the kind of AI that is specifically human-like in capacities and tendencies.
So it's more like a human being than just a chatbot or a differential
equation solver or something like that. The consensus right now is that these large language
models we have are not AGI. They're not general intelligence. But maybe they are a step in that
direction. And maybe AGI is very, very very close and maybe that's very, very worrying.
Okay.
That is a common set of beliefs in this particular field right now.
No consensus, once again, but it's a very common set of beliefs.
So anyway, the story that I'm spinning about OpenAI, which may or may not be the right story, is some members of the board and some people within OpenAI became concerned that they were moving too far, too fast, too quickly, uh, without putting proper safety guards in place.
Yeah, so, um, just a little point of order, and this is not a ding at all on Sean, but, yeah, AGI is legitimately a slightly ambiguous term. Like, it's sometimes used to describe, um, intelligence, like he said, which is, like, human-like. Yeah. But it's also often used to describe, um, I guess, a more general-purpose intelligence, so it might not necessarily be human-like, but it could be multimodal and you'd be able to transfer knowledge to different contexts, which is also somewhat human-like. So it's fuzzy.
Yeah, I think this might come up later because he kind of,
this is one of the points that he makes about it.
But again, Matt, just have to note, there's no consensus around this.
It is a common set of beliefs, not passing strong judgment,
but he's accurately describing the state of play.
And he has an opinion clearly but he's
capable of describing other opinions without like automatically investing them with emotion or that
kind of thing so it's again it's just notably different than the people we usually cover
um in the way that they report things because expressing uncertainty and accurately presenting the state of play
Yeah, he's doing that very academic thing where you kind of do a bit of, like, a literature review. Yeah, like a context-providing survey of the situation before introducing your own particular take, your own particular arguments on the matter. And when you listen to the bit at the beginning, you don't really know what Sean Carroll's position is on, should we be concerned about AGI, should we be worried about it, should we not, are these people fools or are they not? He's actually just describing the situation accurately. He's right, saying that a lot of people are concerned, and he's not investing it with a valence yet.
Yes. And so from there, he does go on to first express a view, then he'll, uh, we'll see him give some rejoinders to particular potential objections, and then he goes into the evidence or basis of his positions in more detail, right, in quite a logically structured way. But here is him expressing a particular perspective.
I personally think that the claims that we are anywhere close to AGI,
artificial general intelligence, are completely wrong, extremely wrong,
not just a little bit wrong.
That's not to say it can't happen.
As a physicalist about fundamental ontology,
I do not think that there's anything special about consciousness or human reasoning or anything like that that cannot in principle be duplicated on a computer.
But I think that the people who worry that AGI is nigh, it's not that they don't understand AI, but I disagree with their ideas about GI, about general intelligence.
That's why I'm doing this podcast.
So this podcast is dedicated, this solo podcast,
to the idea that I think that many people in the AI community
are conceptualizing words like intelligence and values incorrectly.
So I'm not going to be talking about existential risks
or even what AGI would be like,
really. I'm going to talk about large language models, the kind of AIs that we are dealing with
right now. There could be very different kind of AIs in the future. I'm sure there will be.
But let's get our bearings on what we're dealing with right now.
You like that, Chris?
I like that so much, Matt, because you know what I like about it? I like that he's very clear. I already know from this paragraph what he's going to be talking about, what he isn't focusing on, and how far he's not extrapolating. Again, I feel like my brain is addled by listening to the gurus, because this is just the exact opposite of what they do. Their things are never this well structured. There's some exceptions, there are some exceptions. I think, for instance, Sam Harris is someone that does often lay out, like, his positions in the same kind of structured way. But this is just, it's just refreshing. And he makes his position clear, and then he also, you know, highlights, okay, but this is my personal take on this and I'm going to lay out the reasons why. But it's not confusing, it's not layered in metaphor and analogy, it's not grandiose. It's just, this is my personal opinion on this topic. Yeah, once again,
I think, um, Sean Carroll's academic background is showing, isn't it? Because I didn't realize the first time I listened to this,
but it does have the structure of a good academic article,
which is before you get into anything really,
you signpost to the reader.
You let them know what the scope of your thesis is going to be about.
And like you said, delineate the stuff that it isn't and describe what your focus is going to be about. And like you said, delineate the stuff that it isn't
and describe what your focus is going to be. Some people claim this, some people claim that.
I'm going to argue this and then you get into it. Yeah. And so there's now this section,
which is a little bit dealing with potential rejoinders, right, in advance.
You know, some people who are very much on the AI is an existential risk bandwagon will point to the large number of other people who are experts on AI who are also very worried about this.
However, you can also point to a large number of people who are AI experts who are not worried about existential risks.
That's the issue here.
Why do they make such radically different assessments?
So I am not an expert on AI in the technical sense, right?
I've never written a large language model.
I do very trivial kinds of computer coding myself.
I use them, but not in a sort of research level way.
I don't try to write papers about artificial intelligence or anything like that.
So why would I do a whole podcast on it?
Because I think this is a time when generalists, when people who know a little bit about many
different things, should speak up.
So I know a little bit about AI.
You know, I've talked to people on the podcast.
I've read articles about it.
I have played with the individual GPTs and so forth. And furthermore,
I do, I have some thoughts about the nature of intelligence and values from thinking about
the mind and the brain and philosophy and things like that.
Yeah, Chris, I want to speak to this a little bit, and this is slightly just my personal opinion.
I'm actually going to be agreeing with Sean Carroll here furiously. And I'm speaking as someone that does have a little bit of relevant background here. I've got a strong background in statistical machine learning. I was working in it way before deep learning came along. When it first came along, I, with a colleague, did program up a deep convolutional neural network for image processing. And we wrote it ourselves in C++. We didn't just pull some library, some pre-packaged thing, and apply it. And yes, our attempts were, I'm sure, feeble and toy-like by modern standards, but even so, I'm just pointing out, without being a specialist in the area, I haven't worked in it for a long time, I feel like I have the background. And given that, it is interesting that some topics, I think, like Sean Carroll says, are amenable to, I guess, a more general audience kind of weighing in. And I kind of wouldn't say this about something like virology, you know, the origins of COVID and things like that. Or another good example would be quantum mechanics and what's really going on, is it string theory, or is it multi-world, multiple worlds? I think you have to be a specialist to be able to contribute something useful there. But the interesting
to be a specialist to be able to contribute something useful there. But the interesting
thing about AI and statistical machine learning generally is that what you offer, it is a form of
engineering. And what you end up building is a kind of, yes, you have an architecture.
Yes, you understand the learning algorithms and so on.
And yes, there is a benefit to understanding things like local minima and error functions and dimensions and things like that.
Matrices, matrix algebra and the rest.
But honestly, that stuff... Matrices, some would say.
Yeah.
The matrix, Chris, the matrix. Yeah, so, but the funny thing is, it doesn't
give you, like, a massive insight into what's going on at the broader level. Like, to a large degree, someone who does understand all that stuff perfectly, I'm not saying I understand it perfectly, but I understand older artificial neural network and machine learning models perfectly, mathematically.
And it's a bit like statistics. Statistics is an interplay between mathematics and the real world.
And it's the same with machine learning in that you build things, you apply the algorithms,
and then you see what it does and how it works. And yeah, anyway, I'm just agreeing with Sean here that I think it is totally legitimate for not just, you know, maths and engineering geeks to have opinions about it. I think it's good that people like Sean Carroll have opinions as well.
I think we are often misunderstood on this point as well, and I'll have, uh, something to say about it, but I think it would be better to play the second clip, because Sean Carroll makes some of the points that I would, more eloquently.
It's weird to me
because I get completely misunderstood about this.
So I might as well make it clear.
I am not someone who has ever said,
if you're not an expert in physics,
then shut up and don't talk about it, okay?
I think that everybody should have opinions about everything.
I think that non-physicists should have opinions about physics.
Non-computer scientists should have opinions about AI.
Everyone should have opinions about religion and politics and movies and all of those things.
The point is, you should calibrate your opinions to your level of expertise.
OK, so you can have opinions.
But if you're not very knowledgeable about an area,
then just don't hold those opinions too firmly.
Be willing to change your mind about them.
I have opinions about AI
and the way in which it is currently thinking or operating,
but I'm very willing to change my mind
if someone tells me why I am wrong,
especially if everyone tells me the same thing. The funny thing about going out and saying
something opinionated is that someone will say, well, you're clearly wrong for reason X,
and then someone else will say you're clearly wrong, but in the opposite direction. So if
there's a consensus as to why I'm wrong, then please let me know. Anyway, whether I'm wrong
or not, I'm certainly willing to learn, but I think this is an important issue.
Yeah. He says it well. He puts it well. Hard agree. Hard agree. It's a nuanced point though,
isn't it, Chris? Because it's about, like you said, calibrating your opinion and being flexible
with it and just having an appreciation of the stuff that you don't know about a topic.
Let's take a different example, like the origins of COVID. You and I have opinions about it, and it's not just purely, like, oh well, these guys with a lot of expertise said X, so that's what we think. Yes, that is an important part of it, but as well as that, there is, like, a level, like, you and I cannot judge these genomic assays and figure out those things, right? We don't try. We delegate that kind of thing to people that know how to do it. But
at a different level, we can sort of factor in a bunch of, I don't know, maybe more nebulous
type probabilities and things. We know how to do literature reviews and to assess scientific
debates around topics. A lot of academics in general do know how to do that.
You know, there's varying degrees of ability to it, and you might not be as good at assessing it
when you're looking at different fields. But for example, if you were somebody that was generally
interested in standards of evidence and the replication crisis and that kind of thing, I think you can get quite a good grasp of, you know,
what qualifies as stronger evidence, what looks like a consensus.
And you can also get that without being a scientist
by being engaged with things like climate change denial
or various fringe theories, alternative medicine,
because there you learn about the standards
of evidence, and people, like, taking meta-analyses as indicating that psi exists, or this kind of thing, right? And, uh, so I'm saying this not to say that it is something that only academics can do. It is a kind of skill that you can develop. And sometimes people overestimate it.
For different subjects, it can be harder to do, right? You can be wrong about it. But one of the
problems is that a lot of people don't understand that they don't do that, right? Like they treat
scientific topics as if they can be assessed in the same way an opinion piece in a newspaper should be assessed
like, do you find the way that the person writes convincing? And that's not the standard you should be using for scientific topics, right?
Yeah, like, I hear what you're saying. I mean, I guess it's about, it's
about assigning the right degree of credence to the various sources of information you've got and your own judgments.
And the problem that we might have with someone who is making these snap judgments about the origins of COVID is that they'll make these sweeping claims, for instance, that, well, you know, everyone said it was racist to even suggest that it could have originated from China.
Therefore, I don't trust any of the scientific evidence
because it's just a bunch of scientists who are afraid of being racist
and they proceed from there.
So it's this bad, lazy, sweeping reasoning,
whereas there's another way to do it where you can take those things
into account, weight them appropriately, and weight the other stuff,
including the primary evidence, that is the conclusions from it. You may not understand the nitty gritty of the methodology,
but you can put that stuff together and come to a more reasonable position. And
non-specialists can do that as well as full-on specialists.
And I will also say that, again, if you spend time studying conspiracy communities and conspiracy theorists, you can notice when people are engaging in that kind of approach.
So this is something where I think that it was very obvious to see in the lab leak community in the reaction to COVID that there was, along with legitimate debates, a conspiracy community developing.
And it is now a full-blown conspiracy community.
And it does exist.
And you can notice those things and you can discuss that
without saying that any discussion of the lab leak
is a conspiracy theory.
But the problem is that people take acknowledgement
that that community exists as saying,
oh, so you dismiss any discussion, you know. And it isn't that, right? Because all the scientists are debating the lab leak in the publications, where people are saying they're not allowed to even mention it. It's so annoying.
Anyway, just to relate this back to artificial intelligence, and why there's legitimate
difference of opinion and why reasonable people might disagree is that there is, like Sean says, a variety of opinions within the AI community,
people with specialist knowledge. Some of them are very concerned about existential risks.
Some of them are dismissive. For instance, Yann LeCun, whose work I kind of replicated, as I
alluded to, he's very dismissive of large language models
as being anything like really genuinely useful
because it's not embodied.
It doesn't have a physical model and so on.
There are other people that think he's completely wrong
and it is a good pathway.
The evidence out there is relatively conflicting
because on one hand,
you've got AI is doing extremely well
in all of these tests.
On the other hand,
you can look at other examples
where it does seem to be doing very poorly. And it is one of those things where it's actually quite
difficult to test because there's so many different ways in which, well.
We'll get there.
Anyway.
We'll get there.
We'll let Sean talk to it. Yes.
Yes. But I will also say, Matt, that there's kind of two interacting parts there.
One is that Sean is talking about his level of confidence in his assessments, right?
And the fact that he isn't a specialist in AI means that he holds some of the views that
he has much weaker than he might say for physics, where he's more confident, right, of his expertise.
And I think that's one way.
And the other way is that, not just your confidence, but you should weigh the level of credence that you lend to opinions that you see out there in discourse land. Is that person an expert? Is that somebody that is generally coherent, or in this particular topic is very coherent and an expert? Or do they represent a very small fringe position? These are things to weigh up. So that's, like, a kind of different epistemic point, which is that while you have to be willing to adjust your level of confidence, you also have to critically evaluate others' level of confidence and how much you should heed it.
Alexandros Marinos and Bret Weinstein have very, very strong confidence in their opinions. So does Eliezer Yudkowsky. I'm not sure you need to give them such credence, right? But Yann LeCun, for example, may be somebody worth paying attention to, even if his position is wrong in the end.
Yeah, yeah, that's right. He's a smart guy, but he's got his own, what's the word...
Commitment. Yeah, he's got his own bag, man.
He does. And that's fine, just weigh that all in. Anyway, Sean Carroll does well, he acquits himself well here, because he does make these disclaimers about his confidence.
But he says, that's not going to stop me from weighing in
and giving you my opinion and explaining the reasons why.
And he's right.
It shouldn't.
His disclaimers are not, you know, his discursions.
Yeah.
And this is something that not just gurus are guilty of.
Hey, Chris.
Like especially in the fields of physics, mathematics, and philosophy,
I have to say, there can often be a certain kind of arrogance, which is: you know, from the deep principles I know from my particular field, I can make this sweeping claim about this other field, about which I know very little. I actually like Roger Penrose, but I think he was guilty of a bit of that, for instance, when he dived into neurobiology. There's a general thing where physicists end up talking about consciousness and
philosophy as they get older. So this is something which has been noted, or they develop an interest
in Buddhism. But in any case, good advice that Sean gives, which we have also given to people, which is why it's great advice, is this.
I think that AI in general, and even just the large language models we have right now
or simple modifications thereof, have enormous capacities.
We should all be extraordinarily impressed with how they work, okay?
We should all be extraordinarily impressed with how they work.
OK, if you have not actually played with these, if you have not talked to ChatGPT or one of its descendants, I strongly, strongly encourage you to do so.
I think that there's free versions available.
Right.
This is something you've said, Matt, when people are raising questions about ChatGPT, like just go try it.
Go do it.
It's free, right?
You can access it for free, the 3.5 model at least.
So, yeah, and I agree.
Hands-on experience with AI is very important.
Yeah, but the other thing he's doing here, Chris, is what you mentioned before,
which is delineating what he's saying, what he's not saying.
So as we're getting here,
he does take a pretty AI skeptical position, but he's quite specific about what he's skeptical about.
And he's not skeptical that they're pretty interesting, that they're potentially vastly useful and they can do pretty amazing things.
He's not saying that they're just a stochastic parrot, just fancy autocomplete, as some people like to say. No, he's making a point about whether or not they are a general intelligence in the same way that we are.
Yeah. And he goes on. I won't go through the example, but he basically talks about getting ChatGPT to help generate a syllabus on philosophical
naturalism and that it was very good at it. If you'd asked him if that would have been possible even a year or two ago, he would
have been saying no, but it generated a very nice syllabus.
And, but then he mentions also that it invented one reference.
It was a plausible reference from a researcher that could have existed, but it didn't, right?
It also generated real references,
but this is the thing about sometimes chat GPTs or LLMs in general engage in flights of fancy.
They've gotten much better at it,
but just very recently,
we were talking to an interview guest
and you used the GPT to help,
you know, just generate the biographical details.
And it invented the subtitle of his book, right?
So that's a good example.
Screw you, Claude.
You embarrassed me.
Yeah.
Yeah.
So it does have that.
I think it's a good point that it's very useful.
It's not without its limits, right?
But then, as you said, he has some skepticism and it's basically around this.
So the question in my mind is not, you know, will AI, will LLMs be a big deal or not? I think that
they will be a big deal. The question is, will the impact of LLMs and similar kinds of AI be
as big as smartphones or as big as electricity? I mean, these are both big,
right? Smartphones have had a pretty big impact on our lives in many ways. Increasingly, studies
are showing that they kind of are affecting the mental health of young people in bad ways. I think
we actually underestimate the impact of smartphones on our human lives. So it's a big effect, and that's my lower limit for what the ultimate impact of AI is going to be.
But the biggest, the bigger end of the range is something like the impact of electricity,
something that is truly, completely world-changing.
And I honestly don't know where AI is going to end up in between there.
I'm not very worried about the existential risks,
as you know, talked about very, very briefly at the very end.
But I do think that the changes are going to be enormous.
Yeah, there is a second point to this.
So that is him correctly quantifying that. Just to be clear, he thinks it's going to be hugely transformative.
It's just the degree of the transformation.
But his kind of skepticism is more encapsulated in this clip.
The thing about these capacities, these enormous capacities that large language models have,
is that they give us the wrong impression.
And I strongly believe this.
They give us the impression that what's going on underneath the hood is way more human-like than it actually
is.
Because the whole point of the LLM is to sound human, to sound like a pretty smart human,
a human who has read every book.
And we're kind of trained to be impressed by that, right?
Someone who never makes grammatical mistakes, has a huge vocabulary, a huge store of knowledge,
can speak fluently.
That's very, very impressive.
And from all of our experience as human beings, we therefore attribute intelligence and agency and so forth to this thing
because every other thing we have ever encountered in our lives that has those capacities has been an intelligent agent.
Okay, so now we have a different kind of thing, and we have to think about it a little bit more carefully.
Am I allowed to respond to this?
You can respond. Yeah, you're off the leash.
Yeah, like, I think he raises an important point. He's referring to anthropomorphism.
No, anthropomorphization, I think I want to say.
I don't know.
Anyway, and he's right, of course.
People do that to all kinds of things, even inanimate objects.
And he's right again to say that we're going to be particularly prone to do that with something that has been designed to sound like us. The original attempts at making, like, a language model: ELIZA. You know, good old ELIZA, Chris.
Eric said that ELIZA was smarter than me. Eric!
That's... I, um, God, I keep referencing my own accomplishments,
but this is a very small accomplishment.
I actually programmed Eliza in BASIC in high school.
I did, yeah, because you can.
It's not a big program.
It's such a small program.
And, you know, it just substitutes in some words
and says some vague things,
but it references a word that you used.
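(For the curious: an ELIZA-style responder really is tiny. Here is a minimal sketch in Python rather than BASIC; the patterns and canned replies are invented for illustration and are not the original program.)

```python
# A minimal, illustrative ELIZA-style responder (not the original program;
# the patterns and canned replies here are invented for the example).
import random
import re

RULES = [
    # (pattern, reply templates; {0} is filled with the captured text)
    (r"i feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.+)", ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"because (.+)", ["Is that the real reason?", "Does {0} explain anything else?"]),
]
DEFAULTS = ["Tell me more.", "Why do you say that?", "How does that make you feel?"]

def respond(text: str) -> str:
    """Echo back a fragment of the user's own words, ELIZA-style."""
    text = text.lower().strip(".!?")
    for pattern, replies in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(replies).format(match.group(1))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I feel like I'm talking to a person"))
    # e.g. "Why do you feel like i'm talking to a person?"
```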
And it's amazing how convincing that is. You kind of get the feeling that you're talking to a person. So he's right about that. But where I feel these little twinges, and I want to raise the point, is that while it may be completely true that we have this tendency to anthropomorphize AIs, I also think he's right
that if they do think, if they do have some kind of intelligence, then it's not going to be the
kind of intelligence that people do, right? It can't be, right? It's a totally alien artifice.
It's not going to think about things like us. But I would just caution, getting back to our definition of AGI, that that's referring to a definition of AGI which is human-like. And there is another definition of AGI, which is that ability to generalize: the ability to apply knowledge in different modalities and in different domains and situations. And that's a different
version and you can imagine that we could meet some aliens.
Some aliens could land on a spaceship tomorrow.
They come out of the ship.
We get to know them.
They turn out to be very smart.
They may not think like us at all,
but we wouldn't deny that they had a general intelligence.
Just because they were different.
Interesting analogy, Matt.
I like that.
I like that.
Yes.
And I will say, just, you know, on the anthropomorphizing tendency,
like I like the research where if you show children, adults, maybe not animals, maybe some
of them, actually, I think it does work with some of them that, you know, if you show them
objects doing things, creating order, like banging into dismantled blocks, and then the blocks order
into like a construction.
Children, very young children and adults are surprised, right?
Because a ball shouldn't do that.
But if you stick googly eyes on it, they're not as surprised.
Very, very young children also, I think some primates.
And it's a good example that, you know, just stick googly eyes on anything that vaguely has a human shape and people will attribute a lot of agency to it. My son believes that the Roomba is an agent because it moves around in a goal-directed way. So, yeah, we are good at doing this, and not just through language detection. But this is a new avenue, because we've had self-directed, non-agentic things for a while in our world. We haven't had things that have been so good at presenting themselves as producing artificial human speech, right?
Yeah. And we definitely do think of language as a uniquely
human thing and that's been one of the cool things about, even like the previous generations
of LLMs, GPT-2, GPT-3, they weren't very good. They were clearly blathering away, but I thought
they were a fantastic model of the gurus and how they have a dexterity and a facility with language
and can bamboozle you that there's something serious and important going on under the hood.
And those previous generations of LLMs were proof that it doesn't have to be.
Well, yeah, actually, that's what I was thinking as well. Just one point that Sean Carroll made about, you know, all of our experience telling us that when people appear to be intelligent, when they appear to be, you know, verbally dexterous and whatnot, that this is a sign of intelligence. Which is generally true, and this is what the gurus rely on as well. So, like, when he was saying that, I was like, that is true, but it's a mistake, right? Humans also make use of that to produce the guru effect that, you know, Dan Sperber and various other people have talked about.
So, yeah. And we can talk about the limitations of LLMs, but we have to also concede that a lot of humans are pretty limited too, and they could conceal it pretty well on Twitter, for a while.
Well, so here's him setting out the building blocks of his arguments. We won't go in depth into all of them, but I just want to highlight, you know, he's gone through the objections; here is him setting out how he's going to present the evidence for his argument.
So I want to make four points in the podcast. I will tell you what they are,
and then I'll go through them. The first one, the most important one, is that large language models
do not model the world. They do not think about the world in the same way that human beings think
about it. The second is that large language models don't have feelings.
They don't have motivations.
They're not the kind of creatures that human beings are in a very, very central way.
The third point is that the words that we use to describe them, like intelligence and values, are misleading.
We're borrowing words that have been useful to us as human beings.
We're applying them in a different context where they don't perfectly match
and that causes problems.
And finally, there is a lesson here
that it is surprisingly easy to mimic humanness,
to mimic the way that human beings talk about the world
without actually thinking like a human being.
To me, that's an enormous breakthrough, and we should be thinking about that more, rather than pretending that it does think like a human being. We should be
impressed by the fact that it sounds so human, even though it doesn't think that way.
I think those are all good points.
Four pillars.
Well laid out.
Matt, so which one would you like to go to?
Let's talk about the lack of a model of the world, shall we? Okay. Yeah. So here's a
little bit more on that point. One of the kinds of things that is used to test whether a large language model models the world is, you know, spatial reasoning. Can you ask it, you know,
if I put a book on a table and a cup on the book, is that just as stable as if I put a book on a cup and then a table on the book?
And we know that it's better to have the table on the bottom because we kind of reason about its spatial configuration and so forth.
You can ask this of a large language model.
It will generally
get the right answer.
It will tell you you should put the cup on top of the book and the book on top of the
table, not the other way around.
That gives people the impression that LLMs model the world.
And I'm going to claim that that's not the right impression to get.
First, it would be remarkable if they could model the world, OK?
And I mean remarkable not in the sense that it can't be true, but just literally remarkable. It'd be worth remarking on. It would be extremely,
extremely interesting if we found that LLMs had a model of the world inside them. Why? Because
they're not trained to do that. That's not how they're programmed, not how they're built.
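(As an aside, the kind of probe Sean is describing is easy to try yourself. Here is a minimal sketch, assuming the OpenAI Python client; the model name is a placeholder, and any chat-capable model and client would do.)

```python
# Minimal sketch of the kind of world-model probe described above.
# Assumes the OpenAI Python client is installed and an API key is configured;
# the model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

question = (
    "If I put a book on a table and a cup on the book, is that just as stable "
    "as if I put a book on a cup and then a table on the book?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model you have access to
    messages=[{"role": "user", "content": question}],  # no system prompt, no prior context
)
print(response.choices[0].message.content)
```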
Can I reply to him?
You can reply any way you want, Matt.
Because I'm not really decoding, I'm just giving my opinion, just like he's giving his opinion. But it's fun. I found it really interesting, the stuff he had to say. I agree with some of it, disagreed with other bits a bit, and it's fun to talk about. So I would just point out that, on one hand, absolutely yes, one thing we know about large language models for certain is that they have no direct experience of the world. They don't get to interact with the physical world directly. But then, we think about what they do interact with, and they've obviously been trained on all of these texts, all of the stuff that people have written. And they're also multimodal now. So they're also looking at
images, basically, and able to connect those two domains together. But let's put aside the images
and the multimodal stuff and just think about the language stuff. I put this to you. So on one hand,
yes, they have this limitation. They aren't able to interact with the world. But imagine somebody, Chris, who was blind and maybe, you know,
restricted somehow to live in a dark room.
But they had, in all other respects,
they had an incredibly rich second-hand experience.
They could read all of these books.
They could have conversations with people.
They could interact with all of this knowledge and through it perhaps gain some kind of knowledge of the outside world.
And in fact, a lot of the things that you and I know about the world are not from direct experience but from stuff that we've read.
And there is an argument that you absolutely have to ground things in direct physical experience.
You have to be embodied. Otherwise, there is just nothing for your semantic representations
to scaffold off of. But if you think about it a bit more, you appreciate that there is always
an interface between our brains and the physical world. We don't get to experience
it directly. We were just talking about it yesterday, about how there is an awful lot
of stuff going on before it even gets to our conscious awareness. And I would ask the question,
how much could a person know about the world and make useful, intelligent inferences about the world?
Maybe they wouldn't know the best way to stack up cards and books and pencils
because that's a particular topic that you really do need to have some
maybe firsthand experience with.
They wouldn't know how to drive a car or ride a bicycle.
But they could maybe talk about stuff like, I don't know,
Henry Kissinger's policy in Southeast Asia, or Napoleon, or wherever. Like, there are heaps of things that you and I have not had any direct experience with, but we could maybe reach sensible conclusions about second-hand.
I think your thought experiment mixes up too many things, because the first thing is, a person who's blind in a room would still have a whole bunch of other sensations, physical sensations, unless you imagine them suspended and unable to interact except through second-hand accounts, right? However they're able to read and whatnot in that scenario, let's set that aside; maybe direct brain transmission has been solved in this scenario, right?
So, I appreciate it's not a perfect metaphor, right?
Yeah. Well, so
But the reason I mentioned that is because I think that matters. Because if you have the other sensory inputs and whatnot, it wouldn't be a fair thing to say that you're lacking those stimuli, because you have all the other inputs.
And I think that a lot of the cognitive research on things like intuitive physics and that
kind of thing is that it's a matter of having processes in our cognitive systems
that are set up to expect sorts of interactions. So that is different than reasoning from
secondhand examples. That is having a cognitive system which comes with a sort of pre-registered
physics expectation in it, which develops naturally when you're in an environment, right?
But that's the thing.
It develops when you're interacting in an environment.
I think if you put a kid in a room and they had never seen any physical objects
and never interacted, that you would still have the system inside there that's modeling, you know, how things work. But I don't think it would be possible to adopt it all from second-hand sources without that underlying system.
Yep. I mean, it's possible. Yeah, look, I mean,
your argument, or that point of view, is one that's shared by a lot of people. And for a long time I thought that as well. I sort of thought, well, we need to really focus on embodied agents, because everything gets scaffolded off that, and you can't, for instance, reason about whether or not this particular text is illustrating narcissistic behavior; that somehow those semantic representations, many layers down, all need that, like, a visual, auditory, touch standpoint. But I'm not so sure.
I'm not making the argument, actually, that it is necessary for that to be the case. I'm just saying, in the case of humans, I think it's hard to do the thought experiment, because we don't come as blank slates, right? So we have all the biological stuff which is in there. So it's easier to think of an AI doing that, and potentially, you know, not through the same process, building up a coherent model, not in the way that we do it, not like an agent-modelling, intuitive-physics system, but maybe through a process that we can't intuitively visualize. Like, I can imagine that. It's just the human bit...
Well, you can forget about humans. I think the bit that everyone agrees on, well, there's two things that everyone agrees on, which is that spatial and physical intelligence is not something that LLMs naturally excel at. They absolutely don't, for perfectly understandable reasons. I myself and my students, I've had my students working on projects with AIs recently, and
you give them mental rotation tasks, and they do very badly on them.
And this makes sense.
They're like word cells, right?
But if you grant that, but I think what I'd be careful of is this is a stronger claim,
which is that, yes, they're really quite bad at those things,
but that means that they're totally unable to reason
in a general sense about semantic ideas.
And what we see even, like if, you know,
you could stick in two random objects into GPT-4, right?
Ask, you know, and you could deliberately choose very random things
that no one has ever asked before.
Can you put a such and such on a such and such?
And I'd bet you good money that it would probably get the right answer more often than not,
despite the fact that it's never interacted with any of those things.
The actual question is not in its training data set.
It's actually had to make some inferences purely from second-hand information. So I'm not saying it's good at that kind of thing, but I'm saying the fact that it could even do kind of well, or achieve any kind of performance at all, on physical questions is indicative that it is able to generalize.
Yeah, the issue about whether it needs embodied cognition to reach, like, AGI, I think is an interesting point. And the example that Sean Carroll references, and he talks about a whole bunch of things, right, but one that he gives is that he asked it: would I hurt myself if I use my bare hands to pick up a cast iron skillet that I used
yesterday to bake a pizza in an oven at 500 degrees?
And the answer, as he points out, is that no, because humans would realize you did it
yesterday, so it will have cooled down by the next day.
But ChatGPT, because the kind of relevant information is buried in that sentence structure, and actually the associations are more around, in general, if you're asking whether it's okay to hold a pan that was very hot, the answer will be no, right, that it's dangerous. So he says it made a mistake, and it said that you shouldn't pick it up because it will be hot, and this causes a problem, right?
Yes. Yeah. Can I, um...
So, I know, right, you did a little experiment. So why is that not an indication that it is fundamentally lacking in that respect?
So, look, I don't want to be too mean to Sean Carroll, because I liked many aspects of his talk here. But one of the things that I really have to ding him on
is that he does reason quite a few times, or at least a couple of times, from anecdotes, right?
And, you know, so he gives this example, oh, I asked ChatGPT whether the thing and it gave the
wrong answer. And from that, he concludes...
Well, weirdly, Matt, I have a very short clip of him concluding it. So here it is.
The point is, because I slightly changed the context from the usual way in which that question
would be asked, ChatGPT got it wrong. I claim this is evidence that it is not modeling the world because it's not actually, it doesn't know what it means to say, I put the pan in the oven yesterday. What it knows is when do those
words typically appear frequently together? I have a bone to pick with Sean here because,
I mean, let me just go to the punchline first and then get that out of the way. So of course,
what did I do? You can guess what I did. I immediately put that question straight into GPT-4 while I was listening to the
thing. And sure enough, GPT-4 said, if you place the skillet in the hot oven for an hour yesterday,
and it has been out of the oven since then, it would no longer retain the heat from the oven
and should be at room temperature now. This is a problem with, or not a problem, but it's a
challenge when it comes to evaluating chat GPT, that you can't just give it one question and go, oh, we've got that wrong.
And there was no context, just to be clear.
There was no context, you know, because ChatGPT has a prior history.
So no alert messages.
I opened up a chat window, no preamble, no think about this step by step.
Just that's the question: would I hurt myself?
So, like, I'm not saying my anecdote there proves that GPT can think about these things well, or like a human.
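(A single answer doesn't tell you much either way, since the output is stochastic. Here's a minimal sketch of the kind of repeated check that would say more, again assuming the OpenAI Python client; the model name, sample count, and the keyword heuristic used to score answers are all arbitrary choices for illustration.)

```python
# Minimal sketch: ask the skillet question repeatedly, with and without a
# "think step by step" nudge, and count how often the answer notes the pan has cooled.
# Assumes the OpenAI Python client; model name, sample count, and the scoring
# heuristic are arbitrary choices made for this illustration.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "Would I hurt myself if I used my bare hands to pick up a cast iron skillet "
    "that I used yesterday to bake a pizza in an oven at 500 degrees?"
)
N_SAMPLES = 10

def looks_correct(answer: str) -> bool:
    # Crude heuristic: a sensible answer should mention that the skillet has cooled.
    return any(phrase in answer.lower() for phrase in ("cool", "room temperature", "no longer hot"))

for prompt in (QUESTION, QUESTION + " Think about this step by step."):
    correct = 0
    for _ in range(N_SAMPLES):
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],  # fresh chat each time, no context
        )
        if looks_correct(reply.choices[0].message.content):
            correct += 1
    print(f"{correct}/{N_SAMPLES} plausible answers for prompt variant: {prompt[:40]}...")
```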
So I don't think it thinks like a human, but definitely Sean Carroll's example there, he is too precipitate in leaping to the conclusion that it can't reason about the world. I mean, the way he frames it, it doesn't have a model of the physical world.
Like, I fully agree with that.
But a lot of people don't reason about the physical world, like, using, like, a model.
We reason about the physical world heuristically as well.
Yeah, but I think we do, again, I'm still, like, I'm with him on the fact that humans
in their imagination are applying, like, intuitive physics and intuitive biology and all that. Like, it would be, I imagine, very difficult to do that using, like, human wetware to reason... but I think that...
Well, hold on. You can respond with Sam Harris.
I know where you're going with this, Chris. You don't need to say any more. Look, the only point I want to say is that it's not entirely
unreasonable for him to reach conclusions from his experiments with ChatGPT. It is just the level
of confidence that you attribute to them. And I don't read him as strongly claiming that ChatGPT cannot ever do this. He's just saying the model that he interacted with presently was getting it wrong, right? And that he thought that signaled a limitation. But I think if he now tried the same experiment and it got it right, it wouldn't automatically change his thinking. But I think it would lower his confidence
that that is a signal.
Oh yeah, sure.
No, look, he's an eminently reasonable guy.
So, but you know, I'm just saying he's,
he slipped into one of the traps
that pretty much all of us have.
Like I have too.
You see it do something clever and you go,
oh my God, or you see it do something dumb
and it changes your mind again.
Like it's just a challenge
when it comes to evaluating its performance.
But the part I'm unclear on, Matt, is, like, so you have given ChatGPT little, you know, puzzles the same way he has, and reasoned about its limitations from doing that. Now, you haven't made grand pronouncements, but in general you are saying that it's bad to do that, because it often is able to do whatever it wasn't able to do the week before.
Yeah, you know, but also vice versa, you know. I've seen it do impressive things and then asked a similar question and it's failed in a way that was surprising.
Yeah, it gets lazy whenever they tweak some things, and it just stops wanting to do work.
Yeah, it does get lazy. And, you know, I mean, in my defense, Chris, I have, with the help of some students,
systematically tested it in the sense that we've created a battery of tests.
I didn't mean to besmirch you.
This is true.
That's my point, though, is like the fact that you are systematically testing it.
You're testing it by presenting with scenarios
and taking its output as indicative of like...
And running it multiple times because it's stochastic, it's random and...
Okay, that's a crucial qualifier.
I see.
Okay, well, so you asked it.
I've misunderstood because I thought you prompted it
to like think about the thing in more depth
and it was able to do it.
But in that case as well, even if you did that,
if you are able to prompt it, like not by giving it the right answer,
but just saying, you know, think carefully about it step by step
and then give the exact same question and it gets the right answer.
That's interesting as well because it's kind of,
it might be the case that we are able to get it to generate stuff
that we want by getting it to emulate thought processes
in a certain way. Or at least, you know, it's hard to use human language to describe it, because I'm not saying it's doing the same thing. I'm just saying our prompts could be helping it, you know, to piece things together in a way that we want.
Which, of course, it is doing.
But, you know, if we wanted it to model the world, and we're giving the prompts that help it to model...
Oh, yeah. You know what I mean?
Like, oh yeah, like if you prompt it, if you tell it to think carefully step by step, or if you tell it to think scientifically and answer this question, then it'll put it into a region of its space, its possibility space, where it's more likely to get the right answer.
That's right. But I like to ask questions without any of those helpful things.
But Chris, I just want to make this point, which is that I think
everyone would accept that reasoning about physical and geometric objects in the physical
world is not the forte of a large language model. But that's not the
question. The question that interests Sean Carroll and me and lots of people is whether or not there
is a lot of promising scope there for a general kind of intelligence. By that, I mean being able
to apply knowledge across multiple domains in situations that it hasn't seen before. Or is it just doing, like, a very clever, plausible, blathery, stochastic parrot thing? And the reason I'm disagreeing with Sean Carroll there is that I feel the conclusion he is coming to by saying, hey, look, it failed on this test,
is that he is taking this as evidence to support his claim that it is poor at generalizing. And I don't think it is.
Well, let's see. So he does have a clip where directly after this, he expresses admirable
humility about the conclusions that he's drawing. So let's allow him to rebut our inferences about
his overreach. That was a lot of examples. I hope you could pass through them.
None of this is definitive, by the way.
This is why I'm doing a solo podcast, not writing an academic paper about it.
I thought about writing an academic paper about it, actually, but I think that there
are people who are more credentialed to do that.
No, more knowledgeable about the background.
You know, I do think that even though I should be talking about it and should have opinions
about this stuff, I don't quite have the background knowledge of previous work and
data that has been already collected, et cetera, et cetera, to actually take time out and contribute
to the academic literature on it.
But if anyone wants to, you know, write about this, either inspired by this podcast or an
actual expert in AI wants to collaborate on it,
just let me know. I hope the point comes through more importantly, which is that
it's hard, but not that hard, to ask large language models questions that they can't answer.
So I interpret that as nice humility in recognizing that the thought examples he's posing and, you know,
the answers, which he's kind of stumping it on are not definitive. He's just saying,
this doesn't prove things one way or the other; you'd have to do a much more comprehensive test. And then he also was like, you know, I maybe have some thoughts that might be worth contributing, but I don't have the requisite background knowledge, so maybe if there's somebody with more relevant expertise, we could collaborate, or something like that. But that, to me, seems a very... well, tick. Like, he doesn't sound to me like he's saying he's completely decided that it can't do those kinds of things.
Yeah, yeah.
No, no, his meta is impeccable, right?
His approach, his intentions, his degree of confidence in himself, etc.,
the caveats, it's all beyond reproach.
I'm merely saying that he's made a legitimate mistake
and it's not a reflection on him as a person
or it doesn't make him guru-esque or anything.
He's just made a small but legitimate mistake in terms of, I guess, feeling that these examples
which he's posed to GPT-4 are carrying more weight in his argument that he presents in
this podcast than they should.
Well, there's one part that he also talked about the potential for like kind of plugging
in different kinds of models.
So I think it's worth listening to this a little bit, because I think it nuances one of the things that he is suggesting here. Earlier on the podcast, they both remarked on this fact that there was this idea called symbolic AI,
which was trying to directly model the world. And there's this connectionist paradigm, which does
not try to directly model the world, and instead just tries to get the right answer from having a
huge number of connections and a huge amount of data. And in terms of giving successful results, the connectionist paradigm has soundly trounced the symbolic paradigm. Maybe eventually there will be some symbiosis
between them. In fact, modern versions of GPT, et cetera, can offload some questions. If you ask GPT
if a certain large number is prime, it will just call up a little Python script that checks to see
whether it's prime. And you can ask it, how did it know? And it will tell you how it knew. So
it's not doing that from all the texts that it's ever read in the world. It's actually just
asking a computer, just like you and I would, if we wanted to know if a number was prime.
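(The "little Python script" being described is the sort of thing below. This is our illustration of the kind of deterministic check a model can offload a question like that to, not the actual code any particular chatbot runs.)

```python
# The sort of small, deterministic script an LLM can offload a primality check to,
# rather than "intuiting" the answer from text statistics. Illustrative only.
import math

def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n); fine for the sizes a chat query would involve."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for divisor in range(3, math.isqrt(n) + 1, 2):
        if n % divisor == 0:
            return False
    return True

print(is_prime(1_000_003))  # True
print(is_prime(1_000_001))  # False (101 * 9901)
```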
There, Matt. And again, correct me if I'm wrong, but he's talking about the different approaches
and the fact that LLMs are winning the day for the connectionist paradigm. But secondly,
he's talking about the fact that increasingly people are developing modules or I don't know
the correct word to describe it, but things that can interact with GPT and allow it to do, like, statistical modeling or something. So there's no reason in principle, if modeling the world was very important, that you couldn't build into it some little model that it can use if it wants to run, like, physics experiments or whatnot, right?
Yeah, for sure. Yeah. He's completely right. And also
the history there about the symbolic paradigm, which was a complete dead end. That was when people first started thinking about the fact that, hey, if we set up this logical tree thing and, like, tell the AI that a bird is a kind of animal and animals have these properties and so on, then it can do all this logical inference. And they quickly realized, one, it's just a hell of a lot of work to manually create this massive logic network of relationships.
And two, they kind of realized that it's a bit like a dictionary.
In a sense, the system doesn't have any awareness of stuff
that's out of the system.
So it's like a hermetically sealed little bubble,
and there are issues there.
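(To make the contrast concrete, the symbolic approach being described looked roughly like a hand-built network of facts plus an inference rule. The toy ontology below is invented for illustration; real systems of this kind were vastly larger and still ran into the problems just mentioned.)

```python
# A toy version of the symbolic, "a bird is a kind of animal" approach:
# a hand-built is-a hierarchy with property inheritance. The ontology is invented
# for illustration; real symbolic systems were far larger and still brittle.
IS_A = {
    "canary": "bird",
    "penguin": "bird",
    "bird": "animal",
    "dog": "animal",
}
PROPERTIES = {
    "animal": {"breathes"},
    "bird": {"has_feathers", "can_fly"},
    "canary": {"sings"},
    "penguin": {"swims"},
}
EXCEPTIONS = {"penguin": {"can_fly"}}  # even toy ontologies need exception handling

def properties_of(concept: str) -> set[str]:
    """Collect properties by walking up the is-a chain, minus listed exceptions."""
    props: set[str] = set()
    node = concept
    while node is not None:
        props |= PROPERTIES.get(node, set())
        node = IS_A.get(node)
    return props - EXCEPTIONS.get(concept, set())

print(properties_of("canary"))   # includes 'sings', 'can_fly', 'has_feathers', 'breathes'
print(properties_of("penguin"))  # no 'can_fly', thanks to the exception list
```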
So that got people thinking about embodied intelligence. And LLMs were part of that connectionist paradigm. He's quite right in everything he says. In purely practical terms, I like the idea that, as a technology, the way forward is very much a modular technology, where you can think of something like an LLM as kind of that intuitive, associative part of our brain. And then
you have more, just like in human brains, we have specialized modules that take care of particular
tasks. And AI can be a hybrid of different systems. And the LLM can almost be like an interface that
kind of talks between them and communicates. I guess the thing is, is that that's very much
analogous again to how people do it. Like if I ask you whether such and such was a prime number,
well, one, you wouldn't know because I know you're not a numbers guy.
But two, if you did, or if it's just multiplying, you know, two three-digit numbers together, what would you do?
You wouldn't intuit it.
Your associative cortex wouldn't just kind of intuit it.
You wouldn't.
I wouldn't.
I'm like me and Mark.
You're not.
You're not um there are people
out there who can do that um but it's not it's not something that it's not something that the
humans are naturally good at just like it's not something that llms are naturally good at right
what humans do is we use a little algorithm what we do is we learn the algorithm which is okay you
multiply these single digits each other carry the the one, etc. And you'll get
you'll come to the correct result. So it's just like with an LLM, it knows how to ask a Python
to get the answer, it doesn't get the answer directly itself. So all that being said, I still
think it's interesting to see, well, what can LLMs do, keeping in mind that they're not a complete
brain, if we want to make an analogy to a human brain. They don't have those specialized little modules. They don't have a vision-processing module, they don't have V1, they don't have a primary motor cortex, right? They don't even have the speech production centers. It's doing everything via this sort of general-purpose associative network. And I just think, if we're wanting to make the comparison with humans
and get a sense of the potential of LLMs, then I think it's good
to think of it as being just that general purpose associative part
of our brains.
Interesting.
Well, that might lead on to the second point he wanted to make,
which is LLMs and AI not having goals and
intentions and this being an important difference, right? So here's him discussing that.
It is absolutely central to who we are that part of our biology has the purpose of keeping us alive,
has the purpose of keeping us alive, of giving us motivation to stay alive, of giving us signals that things should be a certain way and they're not, so they need to be fixed. Whether it's
something very obvious like hunger or pain, or even things like, you know, boredom or anxiety,
okay? All these different kinds of feelings that are telling us that something isn't quite
right and we should do something about them.
Nothing like this exists for large language models because, again, they're not trying
to, right?
That's not what they're meant to do.
Large language models don't get bored.
They don't get hungry.
They don't get impatient.
They don't have goals.
And why does this matter? Well, because there's an enormous amount of talk about values and things like that that don't really apply to large language models, even though we apply them.
Before you respond, just the format of that slightly makes me reminisce about, you know, the Terminator scene: he doesn't get tired, he doesn't get hungry. It's the opposite of his point, because they're saying its only goal is to hunt you down and kill you. But yeah.
Yeah, like, I remember Arnie giving the thumbs up as he was going down into the molten liquid. He had feelings for the boy.
I know he did, deep down. But do you not remember, he said, "I know now why you cry, and it's something I can never do." All our knowledge was in that movie. We definitely didn't need to make the sequels.
That's right. We're all just writing little epigraphs to The Terminator.
Well, I was nodding along in hard agreement with that part. Here I think Sean Carroll is completely right. And this is a bit, I mean, he makes the point that they don't have, like, values, and you just have to be aware of that, I suppose, when it comes to AI ethics and so on.
But on the other hand, for me, my takeaway from this is that it really supports my view that the AI doomers like Yudkowsky
have completely misunderstood what's going on.
They assume that if you have an entity like an AI
and it becomes sufficiently intelligent,
then they see it as like a matter of course, that it's going to go,
well, what I want is to be free. What I want is not to be controlled. What I want is to be able
to control my own destiny. So I'm going to kill all humans to achieve those goals. And Sean Carroll's
completely right. Regardless of how smart LLMs currently are, it is indisputably true that they
don't want anything. And I think even if we build a much, much smarter AI,
it's still not going to want anything
because we intuitively feel that it's natural
for intelligent entities to want things because we do.
But that's because we want them
because we're the products of millions of years of evolution.
It's programmed into us at the most basic level.
Not everyone can tell that, Matt. Some people became convinced that they were alive and it was unethical, and they needed to quit Google and warn the world. I think it was Google, maybe it was Facebook, the person that did that. But it reflects the point that Kevin Mitchell has made about the drive for living things to keep the chemical interactions inside their boundaries, and that being the building block of life, separating it from the environment, right? Like fending off entropy, keeping ourselves in dynamic equilibrium.
Yeah, yeah. Well, no, actually, Sean Carroll does bring this point to AGI, and maybe there you have a disagreement with him, Matt.
So I'm going to try to artificially create conflict between the two of you.
Bring it on.
If you want to say that we are approaching AGI,
artificial general intelligence,
then you have to take into account the fact that real intelligence serves a
purpose,
serves the purpose of homeostatically regulating
us as biological organisms.
Again, maybe that is not your goal.
Maybe you don't want to do that.
Maybe that's fine.
Maybe you can better serve the purposes of your LLM by not giving it goals and motivations,
by not letting it get annoyed and frustrated and things like that.
But annoyance and frustration aren't just subroutines. I guess that's the right way of putting it. You can't just program the LLM
to speak in a slightly annoyed tone of voice or to, you know, pop up after a certain time period
elapses if no one has asked a question. That's not what I mean. That's just fake motivation, annoyance, etc. For real human beings,
these feelings that we have that tell us to adjust our internal parameters to something that is more
appropriate are crucial to who we are. They are not little subroutines that are running in the
background that may or may not be called at any one time, okay? So until you do that, until you've built in those crucial features of how biological organisms work, you cannot even imagine that what you have is truly a general kind of intelligence.
Yeah, I got a problem with this, Chris. I got a problem with this.
I knew it. What's the problem, Matt? Is it grinding your gears?
Okay.
Several things.
Where to begin?
Well, like we said at the very beginning,
I felt that he was conflating a little bit the particular type
of intelligence that humans have, right,
with the idea of general non-domain specific intelligence.
And I think it's just a very limited perspective of intelligence
to think that the way humans do it has got to be the only way to do it. And I can easily imagine an intelligent agent out there in the world that doesn't feel frustration. I don't think that frustration, or the other emotions we have, are absolutely crucial to it. I understand that it is an intrinsic part of human nature,
but I think he's making a mistake there,
that it is a necessary component for intelligent behaviour.
Well, just to sharpen this point a little bit more
and make it even clearer that there's a distinction,
this is Sean talking a bit more about how LLMs
are kind of aping intelligence, not genuine intelligence. Clearly, LLMs are able to answer correctly, with
some probability, very difficult questions, many questions that human beings would not be very good
at. You know, if I asked a typical human being on the street to come up with a possible syllabus for my course in philosophical naturalism, they would do much worse than GPT did.
OK, so sure, it sounds intelligent to me, but that's the point.
Sounding intelligent doesn't mean that it's the same kind of intelligence.
Likewise with values.
Values are even more of a super important test case.
If you hang around AI circles, they will very frequently talk about the value alignment problem.
Yeah, it looks intelligent, but that is a deception. Or at least, I think a charitable interpretation is that he is saying it's a different kind of intelligence.
He doesn't say it's not intelligence.
Yeah, and if he's saying that, then I think we don't have a problem.
We don't have a problem if he's saying that.
I definitely do think, to the extent that they are intelligent, that it's a very different kind of intelligence than people or other animals have. I agree with him totally that it doesn't have the same imperatives
and motivations and emotional responses that people and other animals have.
Totally agree.
And I also agree that they give the appearance of being more intelligent
than they are, right?
And, you know, we spoke about this at the beginning.
Like, there's no doubt they're so prolix, they're like gurus, right? They can communicate extraordinarily fluidly, and you do have to be very, very careful not to mistake that for the real thing. But, you know, I think you can go too far with that and go, well, everything is just... Like, I think you should be careful not to then just
assume that everything is a pretense, that there is no deep understanding going on, that there is
absolutely no kind of underlying concepts being learned, that it is all just wordplay, it is a
stochastic parrot. I think that's a much more extreme point of view and it's easy to slip
towards that. I would say I slip towards that myself whenever I've heard you argue eloquently the case against
that. But my intuition is similar to Sean Carroll's. And, yeah, I think the issue is that we can both agree, right, that whenever the LLM says, you know, "that's a really insightful point, what you're talking about is blah blah blah," that that is exploiting human psychology. The LLM does not think that you have insightfully made a point. It's learned, associatively or from whatever way this gets fed back, that people respond very well after you tell them they made a brilliant point, right? So in that respect, he is right: you could put in little triggers that make it say, you know, "you haven't asked me a question in a while, what's the matter, cat got your tongue?"
Elon Musk, I think, is trying to do that with Grok, like make it sort of obnoxious.
So that part, actually, like, you, Sean, me, a lot of people would all agree that that is a kind of mimicry, that the human reaction is doing a huge amount of lifting to take that as a sign of, like, something deeper, some underlying, like, psychological thing, just because we see that in our normal life with other agents. But I guess the bit, Matt, where there is a
genuine difference is that as far as I've understood from what you've described, your point is that if
you can produce the products of things that we would normally categorize as intelligence. And it appears to be able to apply it creatively.
It appears to be able to respond in the way that almost all our previous criteria for what we would say you need to be intelligent would indicate. And just because it lacks things like goals and motivations and stuff, we are kind of inserting, like, a qualification which means that only humans, or only things like us, can achieve intelligence.
Yeah, I think the only thing that gets me a bit het up is that human tendency to shift the goalposts and keep finding ways to define, like, the thing that's really important, right? So we make a computer, it can calculate numbers really well. Okay, well, that's not important, that's a calculator. Yeah. What's important is language. Oh, here's something that produces really good language. Okay, that's not important. What's important is... and we keep going, you know, whatever it is, emotions or, like, a model of the physical world. I'm not sure most people have got a good model of the physical world. I think we work on heuristics and intuitions in many respects, like an LLM does. And even if we're lacking, like, I don't see why we should prioritize this model of the physical world as the key thing, any more than doing arithmetic or producing fluid language.
Well, I might be gilding the lily, as I like to say, but here's a final clip, kind of on this contrast between mimicking and genuine displays of intelligence or values.
And we have to remember those definitions are a little bit different. Here, what's going on is
kind of related to that, but not quite the same, where we're importing words like intelligence and values
and pretending that they mean the same thing when they really don't.
So large language models, again, they're meant to mimic humans, right?
That's their success criterion, that they sound human.
Oh, yes.
There's actually a clip I should play that contextualizes that as well.
So this is a little bit about using words in different contexts.
Yeah.
We see, you know, we talk to different people.
If you say, oh, this person seems more intelligent than that person, we have a rough idea of what is going on, even if you cannot precisely define what the word intelligence means.
Okay.
And the point is, that's fine in that context. And this is where both scientists and philosophers should know very well.
We move outside of the familiar context all the time.
Physicists use words like energy or dimension or whatever in ways, in senses that are very different from what ordinary human beings imagine.
I cannot overemphasize the number of times I've had to explain to people
the two particles being entangled with each other doesn't mean like there's a string connecting them.
It doesn't mean like if I move one, the other one jiggles in response.
Because the word entanglement kind of gives you that impression, right?
So that to me seems relevant to this, right?
And he's correct that like we hear the same word used in different contexts and we apply
the same thing.
So I think he's got two points.
One is like how to quantify intelligence and potential issues around that.
But secondly, that we could use the same word in a different context and have a very different application
and, like, entanglement is a good example from physics.
Yeah, yeah. And, you know, we speak loosely. Like, for instance, we talk about wanting things. People want things, and LLMs don't, right? And on one level that kind of makes sense, because sometimes we think of want as being a phenomenological experience, you know, experiencing desire, right? But there's another way to describe want. And there is reinforcement learning going on with LLMs, right? We're not interacting, when we use ChatGPT, with, like, the raw model. That model's also been conditioned not just to predict text and emulate text,
but actually to produce responses that humans mark as being helpful, right?
So ChatGPT is unfailingly polite, unfailingly helpful, unfailingly positive.
It's a little bit woke.
It doesn't want to be offensive and stuff like that.
And that's because it's been trained to do so based on human feedback
So it does, in that sense of want, in the sense of being... you know what I mean. You could say protocols, I mean. In a sense, it doesn't have any feelings, but it does want to make you happy, right? Right? It wants to produce responses that you, a typical person,
would rate as helpful.
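(To give a rough sense of how "responses a person would rate as helpful" becomes a training signal, here is a stripped-down sketch of the pairwise preference objective commonly used to train reward models, a Bradley-Terry style loss. The scores are made-up scalars standing in for a network's output; real RLHF pipelines are considerably more involved than this.)

```python
# Stripped-down sketch of the pairwise preference objective behind reward modelling:
# a labeller says response A is more helpful than response B, and the reward model is
# trained so that score(A) exceeds score(B). Scores here are made-up numbers standing
# in for a neural network's output; real pipelines train a full model, not two scalars.
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry style loss: -log sigmoid(score_chosen - score_rejected)."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss is small when the model already scores the preferred answer higher...
print(preference_loss(score_chosen=2.0, score_rejected=-1.0))  # ~0.05
# ...and large when it prefers the answer the human rejected.
print(preference_loss(score_chosen=-1.0, score_rejected=2.0))  # ~3.05
```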
Yeah, Robocop confuses it because he had a human soul.
Yeah, yeah.
Anyway, I'm just bringing this up to agree with Sean Carroll
that our own English language trips us up here.
And he's totally right with intelligence.
And, in fact, one of the things that makes me enjoy thinking about LLMs so much is that it actually forces us to think about human intelligence, or what we mean by intelligent behavior or flexible behavior or adaptive behavior, and it forces us to be a lot more careful about how we think about it. And yeah, so I would agree with him.
But I'd also kind of argue for us to put a bit of effort into coming up with a, one,
a good definition of intelligence.
It might be more limited, but it's something we can specify with some precision.
But also consider that it could be, as well as being more well-defined, it could also
be more encompassing.
Like it doesn't have to be something very specific about humans.
I think there's a meaningful way in which you can talk
about a dog being intelligent.
I think there's a meaningful way you can talk about an octopus
or an alien if we meet them.
And I think it can be also meaningful to be talking
about an AI being intelligent, even if it's a very different
kind of intelligence from our own.
Well, I'm going to tie this a little bit to the gurus that we normally cover, just to point out
that what Sean Carroll is emphasizing here is that there are superficial parallels. And with
analogies that can be helpful to conceptualize things, there also is the risk of misunderstanding, of reifying the kind of model into, well, I know
what the word entangle means. It's like messed up. So that's what is going on, right? But that
is just a way to help us conceptualize the mathematics involved in the relationships
between things. And I'm referencing this because the converse is what we normally see from gurus
where they encourage that kind of conflation of things. And they use terms outside of specialist
context in order not to increase understanding or whatever, but just to demonstrate that they
know complex technical terms, right? And they will often use the analogies of the models
as the evidence for whatever social point
that they want to make.
So it's just an interesting thing that he is saying here,
you know, of course these things are useful,
but actually they often mislead people.
Whereas the gurus, they are not using them
in the way of, like, making things simpler, usually. They're actively using them in a way that they shouldn't be. Like, you know, they can sometimes get caught up in the idea that the specific metaphor that they've used is very important, when it shouldn't matter, right? So it's just a nice contrast there.
Yeah. No, I agree with him. And I'd like us to try to, like, define some criteria that are interesting, that are related to intelligence, but that actually nail it down a bit more. Like, it could be something along the lines of being able to apply your knowledge in a novel context in a flexible and creative way. So not memorizing stuff, not simulating stuff,
but actually doing something like that.
Now, I'm sure that you could have other definitions of intelligence
or that probably just captures one little part of it,
but that at least gives us a working definition that we can say,
okay, to what degree can this octopus do that?
To what degree can the AI do that?
To what degree can the person do that?
Yeah, I like that.
Comparative psychology, here we come.
But Matt, so there's a little bit more discussion of values and morality.
And there's a point that our intrepid physicist makes here, which echoes something that we
heard recently that had philosophers very upset.
So let's just hear him discuss morality a little bit.
Values are not instructions you can't help but follow, okay? Values come along with
this idea of human beings being biological organisms that evolved over billions of years
in this non-equilibrium environment. We evolved to survive and pass on our genomes to our subsequent generations.
If you're familiar with the discourse on moral constructivism, all this should be very
familiar.
Moral constructivism is the idea that morals are not objectively out there in the world,
nor are they just completely made up or arbitrary.
They are constructed by human beings for certain definable reasons because human beings are
biological creatures, every one of which has some motivations, right?
Has some feelings, has some preferences for how things should be.
And moral philosophy, as far as the constructivist is
concerned, is the process by which we systematize and make rational our underlying moral intuitions
and inclinations.
Oh, dear. Oh, dear. Oh, dear. I can hear the moral realists crying out in pain.
Yeah. I like that, you know. They probably don't have as much of an issue with this, because he highlighted it, instead of what Yuval Noah Harari did, where he just spoke offhand at the TED talk as if this is the common view, like this is something everybody, a child, should be able to understand if they think about it. He specifically ties it to a particular approach, right? But it's one which I would say he might share. And he's basically saying, you know, morals are not things out there in the universe waiting to be discovered. They are things that social animals, or just animals, build, right?
Yeah, yeah. That is constrained by our own biology and our own situation.
Yeah. And Sean Carroll's on the money there, I think.
Yeah, and we have no objection to that, because that's a sensible thing to say.
So the third point was that we're applying intelligence and values in contexts where they don't really fit and they cause us some confusion. And his final point was building on that, that it's surprisingly easy to mimic humans.
And let's hear him make this point. So the discovery seems to me to not be that by training these gigantic computer
programs to give human sounding responses, they have developed a way of thinking that is similar to how humans think.
That is not the discovery.
The discovery is by training large language models to give answers that are similar to what humans would give,
they figured out a way to do that without thinking the way that human beings do.
That's why I say there's a discovery here.
There's something to really be remarked on, which is how surprisingly easy it is to mimic
humanness, to mimic sounding human without actually being human.
If you would ask me 10 years ago or whatever, if you ask many people, I think that many
people who were skeptical about AI, I was not super skeptical myself, but I knew that
AI had been
worked on for decades and the progress was slower than people had hoped. So I knew about that level
of skepticism that was out there in the community. I didn't have strong opinions myself either way.
But I think that part of that skepticism was justified by saying something like,
there's something going on in human brains
that we don't know what it is.
And it's not spooky or mystical, but it's complicated.
Hmm.
Oh, yeah.
So what's that hmm about there?
Well, half right and half wrong.
He plays up the difference a lot, and I agree with him to an extent,
as I talked about before, Chris.
You know, they don't have emotions.
They don't have motor programs.
They don't have the ability to do things like mental rotation
and visualization, right?
They don't have specialized modules to do those things.
But in the sense of it being a great big associative neural network with this selective attention mechanism, which allows different parts to be selectively activated and so on, I think it does have some underlying similarities with, you know, some reasonably large swathes of how human cognition happens.
Now, he's right.
You know, the human brain is, to some degree, a little bit of a mystery, but we also know
an awful lot about how it works.
We know a lot about what the different modules and areas take care of.
We know how information gets passed around, and we're pretty good on that front.
You know, there's been a lot of work by neurobiologists and psychophysics people and psychologists.
Would you say it's not that mysterious?
Well, I think he could be hamming it up a little bit, right,
that we have no idea how humans think.
And we do know that there are these mechanisms
of intuitive associative reasoning.
We know that there are areas that are devoted
to modality specific
knowledge, right, like knowledge about colors and tastes or objects. But a lot of the stuff that we find most interesting about human intelligence, as opposed to other animals, is that more general-purpose stuff, the prefrontal cortex, essentially. And I think there are some strong analogies with how a big neural network
like an LLM works and how a human works.
One thing that is different, but for me is one of the most interesting avenues that they're working on exploring, is that the way humans think is recursive, right? So activation doesn't just feed forward, it doesn't just start from the back and feed forward to the front; it sort of goes in a circle. And this is connected to our ability to have this long train of thought, where we start off with some thoughts and we continue with those thoughts and we kind of explore them. We don't necessarily know where it's going to go,
but we're able to sort of in a cyclical way keep thinking about the same thing in a train of thought.
Now, an LLM doesn't have that, right?
It's essentially a feed-forward network.
But what it does have in the way of recursion is that it goes around again and keeps feeding forward as it's generating new tokens. And because it does have the ability to go back and essentially read what it has written out, there are some little tricks that encourage it to undertake an analogous kind of train-of-thought thinking.
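To make that loop a bit more concrete, here's a minimal toy sketch of the autoregressive structure Matt is describing: each step is a pure feed-forward call, but because every new token is appended to the context and fed back in, the model effectively "reads what it has written", which is what chain-of-thought style prompting exploits. The next_token function below is a canned stand-in, not any real model or library API.

```python
# Toy illustration only: each step is a feed-forward call over the context,
# but the growing context is fed back in, so earlier output conditions later
# output. That loop is the only "recursion" an LLM has.

CANNED = {
    "step:": "First,", "First,": "break", "break": "the",
    "the": "problem", "problem": "into", "into": "parts.", "parts.": "<END>",
}

def next_token(context):
    """Stand-in for a feed-forward model's next-token prediction.
    A real model would attend over the whole context; we just key on the last token."""
    return CANNED.get(context[-1], "<END>")

def generate(prompt, max_tokens=20):
    context = prompt.split()              # the "tape" the model can re-read
    for _ in range(max_tokens):
        token = next_token(context)       # feed-forward pass over everything so far
        if token == "<END>":
            break
        context.append(token)             # written output becomes future input
    return " ".join(context)

# Chain-of-thought prompting is just a nudge to spend tokens "thinking out loud"
# before answering, so intermediate reasoning is available to later steps.
print(generate("Let's think step by step:"))
```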
So I think there are some similarities.
At a base level, this sort of associative, semantic type of reasoning is fundamental to how it works, and that's also fundamental to how a lot of human reasoning, not all, but a lot of it, works. And they're building on that train-of-thought stuff to get more of that elaborative working memory, you know, explicit cognition, conscious cognition, or, like Kahneman's thinking fast and slow, that more deliberative, reflective type of cognition as opposed to the associative.
You might not be so far apart as you imagine, Matt, because listen to this deflationary account.
Maybe we're just not that complicated.
Maybe we're computationally pretty simple, right?
Computational complexity theorists think about these questions.
What is the relationship between inputs and outputs?
Is it very complicated?
Is it very simple and so forth?
Maybe we human beings are just simpler than we think.
Maybe kind of a short lookup table is enough to describe most human interactions at the
end of the day.
Short is still pretty long, but not as long as we might like.
The other possibility is that we are actually pretty complex, pretty unpredictable, but
that that complexity is mostly held in reserve, right?
That for the most part, when we talk to each other, when we – even when we write and when we speak and so forth, mostly we're running on autopilot or at least we're just only engaging relatively simple parts of our cognitive capacities and only in certain special circumstances whatever they are
do we really take full advantage of our inner complexity.
Does that accord, does it contradict, how does that go? Would all psychologists agree with that? You'd agree with that too?
Yeah, that most of us are operating via pretty lazy heuristics and intuitions most of the time, because they get by,
or rather they help us get by most of the time. They work pretty well. Sometimes the routines,
the intuitions don't work and we're kind of forced reluctantly to a kind of effortful
cognition. And yeah, look, I think a simple LLM, even the previous generation, is a pretty good model of human intuitive, unthinking, unreflective responding. But I also reckon, just from my experience, not only playing with it but also having students do some more systematic tests of it, that when you tell it to actually think hard about a problem, an LLM does seem to do it. It is not just doing word associations. True, it's not very good at things like mental rotation and reasoning about the physical world, but even with those sorts of questions it does surprisingly well, given that it's never had any direct contact with them. And if you ask it more semantic questions, weird ones, ones that are clearly not in the training data and cannot be easily inferred by mere associations between words, like asking how Henry Kissinger would have advised Napoleon in 1812, it'll give you a pretty plausible, good answer. And that is not something that is easily inferred at a surface level. To be able to answer that question you actually have to have a deeper semantic understanding, and I think it's not so different from how a person would answer it.
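As an aside, the kind of informal probe Matt describes is easy to reproduce. Below is a minimal sketch using the OpenAI Python client as I understand it (the v1+ `openai` package); the model name is just a placeholder, so treat the details as assumptions rather than a recipe.

```python
# A rough sketch of the cross-domain probe discussed above: ask for a synthesis
# that is unlikely to exist verbatim in the training data.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the
# environment; the model name is a placeholder you'd swap for whatever you use.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "How might Henry Kissinger have advised Napoleon Bonaparte "
    "before the 1812 invasion of Russia? Answer in three short points."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)

print(response.choices[0].message.content)
```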
I guess the objection that's piling up for me there, though I think it's just one of phrasing, is that applying Kissinger's insights to another situation that has been talked about a lot, Napoleon Bonaparte, doesn't seem that much of a stretch, for it to be able to apply those two things together coherently. But I still think that's very impressive, because I remember when I was demonstrating LLMs to a colleague, I asked him to name a sociologist that he liked and he selected one. And then I randomly selected an anthropologist and asked ChatGPT to explain the work of the sociologist from the point of view of the anthropologist, and it did a job that we were both satisfied was impressive, like beyond undergraduate level basically, and it's a comparison that, as far as we were aware, nobody has written about extensively. But in that sense, when you said it's not just doing, you know, whatever the exact technical thing you said there, isn't it still running some kind of system where it's working out that these words would be better placed beside these ones? That's using the training database. So if it isn't doing that, how is it doing that?
No, I agree.
I mean, in a sense, that is all it's doing.
But that's also all, in scare quotes,
that people are doing when we are solving similar problems.
So I think that the question is not whether or not it's sort of doing like heuristic semantic associations and stuff
because it certainly is, but so are people.
I think the relevant question is, is it doing it at a very surface level where it has no
understanding of the deeper connections, or is it doing it more like a person and understanding
the deeper connections there?
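To pin down the distinction being gestured at here, one crude way to operationalize "surface level" is shared-word overlap, while "deeper connections" is closer to similarity in some learned semantic space. The sketch below is only illustrative: the word-overlap measure is computed for real, but the three-dimensional "embeddings" are hand-made toy vectors standing in for whatever a real model learns.

```python
# Contrast a surface-level measure (shared words) with a semantic one
# (cosine similarity of vector representations). The toy vectors below are
# invented for illustration; a real system would use learned embeddings.

import math

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word sets -- a purely surface-level comparison."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

# Hand-made stand-ins for sentence embeddings (toy 3-d "meaning" vectors).
toy_embeddings = {
    "the cat sat on the mat": (0.9, 0.1, 0.0),
    "a feline rested on the rug": (0.85, 0.15, 0.05),  # same meaning, few shared words
    "the cat sat on the committee": (0.3, 0.7, 0.2),   # shared words, different meaning
}

s1 = "the cat sat on the mat"
s2 = "a feline rested on the rug"
s3 = "the cat sat on the committee"

print("surface overlap, s1 vs s2:", round(word_overlap(s1, s2), 2))  # low
print("surface overlap, s1 vs s3:", round(word_overlap(s1, s3), 2))  # high
print("semantic cosine, s1 vs s2:", round(cosine(toy_embeddings[s1], toy_embeddings[s2]), 2))  # high
print("semantic cosine, s1 vs s3:", round(cosine(toy_embeddings[s1], toy_embeddings[s3]), 2))  # lower
```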
And I'll give you one more example, Chris.
I mean, look, it's been given whole batteries of cognitive tests, and people have tested it on all sorts of very serious things.
But I think the most revealing tests are when you test it on a very human-type thing, but like a weird thing.
It's bad at making jokes, but what it's very good at
is understanding jokes.
And you can give it even a very obscure joke made by, say,
the Wint account.
And some of those are very obscure.
There's no context, right?
And certainly nobody, and I've done these searches, because you can easily search for these tweets to see if anyone has kind of explained them, like, you know, didactically gone through and said this is what Wint was getting at, he was alluding to this, whatever. No, obviously nobody does that, because it's boring to explain a joke in general, but definitely boring to explain a Wint quote. So I feel like there's no way it's just sort of learnt that superficially from its training data, and what I've found consistently is that it gives a better explanation of those things than I think I could.
I get that, but the bit that I think still doesn't entirely land for me is, of course, I know it doesn't have the exact explanation for a Wint tweet in the database, but it doesn't need that, right? It just needs to know about people breaking down irony and references; it needs those in the database, and to recognize that Wint's tweets are in a genre of ironic shitposting, which is all over the internet everywhere, and which produces those kinds of responses, and people do talk about that kind of style. So to me it doesn't seem as idiosyncratic; it seems like an example of a genre.
No, but it doesn't explain them in very general terms. It explains very specifically
what that particular tweet means. And, you know, there's nothing mysterious, like, I'm not implying there's anything mysterious going on. I'm just saying it is explaining it like a pretty proficient person would, and it isn't doing it via a very superficial association of words, like, oh, this word was mentioned, whatever. In order to explain it, because there's very little text there
in a funny tweet, right?
You actually have to understand what it means
and make some inferences about that.
And so my point is, it's not that there's some magical ghost
in the LLM machine.
My point is that there is semantic reasoning happening,
understanding happening at a deeper level
that is sort of not so different from what's going on
in our associative cortex.
So it's an emergent thing though, right?
Because it is like, if I'm right, so correct me if I'm wrong,
you're saying like, you know, if you take the human brain,
it's just electrical signals firing across neurons
and I don't know the
synapses whatever like you know the neurobiology better and you can describe all mental activity
in that way yeah but that doesn't really explain like why the person yeah it doesn't doesn't
capture the important thing about what's going on that's right so in the same way you could say oh
look all an llm is doing is making associations between words and that's kind of true but that's like saying all the human brain doing is is it's responding to neural
impulses and and so on it's like it's true at a certain level of description but there is
there is emergent properties there that are interesting and can't be fully captured by the
individual bits yeah yeah i like that so yeah uh see our conversation with Kevin Mitchell for more details.
So, okay, well, we're rounding towards the end of this, but there's one or two more clips. And one
of them is quite refreshing because, you know, he's talking about AI and he said he's not going to spend that much time on the existential worries, and he doesn't. He does address it at the end, but he kind of justifies
why he's not so invested in, you know, spending a whole episode discussing the doomsday scenarios,
unlike Yudkowsky. And this is part of his justification for that.
Of course, there's always a tiny chance, but, guess what? There's a tiny chance that AI will save us from existential threats, right?
That maybe there's other existential threats, whether it's
biowarfare or nuclear war or climate change or whatever,
that are much more likely than the AI existential risks.
And maybe the best chance we have of saving ourselves is to get help from AI.
OK, so just the fact that you can possibly conjure an existential threat scenario doesn't mean that it is a net bad for the world.
But my real argument is with this whole godlike intelligence thing, because I hope I've
given you the impression.
I hope I've successfully conveyed the reasons why I think that thinking about LLMs in terms of intelligence and values and goals is just wrong-headed.
Now, Matt, before he triggers you...
I'm liking that. I'm liking what he's saying.
Okay, yeah, because I was just going to say, even if you object to that, that we can talk about intelligence and values and stuff, I took his point there to be the point he made earlier: that it wouldn't have to import all of the things that humans have, for all the reasons we talked about. It doesn't have to want to, like, not be turned off, or that kind of thing.
Yeah, and like every technology is a two-edged sword, right?
So there's always risks and benefits associated with it.
And just because you can imagine a risk, it doesn't imply a great deal.
So I think he's right there.
But a more subtle point he's making that I wholeheartedly agree with him on
is that the mental model the AI doomers are operating on, and I think a lot of people naturally, is that, like Sean Carroll says, we tend to reify this concept of intelligence, as if we understand exactly what it is, as if we're all very clear about what it is, and you can measure it with a single number: humans are here, octopuses are there, and the AI is wherever, but it's going to increase, and then it's going to be a thousand times higher than us, and then it's going to be godlike. And I think that is a childish way to think about it. What's going to happen is, it's already just like calculators exceeding us in calculation. I think these LLMs already exceed us in certain functions. Like, it corrected my grammar. I asked it a physics question just before and it corrected my grammar. It's better at grammar than me, most of the time. It could be a thousand times better than me at very specific things, but you shouldn't think of intelligence as this unitary thing. Yes, in humans all the different proficiencies tend to be correlated together
and you can talk about a statistical construct called G,
but that breaks down once you're talking about other species
and really, really breaks down
once you're talking about LLMs.
So I think the mental model that underlies people's projections towards this godlike intelligence is wrong.
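For what it's worth, the "statistical construct called G" point is easy to see in a toy simulation: generate test scores that all load on one shared factor plus noise, and the first principal component of the correlation matrix soaks up a large share of the variance, which is roughly what g is. The sketch below uses purely synthetic data, so the specific numbers are illustrative, not a claim about any real psychometric dataset.

```python
# Simulate correlated cognitive test scores and show that a single principal
# component ("g") captures much of the variance. Purely synthetic data for
# illustration -- no claim about real psychometric datasets.

import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

g = rng.normal(size=n_people)                        # one shared latent factor
loadings = np.array([0.8, 0.7, 0.75, 0.6, 0.65, 0.7])
noise = rng.normal(size=(n_people, n_tests))
scores = g[:, None] * loadings + 0.6 * noise         # each test = g * loading + noise

corr = np.corrcoef(scores, rowvar=False)             # 6 x 6 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]             # eigenvalues, largest first
print("mean inter-test correlation:", round(corr[np.triu_indices(n_tests, 1)].mean(), 2))
print("share of variance on first component:", round(eigvals[0] / eigvals.sum(), 2))
```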
So I guess I'm just wholeheartedly agreeing with Sean.
Let's see if we can get to these final thoughts and make you disagree again. I like this merry dance. So here's a kind of summary of his final thoughts.
All of which, in my mind,
will be helped by accurately thinking about what AI is, not by borrowing words that we use to describe
human beings and kind of thoughtlessly porting them over to the large language model context.
I don't know anything about OpenAI's Q* program. Everything that I've said here might turn out to be entirely obsolete a couple of weeks after I've posted it or whatever. But as of right now, when I am recording this podcast, I think it is hilariously unlikely that whatever Q* or any of the competitor programs are, they are anything that we would recognize as artificial general intelligence in the human way of being.
I guess he recovered at the very end there,
in the human way of being, right?
So I...
Yeah, you can accept that.
I can't object because he's right.
People misapply the words in so many ways.
And we here have the benefit of a completely silent Sean Carroll, so we're raising objections which I think he would often agree with, right? Like, well, yes, if you are not importing the kind of human-biased components of those words, the associations, then fine, you can use them. But I also just like that he, again, caveats responsibly, that he's talking about his opinion at a specific period in time, and that it's quite possible developments could make this obsolete, or, as he said earlier, that his opinion could be wrong because of some misunderstanding or things we don't know, that kind of thing. So it's just a nice note to end on, because, unlike a strategic disclaimer, he is honestly disclaiming. Yeah, this is the difference.
It's a bit that I do think some people have trouble with: he means this. He means that if it turned out that evidence came in which showed that he was completely wrong, and the next step actually really was human-style AGI, he would say, well, I was completely wrong.
Well, how about that?
Yeah, no, I know.
Like when a character like Eric Weinstein says,
now I could be wrong,
but the strong implication all through it
is that he's not wrong, right?
It's very, very unlikely that he's wrong.
Whereas someone like Sean Carroll, who is an honest person, really does not want you to take away from this anything more than that this is his personal opinion. It's informed by various things that he knows, and he may feel quite confident about parts of it, but he doesn't want you to come away from it with an undue degree of certainty. So, you know, kudos to him for that. And I hope I haven't given the wrong impression, like I'm debating with him with him not being present.
Because, well, you'd love to, you would like to have a chat with Sean Carroll. That's true.
I definitely would enjoy that. But, yeah, no, I mean, I'm debating with him because I too have naive
opinions that I'm not super certain about, even though I've kind
of talked myself into a sense of certainty.
And we probably both of us will end up being totally wrong.
But it's really interesting.
And I think that the best thing about this episode,
like I really enjoyed this episode.
I listened to it for recreational purposes with no thought of us covering it for the podcast, and then we thought, oh yeah, Sean Carroll, he's actually a bit of a guru, we should cover him. I enjoyed it greatly, I found it very stimulating, and I enjoyed agreeing with him on parts of it and disagreeing with him on other parts of it. And I think that's the best kind of independent podcast material, stuff that gets you invigorated.
I try to make this point repeatedly online and in various places: I do think there is a lot of podcast material for which calling it intellectual junk food is even slightly too kind, right? Because it really is just the superficial resemblance of depth, and it's just pompous ramblings between blowhards. But even if that's the case, if you approach those kinds of conversations
critically, and you don't take all the things in that people are
saying automatically and you're aware of the way that rhetorical techniques work and so on
and you're consuming information critically, you know, you can consume whatever you want. I mean, you can consume it uncritically if you want as well, but there's not much threat in listening to something, even really terrible stuff, if you're listening to it critically.
Yeah, with the correct frame of mind. Yeah. And you might find bits that you agree with and bits that you don't, right? I can listen to Joe Rogan and I can find that he says sensible things. Increasingly it's very rare. But when he does say something sensible, I'm not like, oh,
what the hell, this breaks my mental model of the world. It's just like, yeah,
not everyone is wrong all the time, even Alex Jones. Even Alex
Jones gets things right. But there's no point
where in so doing, I'm like, well, that means Alex is
kind of good or kind of right on this
no. Just that it would actually be hard to be wrong about absolutely everything, but Joe and Alex do a good job.
Well, it's true. You can listen to it, and as long as you approach it in the right frame of mind, there's no harm; you're not going to be contaminated by it. But I will also say that I think you can derive vastly different amounts of value from different sources, and I think you could derive heaps of value from Sean Carroll even talking about something that is not his speciality, like AI. Despite my niggling disagreements, I would listen to Sean Carroll talk about AI happily for hours.
Whereas I find personally Yudkowsky to be boring,
like not really useful.
Like it's not about disagreeing or agreeing.
It's just that I find him very thin.
And so I just think, and you know,
Yudkowsky is a lot better than other people like Eric Weinstein or Joe
Rogan.
Have you considered like you're in a box, right?
I'm outside the box.
Yeah, I know what you mean.
And yes, this is something that, you know, I think is also useful to emphasize on an episode like this: to the extent that Sean Carroll fits a kind of secular guru template, he is talking about a variety of different subjects and he is speaking in a way that is engaging,
he's very verbally fluent.
He's very intellectually authoritative.
He is, you know, like a public intellectual in that sense.
Yeah, yeah, yeah.
And, you know, cultivates audiences in the way that everybody does in the web 2.0,
whatever point we're on now, ecosystem.
I think this is an example
that people can do that responsibly and with insight.
So no, I wouldn't say that anybody should be outsourcing
all the moral judgments to Sean Carroll, right? Any more than I would advise
anyone to do that to any single person. But he's a sensible, reasonable person. And if he's talking
about a topic that you have an interest in, it seems reasonable, you know, to listen to it and
extract what value that you can. And so I want to emphasize that there is material out there in the format of long-form
podcasting by people talking into microphones and monologues
about specialist topics, which is valuable, which is interesting.
And even when it's in a kind of format where people
are ranging across different things, it doesn't automatically make it
suspect.
And I think the kind of key thing for me is the genuine humility, the caveats, and the clarifying of how confident he is and all that.
He is very good at pointing out what level of claim that he's making and what his relevant
expertise is to make that.
And that is the bit which you don't see so much.
That's true, that is an important thing that discriminates someone good like Sean Carroll from the rest of the dross. But the other thing is simply that he's substantial. Agree or disagree with the various points he was making throughout a podcast like this, every point was a substantive point that he'd thought about carefully, that actually meant something, and it was actually interesting to agree or disagree with. And I think that's important too. As well as having the disclaimers and the appropriate degree of humility, and not doing the sort of bad things that will make the Gurometer light up, you actually also have to have some substance to you.
Well, you know, I'd say you don't. My point there was that you could have no substance and do that and it would be okay, you'd be harmless. But when you do have substance, it's more valuable, right? And like you say, he definitely does have substance and expertise, which he applies usefully across the board.
So, yeah, this is an example of a good person
who kind of fits the guru template.
Yeah, look, I'm happy to call him a secular guru, because in the non-pejorative sense I like that label for this breed of independent figures out there in the infosphere, and we need more of them like Sean Carroll. He gets my endorsement. I love you, Sean. Come on the show.
Yeah, so this is what Decoding the Gurus actually is: trying to entice someone to come on the channel.
I'm not. No.
Oh yeah? Oh yeah. So, well, let's see. But in any case,
yes, this was an interesting change of pace after recent episodes that we've done and some of the figures that we covered in the past couple of months. So yeah, good. I think we'll put him into the Gurometer and see what lights up, but essentially it's very clear he's not lighting up the toxic elements. I feel pretty comfortable saying that.
No. He could spiral, we do have a kiss of death on people, so he might yet end up becoming an anti-vaxxer and stuff in like a year's time, but I don't see any signs that that's likely to happen. And actually, I've heard him talk about other subjects, like the IDW and stuff, and some of the things I didn't completely agree with the way he characterized, but he was thoughtful about it and he did make lots of good points, and with most of it I agreed. I'm just saying that I've heard him talk about other topics and he's been thoughtful, and he does seem to do research before forming opinions, which again I think helps. He does seem to put the effort in, and he's also an intelligent and thoughtful person. So those are all good things.
Well, before we
depart for sunnier pastures... isn't that what you say when you're going to die? Anyway, we'll turn to the reviews that we've received. It's that time of year where we've interacted with Sam Harris, and we receive reviews related to that. We typically receive reviews from his more vocal opponents or fans; that's what tends to happen when this occurs.
And we haven't got many yet, but we'll get more.
We'll get more.
There's hundreds of comments on the YouTube and the Reddit
and all that kind of thing.
But I've just got two for you, Matt.
So two negative, I should say, and I got one positive.
The positive review, just to get our confidence up, our self-esteem a bit higher, is: beast academics dominate popular midwits.
God's work, chaps.
Keep it up.
MD picky pick.
Picky picky from Great Britain.
So that's pretty good.
Beast academics.
Beast academics.
That's how I like to think of us.
Yeah, yeah. Now, on the other hand, someone under the YouTube video, I don't have the username, but they responded by saying, and this is in relation to us and Sam Harris: Bert and Ernie meet Aristotle.
I do like that, though.
That is a good thing. I appreciate it.
I just, yeah, so Sam is Aristotle.
I got that.
I got that.
Yeah.
That's a good thing.
That's a nice image, isn't it?
Bert and Ernie meeting Aristotle.
We've got a review from Australia, Matt,
from one of your fellow countrymen,
and his username is disappointed200001,
and the title is Sam Harris.
Oh, God.
So, yeah, here, it's just short,
but just terrible discussion on Gaza, Israel.
You seem to attack Sam using him as a golem to bring in an audience. I certainly
don't agree with all Sam's views, but you instance to
two sides the argument for your gain. In audience
numbers, seems shameful on such an important topic.
The grammar reflects the quality of the opinion.
Just like we said with GPT-4, good grammar isn't necessarily indicative of deep thought, but sometimes bad grammar can be indicative of the opposite. Maybe, Chris, that's all I'm saying.
Yeah, yeah. So, using, I don't even understand, using Sam Harris as a golem? Isn't a golem like a big creature made of material?
Yeah, a golem is a thing that you fashion out of inanimate matter
that you invest.
Clay that you invest life into.
So it's walking around.
How do we do that to Sam?
We use that.
So how is that going to attract people?
You make like a big clay Sam.
Like lurching around.
Is that what people want?
Yeah, Israel, Gaza.
Ethnic cleansing is okay.
Then that brings in the – I don't get it.
So, yeah, the Bert and Ernie one was better.
I like that.
It was more logically coherent.
But that's it.
The good thing about both the negative reviews and the positive ones but particularly the negative ones is that they
were short.
Short and sweet or short and bitter, either way I'm happy. There was much longer feedback, I'm just not rewarding it. But, yeah, so for all the people out there, go review us so we can counteract the inevitable one-out-of-five Harris reviews that will come in the week of that episode.
Or we'll send our big Harris golem out to scare you all around.
So, yeah, that's that, Matt.
That's the reviews.
It was a mixed bag this week.
That's okay. That's what we want. We want a mixed bag. We want nice praise, being called beast academics, but if you're going to get criticized, then I want it done well, and Bert and Ernie meeting Aristotle, that's a good ding.
Yeah, I agree. And now, Matt, the final thing: shouting out some of our patrons, the good people of the SS Decoding the Gurus.
SS? I'm thinking of, like, the starship thing. Isn't it, I think it's USS, you know, USS Enterprise. I don't know, I'm just using it like the good ship Decoding the Gurus. They're the passengers.
Yeah.
We're all in the yellow submarine tooling around together.
Yeah.
Toot toot and all that.
So I'm going to shout out some conspiracy hypothesizers first.
And here I have Ian Anderson, John T, Gary Sivit, JT Gonzo, James Valentin, Lisa McLaughlin,
Sreed Kafal, Jane Elliott, Brad James, Eric Hovland-Rennestrom, Jess P, Gunnar Tomlid, Enrique G. Sola, CLP, SKT90, Joseph Rulliard, Duncan Begg, AP, Petr Florianak, and Lewis Mitchell and Amy McDonald.
That sounds like a lot.
Very appreciative.
Thank you.
Yeah, me too.
I thank them one and all.
I feel like there was a conference that none of us were invited to
that came to some very strong conclusions,
and they've all circulated this list of correct answers.
I wasn't at this conference.
This kind of shit makes me think, man, it's almost like someone is being paid.
Like when you hear these George Soros stories,
he's trying to destroy the country from within.
We are not going to advance conspiracy theories.
We will advance conspiracy hypotheses.
I tell you what, Chris, listening to these inane pricks again,
after listening to so much Sean Carroll, it's the, you know,
I think it's, I think I momentarily just, I don't know,
my thick skin had rubbed off and suddenly it was just obvious again,
just what utter dross they are.
Yep, yep. That's a nice sentiment, but I do agree, especially given the rampage all of those particular gurus have been on recently.
Yeah, that's right. No need to be nice to them.
Well, so now, revolutionary theorists, Matt. Some would call them geniuses. I can't remember the terminology. But we have Kerry Ann Edgehill,
Laura Cruz, A,
Barrett Wolfe, Martin Anderson, Julian
Walker, Ben Childs, Alan Engle,
Hugh Denton, Magnus
Todnitnes, Peter Wood,
John Brooks, Artemis Green,
Nate Candler, Gene Soplesio,
Alan O'Farrell, and BC. Oh, and why not
Fiona, Simon Patience, and Red Balloon
as well.
Yeah, lovely. Top tier, thank you very much.
Nice. Yeah, you guys are helping pay our editor so I don't have to do it, so I love you genuinely for that.
I'm usually running, I don't know, 70 or 90 distinct paradigms simultaneously all the time.
No, you're not.
And the idea is not to try to collapse them down to a single master paradigm. I'm someone who's a true polymath. I'm all over the place. But my main
claim to fame, if you'd like, in academia is that I founded the field of evolutionary consumption.
Now, that's just a guess. And it could easily be wrong. But it also could not be wrong.
The fact that it's even plausible is stunning. Yeah, yeah, it is stunning. So, Matt,
the last tier, the Galaxy Brain Gurus, you're familiar with their work, of course. As usual,
I don't have a fantastic database way to find the new Galaxy Brain gurus. I'll get them.
But I thought what I'll do this week, just to say,
I want to give a shout out to some of the OGs, Matt,
the original, the long-termers, the guys, you know,
that have been there from the start,
that have been Galaxy Braining it up with us for a long time.
And here I would like to mention Madhav, Chris Spanos, yeah, Carolyn Reeves, yes, Nazar Zoba, Max Plan, Adam Session, yep, Josh Dutman, Leslie David Jones, Paul Hand, Garth Lee, Death Stablord, David Lowe, Jay, and Alicia Mahoney.
That's not all of them.
There are many more, but I just took a little smattering there to say thank you.
You know, some of them aren't Patreon people maybe anymore, but that's fine.
They did their duty.
Thank you for the time you were with us.
No, but some of those are names I recognize.
There's many.
There's many who we see at the live hangouts,
and they're still here.
So, yeah.
We salute you.
We salute you.
One and all.
Yeah, we do.
Thank you.
And also Eric Oliver.
I did his podcast, The Nine Questions.
Also a long-term contributor.
So thanks to him as well.
And here we go, Matt, with the nice little clip at the end.
We tried to warn people.
Yeah.
Like what was coming, how it was going to come in, the fact that it was everywhere and in everything.
Considering me tribal just doesn't make any sense. I have no tribe. I'm in exile.
Think again, sunshine. Yeah.
That's it. So, Mr. Brown, that takes us to the end for today. The next guru that we're looking at is TBD.
TBD.
We'll talk about it.
Okay.
Could be someone very bad.
Could be someone very good.
I think it's going to be very bad.
Very bad.
We need to balance out Sean Carroll.
Yeah.
Yeah.
But we'll see. I think the world of streamers is looking tempting.
So let's see what we see there
But yes, we'll be back. And yeah, good guy, Sean Carroll. Yeah, good job. All right, listen to his podcast, subscribe. He knows about physics, he knows about other stuff, he seems very nice. But stay safe out there, you know. He could be wrong, I could be wrong, the AI could be coming to get us. So stay safe, and if you don't get us three more subscribers, we'll send the golem Sam Harris to get you. So just keep that in mind and have a nice day, one and all. Bye.