Hello Internet - H.I. #52: 20,000 Years of Torment
Episode Date: November 30, 2015
Grey and Brady discuss: subvocalization, Liberian county flags, Brady worries about who is driving him and the proliferation of screens in cars, Brady fingers ballots and then brings up coincidences and dreams, Brady tries to convince himself to buy an iPad Pro, what cars should people drive, before finishing off with how artificial intelligence will kill us all. (If we're lucky.)

Brought to You By
Igloo: An intranet you'll actually like.
Audible: get a free 30-day trial by signing up at audible.com/hellointernet
Squarespace: Use code HELLO for 10% off your website
Listeners like YOU on Patreon

Show Notes
Discuss this episode on the reddit
Subvocalization
Instapaper (for the speedreading thing mentioned)
Liberian County Flags
Liberian County flags redesigned
John Nash death
Lyft
Pink mustache
The Hello Internet flag referendum
What Grey was prioritizing
Grey's 'fetish' podcast discussion of the iPad Pro
iPad Pro
Superintelligence: Paths, Dangers, Strategies, Nick Bostrom
Computerphile: Why Asimov's Laws of Robotics Don't Work
I Have No Mouth, and I Must Scream by Harlan Ellison
Black Mirror: White Christmas
Transcript
I tell you what, it is one of the great myths of Hello Internet and CGP Grey folklore
that you are competent and have technical ability.
The last show certainly sparked some conversation in the Reddit.
I couldn't help but notice that it was a show that reached a thousand comments.
People talking about when sir versus ma'am is appropriate. People talking
about subvocalization with many a mind blown. Lots and lots of discussion from the last show.
There was, there was, and I'm sure we'll come to a few of the other things in follow up. But
on this sub vocalization thing, a lot of people seemed really interested in it.
And it is very interesting, but I don't
feel like I have anything else to say. What about you? The thing that I left out of the conversation
last time, which people were picking up on a little bit in the subreddit, was I came across
sub-vocalization in the context of this is not a thing that you should do if you are a well-developed reader.
That this is a hindrance.
This is something that you do when you first learn to read when you are a child.
But that by the time you become a man,
you should be able to look at words and understand them
without hearing a little voice in your head reading the words to yourself.
Yeah.
There was a lot of comments on that point.
But my only follow up is when I came across this, I thought, oh, OK, well, this is very interesting.
Let me see if I can get rid of this sub vocalization. And there's a whole bunch of
things that you're supposed to do. And my experience with them has been a total failure. Like there are exercises you're supposed to do
where you're listening to like a recording of a voice that's counting up in numbers, one, two,
three, four, five, and trying to read and trying to do that. So like your brain learns to not use
the audio part of your brain for this. And I tried that. And the result was I was just incapable of
reading. The one that I thought was the most interesting was there's a bunch of software
out there, which I don't know if you've seen this Brady, but it does this thing where it flashes
words individually on the screen from an article. So, so instead of saying like,
here's an article that I want to read and it's written normally, it just flashes all of the words in sequence in the center of the screen.
Have you ever seen something like this?
Yeah, that does.
Yeah, I have very briefly.
I'm vaguely familiar with it.
Yeah, I believe Instapaper on the phone has it built in.
But there's a few websites where you can paste text and do the same thing. One of the ways in which you're supposed to train yourself out of sub vocalizing is by using something like this cranked at a ridiculous speed.
And so, OK, well, let me try.
I'll try this.
But it was almost comical because no matter how high I cranked it up to where it's like 500 words per minute, I'm just hearing a faster voice in my head.
It's like there's no point at which I can still understand it and there isn't also a narrator.
And it was a bit like when I edit this podcast and sometimes accidentally send it to you in a fast-forwarded mode, where we're talking, you know, two times
faster than we normally do. So I've tried a bunch of the get rid of sub vocalization stuff. And
none of it seems to work for me at all. I just, I'm not sure that it can be gotten rid of.
I guess the question I have is, what's the difference between you subvocalizing and if I was sitting next to you in bed reading the book to you?
Wait, what? Wait, wait. I think there's a big difference between those scenarios.
Hang on, I'm sitting next to the bed.
I'm not in bed with you.
I'm just on a chair next to the bed.
Oh, yeah.
This is way less weird.
Way less weird.
Yeah.
You've got like your hot chocolate and you're getting ready to go to sleep.
And you're like, Brady, can you read me a story?
Like, is that basically what's happening?
You're reading yourself a story?
Like, it seems really similar to that.
Or if I was reading you the story, are the words then coming into your head and you're then reading them to yourself again
for you to think about it? Or it just seems like this unnecessary level of thought.
There's no doubling up, right? I'm not like hearing it twice.
Maybe this is the best way to think about it. Like when we're talking now, right? You and me
are talking. Neither of us are thinking about the thoughts,
right? Like we just, you don't know how you speak, right? Words just appear, right? This is how this
happens, right? And so when I ask you a question and then you answer me, right? You are using a
voice, but you're thinking the thought at the same time that you're speaking it. And for anyone who's done something like a podcast where you speak for a very long time,
and I'm sure Brady, you've had the same experience.
Sometimes you say something and you think, wait, do I actually think that?
I'm not sure that I do think that, right?
Because it's just like a stream of thoughts coming out of your mind, right?
Have you ever had that experience?
You say something, you think, do I think that?
Pretty much every time I speak.
There we go.
So in the same way that you talking out loud is like the same thing as you thinking, it's just like that for reading.
It's almost like if someone put duct tape over your mouth because you weren't able to speak that would impair your ability to think.
That's kind of like what it is internally.
I did read when they're doing like experiments on sub vocalization, they put little sensors
because you are almost imperceptibly reading to yourself.
Like they can see movements in your tongue and your lips and stuff.
So you literally are kind of reading out loud.
Yeah.
I would be really curious to know if that was the case for me. Like,
as far as I know, I sit silently and I don't, I'm not moving my lips or my tongue, but I have
seen these things saying like, oh, you can, under the right circumstances, measure that there's
still electrical impulses going to someone's vocal cords when they're doing this, even if there's no
external side that they're reading out loud. But I guess your analogy of you reading me a bedtime story just really threw me off.
I think perhaps the most straightforward way to describe it is that
me reading a book out loud to myself and me reading a book silently to myself
are not very different experiences.
Oh, really?
Oh, that is weird.
Human brains are weird. Well, some are. I don't know how you read. I don't understand how you read if that's not the
experience that you have. And you're like, you are like imagining things still. Like you are
like picturing the scene, obviously. And, you know, you're imagining the mountains and the hobbits.
And yeah, I have the same. I mean, this gets really weird, right? Like when you think of
something in your head, you can see it, right? But where are you seeing it? I still have that going on, like I'm imagining the scene that unfolds in, say, a fictional book. That definitely takes place, but it really is just like there is a narrator talking over the whole thing.
But so do you just have, like, a scene silently playing in your head when you read?
No, it's just, it's in another realm.
It's in a realm where voices don't exist.
It's like it's your thoughts.
It's your consciousness.
It's that infinitesimal point in the center of your brain where everything happens that you don't understand, but it's just the place.
And there's no, I don't know. Like I said last time,
though, there's a collapsing of the wave function. As soon as I think about thinking,
everything becomes words and everything becomes pictures. But it's only when I think about
thinking, it's not. That's why I think the same thing's happening to both of us. And you're just
incapable of getting lost in it. And you're always thinking about it.
So you're always collapsing the wave function and thinking about the words and the pictures.
I know this is wrong and there are studies into it.
No, no, no.
And it's arrogant for me to think everyone thinks like me.
But that's just what it feels like to me.
It feels like we all do it.
Because as soon as I try to think about that, as soon as I talk to you about it,
suddenly I am reading to myself and everything is a lot more simple and basic.
But that's just because I'm analyzing it.
I just think you're analyzing it too much.
I think you do get lost in reading and thinking.
And it's only when you stop and think about it that it all collapses into this really simple thing.
Yeah, this is exactly what a non-subvocalizer would think, though.
Yeah, well, I mean, how can I argue with that?
How can I argue with that? And I could say, and of course you would say that, because that's what a CGP Grey would say, you know, and then you can't argue with that, right?
Of course. This is, we're fast getting into the realm of inarguability. But yeah, the reason why I do think that you're wrong is because, from the descriptions, I genuinely wish that I could read in this way
that didn't have internal words. Like it seems like it's a much better way to read, but I am
always aware of the narrator. Like the narrator is, is never not there. Mental images can be in
addition to the narrator, but the narrator is always there. Like I can do the thing everyone
can do, right? Where you can, you can imagine a dog and in your brain somewhere there's like a picture of a generic
dog that pops into your head without hearing someone also go "dog", right? So I can have thoughts without a narrator, but reading without a narrator is not possible. Okay, but I would still say that I think the vast majority of my thoughts do have some kind of narrator, and that the picture part of it is much rarer. Like, I have to more consciously imagine the dog for not having the narrator to be a thing that happens.
I know, and I do realize there are academic studies into this. That's another reason I'm wrong. Like, this is a field of study, you know. So I can sit here and be an armchair expert, but I do realize this is a thing.
So I would be curious in the subreddit if anybody has any other recommended techniques besides the listen to something spoken while you're reading, or try to do the one-word-really-fast thing. I'm open to trying other methods to get rid of the habit of sub-vocalizing,
but everything I have tried so far has been hilariously unhelpful.
Do you know what? I haven't told you this yet. What? But I've been buying up stamps and all
sorts of merchandise with the Liberian county flags on them. Have you really? Yeah. Just today I got an envelope that was sent like during like the Liberian war
or something with one of the stamps on it that's been postmarked in Liberia
and I'm loving it.
I'm loving it.
I'm getting really into stamps and postcards and that whole world of mail
and stuff.
I think I'm becoming a fully fledged nerd.
Like the one thing that I didn't do that's nerdy is stamp collecting. And I think I'm going to get into stamp collecting.
Well, you know, there is a whole world. There's a whole world to get into with stamp collecting.
I know. I know. Well, I mean, I've already, obviously, I've already started with my crash
mail. But now...
Yes, the crash mail you so proudly showed me last time I was there.
Yeah, I'm gonna have a whole bunch of other Liberian stuff to show you next time.
Oh boy.
There was a thread on the vexillology subreddit. Very often on there they do redesign projects, which I actually think are some of the most interesting things that appear on that subreddit.
Sometimes they'll just do flags in a particular theme, like Canada-ize every nation's flag. So you make a Canada version of all the
flags. But sometimes they just do a straight up redesign. And so someone who actually listened to
the show, and it's Foodman Dunian, he redid all of the Liberian county flags. And I will put the link in the show notes.
I am very impressed with this redesign.
And I think the redesign is really interesting
because I can't figure it out
because I look at the redesign
and these flags are still very, very busy flags.
But I like them all.
But I wonder if it's because my brain
has already fixed its point of
reference as those horrific, horrific original flags. And so my brain is going, oh, these flags
are much better than those old flags. So I feel like I have a hard time seeing them objectively,
but I think they are very interestingly done redesigns.
Do you know what my problem is with all these redesign
competitions and things like that? Because of these rules of good flag design and this kind
of accepted style and grammar of the time, all the flags begin looking a bit the same.
And I always think that, and that's one of the things I like about the Liberian County flags,
if I can like anything, it's that it's different.
It's so refreshingly different.
And isn't that a great thing about some of the wacky flags,
whether it's something really crazy like Nepal
or something that's just a bit different like Brazil, for example.
If you didn't have those points of difference,
flags would be the most boring thing in the world.
You need some of the crazy guys to make flags work.
And I think whenever you have these little competitions
where people say,
let's imagine we didn't have the crazy guys.
Let's make the crazy guys the same as all the other guys.
All of a sudden flags become really dull.
So I always think it's a bit unfortunate
when people have these little,
let's take the wacky flag
and turn it into all the other ones. And it just, it leaves me cold. Like if you're going to make a new flag,
okay, make a new flag and make it good and follow the rules of design. But there's something about
all these, if only this crazy flag was like all the other ones moments that people don't get it.
They just don't get it. All right. I am more sympathetic to your point
than you might think that I am, Brady.
The thing that I think complicates this
is that you and I are looking at it
from the perspective of flag connoisseurs,
potentially professionals
who help other nations develop their flags, right? This is
our perspective. So we see many, many flags. People send us on Twitter and on subreddit,
many more flags. We've seen a lot. And so I think from that perspective,
the more unusual becomes more valuable.
Like a welcome respite from the sameness of every single flag.
Yeah.
Yeah.
It feels like, oh boy, isn't this quite a relief?
And I think this is something that you can see sometimes with people who are
professional critics in any field.
Sometimes.
We're professional flag critics.
Yeah, that's exactly right.
We kind of are. We do earn money by criticizing flags.
So I guess we are professional flag critics.
Yeah.
Quick, someone add that to the Wikipedia pages.
Well-known professional flag critics.
Touted in some circles as potential advisors to the government of Fiji.
Right.
But so I think that's why, like movie reviewers, you know, sometimes if they're movie reviewers
you follow, they'll occasionally like movies that you feel like, God, how could they possibly
like this terrible, low budget, awful indie movie?
And I think it's a bit of the same thing where they're like, man, it's just so interesting
to see something that's different, right?
Even if it's not great.
But the thing with flags and the reason why I will still push back against you on this is that I think a vital part of a flag is not just its uniqueness, but it's that the people who live under that flag should want to put that flag on things that they have. So I feel like everybody
should have a flag that they can attach to their backpack, right? Or that they can, you know,
fly from their house. Everyone should have that. And so the original Liberian County flags,
if you lived in one of those counties and you were super proud of it and you wanted to
demonstrate that to the world, you had a terrible, terrible choice of flag. So that's why I'm going
to push back to you is I think everybody deserves to live under a flag that they can proudly fly.
Have you yet seen, because I have not, have you yet seen anyone from Liberia, or anyone who lives in any of these counties, criticize the flags and say they don't like them? Because, I mean, you and I have had a right old laugh, and we've seen everyone on Reddit having a laugh and saying these are the worst flags in the world. But it's entirely possible the people of River Gee County think that their flag's awesome.
It's got to be, it's got to be Gee.
River Gee County,
I think.
River Gee.
You told me just to say it and go with it.
So I did.
And now you're stopping me.
Yeah.
You gotta,
you gotta own it,
Brady.
You gotta push back.
Well,
I thought I did own it.
I'm sorry.
I would never want to just give you a hard time.
No,
but I mean,
maybe, maybe they do.
Maybe they're incredibly proud.
And if we were saying these things on a podcast in Liberia, we'd be tried for treason.
I mean, this is the part where I have to admit that I know almost nothing about the great nation of Liberia.
Except you know that River Gee County is pronounced like that.
I definitely know that.
Yeah, yeah.
I'm an expert in pronunciation for Liberian counties.
But yeah,
so I don't know.
I know.
I have to start calling you C.
Guy P.
Grady.
No,
but it's double E.
Don't you know,
don't you know pronunciation rule?
No,
I don't.
I don't know them either.
And it's,
it's because nobody in English knows, because English doesn't have any pronunciation rules. English just likes to pretend that it does.
I do know, I do know that River Gee County has a place called Fishtown, so I think that's awesome. Although it does seem to be landlocked, but I guess they have freshwater fish. Or it's just a great name.
Yeah. But so I have seen neither proponents nor opponents of the Liberian county flags that are from Liberia.
So I have seen no feedback on either end.
And my guess is this is a lot like the city flags in the United States, which is that just most people don't have the slightest idea
what the flag of their local city is. This is normally one of these times when I would make
a comment like, oh, we're going to be hearing from everyone from Liberia. But I don't I don't
imagine I don't imagine that we're actually going to get a lot of Liberian feedback on this one.
This episode of Hello Internet is brought to you by Igloo. Now, many of you might be working at a big company with an intranet that is just a terrible, terrible piece of software to work with.
I mean, actually, is it even really a piece of software?
It feels much more like it's a bunch of pipes connected to old computers held together with duct tape.
Most intranets are just awful. I used awful intranets at my school. But Igloo is something different. Igloo is a feeling of levity compared to other intranets, because Igloo is an intranet you will actually like.
Go to igloosoftware.com slash hello and just take a look at the way Igloo looks. They have a nice, clean, modern design that will just be a relief on your sad, tired eyes compared to the intranet that you are currently working with at your company.
And Igloo is not just a pretty face. Igloo lets you share news, organize your files, coordinate calendars, and manage your projects all in one place.
And it's not just files in a bucket either.
Their latest upgrade, Viking, revolves around interacting with documents, how people make changes to them, how you can receive feedback on them.
If you're the man in charge, there's an ability to track who has seen what across the intranet.
So you can have something like read receipts in email where you know if everyone has actually seen and signed off on whatever document they need to.
So if your company has a legacy intranet that looks like it was built in the 1990s, then you should give Igloo a try.
Please sign up for a free trial at
igloosoftware.com slash hello to let Igloo know that you came from us.
So the next item I want to talk about is Uber.
We just did Uber last week. And the week before.
Yeah.
We're going to have an Uber corner at this rate.
We are. I just did have a moment after we'd spoken about it, because I caught, I think, three Ubers in a short space of time. The first person who drove me across San Francisco, I was saying to him, where are you going next? And he said, I've got to go to work, I'm actually a bartender. And then the next girl that picked me up to take me to the next place was in a hurry as well, because she actually wants to be, like, a singer in a band, and she was, like, auditioning that night.
And then the next person who drove me to the next place was like a mum who was picking up her kids
from soccer practice after she gave me a lift. And it suddenly occurred to me, and I know this
is kind of true for taxi drivers, but it seems even more the case with Uber.
Who on earth is driving me?
Like, who are these people driving me at 70 miles an hour along highways who could kill me with the turn of a steering wheel?
And they're just like this random selection of people.
And their only qualification is that they have a mobile phone.
And they have a driver's license.
Well, I didn't see their driver's license. I'm assuming they went through some process to prove that, but. The driver's license process is very
rigorous. Very rigorous. Okay. They have a driver's license. And it suddenly just occurred to me,
I know nothing. I mean, has this person had 30 car crashes? Are they, are they, I don't know,
like, I still like Uber. I still think it's cool. Like, it really won me over. But there were a few moments.
I think I'm quite sensitive to it, especially since we spoke earlier about the terrible car crash when that mathematician John Nash died in when he was going back from the airport.
And that was a taxi crash.
Right.
But ever since then, especially when I'm in America driving from airports along highways, I'm always thinking, I'm always very conscious that my life is in other people's hands, much more so than when I fly.
Yeah.
Probably because I can see the person driving using their mobile phone and stuff.
Yeah.
And I think driving in America is scarier.
Because I Uber most of the time just around London.
And there I'm aware, like, okay, even if we get into a car crash,
how fast can we possibly be going in a head-on collision?
Exactly. London. London traffic.
Yeah. Whereas in America, you have big stretches where, yeah, you can get up to 70 miles an hour
and then you have a head-on collision with somebody else going 70 miles an hour in the other direction,
right? Driving in America is definitely more of a
dangerous experience. Also, the fact that Uber is such a mobile phone oriented platform, the Uber
drivers, even more than taxi drivers, always seem to be attached to their phone. They're always using
the maps. They're always using the apps. They're very phone obsessed. And I think mobile phones
are very dangerous in cars. And I'm very conscious of
how often they're looking at their phones and they've got a map sitting in their lap and stuff
like that. I just, I think I actually said to one of the drivers, like, do Uber give you some,
have Uber built something into their app where you can't use the phone while you're doing this
or that? Because you guys are just always on your phones. No, no, no, there's nothing like that.
This again is the interesting difference of how things are around the world, because again, at least in London, the phones that they get are only usable for Uber, and they are issued by Uber. They're like factory-installed iPhones that run Uber and nothing else, which is why in London,
almost all of the drivers have hilariously at least two and sometimes three
phones attached to their dashboard precisely because the Uber phone can only
be used for Uber.
And so they bring up other stuff on the other phone. So they'll have, like, two different, uh, like, software for routing the directions. Like, they'll load it up on Google Maps and something else. But so I'm always aware of, like, this many, many screen phenomenon at the front of the cars. And it's extra funny when whatever car they're using has a built-in screen that they're obviously not using, because their phone screens are just superior. So it's like, there's four screens at the front of this car.
It's like, okay, you've got the Uber phone, you have your secondary GPS,
and you have what is obviously your personal phone
and the built-in screen in the actual car itself.
It's a lot of screens.
The other thing that came up time and again when I was talking to Uber drivers
was this rival app called Lyft.
Yeah, now this is something I've never used because I believe it's only in the United States.
I don't think it's in the UK, but I've always gotten vaguely the impression that like Lyft
is for hippies. Like it's a ride sharing kind of thing.
Oh, okay. I didn't get quite that impression, but-
They used to have like pink mustaches on the front of their cars, you know.
Okay.
This is the kind of company that it is in my mind. I have no idea if this is true.
Most of the drivers were using both Uber and Lyft simultaneously.
And they all preferred Lyft. And they gave me a few reasons.
One of the big reasons was the ability for passengers to tip.
And I did you proud, Grey. I did you proud. I gave them a real hard time about that, and I told them why I didn't like that, for the obvious reasons. You know, it recreates the tipping culture, and you could start getting assessed based on your tipping.
But actually, what they told me, and I was told this a few times, I haven't checked it myself, but I was told it a few times.
The tipping actually works in quite an interesting way.
You do the tip afterwards anonymously via your phone, and they don't find out who tipped
them.
And at the end of the day or at the end of the week, they just get their tips and they
don't know where they came from.
So they like it because if they do really well, it gives them something to strive for
beyond just getting another five stars. They could get the tip, or if you're really pleased with them, you could give them a tip. But it did sound like that pressure and awkwardness wasn't there, and there was no judging, because no one knows who tips who. I don't know if it's true. That's what they said when I challenged them.
Sorry, that was Lulu. Lulu actually just shut the door.
Yeah, she's getting pretty smart, huh?
She's a clever dog.
Oh yeah, you just sent me one of those Lyft cars with the mustache. Apparently this is a thing that they no longer do. But I was suddenly thinking, am I a crazy person for imagining that there used to be pink mustaches on cars? And no, I'm not a crazy person. I looked it up, and yes, this is something that Lyft used to do.
That way that you describe tipping is a very interesting idea that I haven't ever come across before.
The idea of delayed mass tipping.
I think my initial reaction to that is I find it much more acceptable.
I'm even thinking like in a restaurant,
if tipping worked that way, right, that you could do it later. And it's distributed amongst a large
number of customers so that the waiters don't know directly. I think that's interesting. I think
that's a very interesting idea. People are fundamentally cheap though, aren't they? So
I think without the social pressure of tipping, the tips may come down.
This is why my fundamental thing with tips that I always need to remind people,
when I'm arguing against tips, part of the argument that is unspoken
is that you have to raise the wage for people who depend on tips.
Yes.
I'm not just Uncle Scrooge here thinking let's take away these tips and not add anything
else. Like I would rather raise the wage and remove the tips. I think if under those circumstances,
if tipping was not required and it was done later and anonymously, I think I would probably very
rarely do it. And again, like with all the other stuff, it's way more about just the like the
having to think about it part.
But I don't know.
I don't know.
Maybe I would just set it as the default amount of tip.
I don't know.
It's an interesting idea.
That's a very interesting idea that I haven't come across before.
I have to think about this for a little bit.
So we have a note here about the vote: the deadline for our vote looms for the flags.
I mean, this could possibly be our final warning before, I mean, the next time you listen to the
podcast, it will almost certainly be too late to vote. So this could be the last time you listen to
the Hello Internet podcast and still have the option of voting in our flag referendum.
That's how high the stakes are now.
Yeah, this is going to be the last podcast before, yeah, before, uh, we count the votes, I guess.
Yeah, I think so. I was gonna say, it's certainly going to be the last podcast you listen to where you have a chance, a chance of sending a postcard that makes it in time. But even that, I'm realizing as we're speaking, is somewhat in doubt, because we are recording this podcast at our usual time, but this one may be out a bit late, because I have some other things that I have to prioritize above it. So I actually don't know when this one is going to go out and how much time there will be. It may be that you have to be in the UK to get the postcard in on time. We'll have to see.
Just lately I've been adding three or four days to every date you set
as well for the podcast. If you say it's going to be out on Monday, I sort of say to myself,
Thursday. Yeah, that's an excellent piece of advice. It's funny because I try to do that to
myself when I make estimates, you know, because I'll come up with an initial estimate and I'll go,
yeah, but I never make it on time. Let me add a few days. And of course,
it's like, you can't overestimate for yourself. You're still always wrong,
even if you try to incorporate your own overestimating. So yeah, whenever I tell you,
Brady, any deadline, you should just automatically add a few days to that. I do. I do. And although the deadline is looming, I have next to me right now probably over 1,000,
but probably closer to 2,000 postcards in a box with votes.
Here's a listen.
Here they are.
Here's some of them.
That is the sound of actual ballots in our election.
Yeah, well, you weighed them.
And then I was asking you, I was pestering you for a while to weigh 10 of them.
Yeah, so we could do an estimate for the total amount. And at least when that was, maybe about a week ago, the calculation came out to be about 1,800 postcards then, and I presume that you've gotten more since that point. So last time we were discussing, we were thinking, like, oh, maybe we'll get a thousand, and we're clearly going to get double that at this stage. So yeah, it's going to be
a lot of votes to count. That's for sure. I know. I love looking through these, by the way. I know
you keep telling me off and telling me not to, but. Yeah. Listeners, listeners, Brady keeps
spoiling himself and me by constantly going through, fingering, looking at all of these postcards. I'm just minding my own business, and Brady sends instant message after instant message of interesting postcards, and I feel like they're just spoilers. I want to go there and just count them all and see them all at once, but Brady can't help himself. You're like a little kid.
I'm not telling you what's getting a lot of votes or what's going to win the vote. I'm just sending you the pictures.
But it's still spoilers. You know what that is? That's
when someone says, oh, the movie's great. There's a twist. I haven't spoiled anything, but I'm just
telling you that there's a twist. No, no, it's completely different. No, it's not. It's completely
different. It's exactly the same. Let me tell you why it's completely different. Because the election
is all about what's on the back of these postcards. Who's voted for what?
Right.
I have sent you or told you nothing whatsoever about that.
Nothing.
And now the only thing I'm spoiling is where some of them are from or some of the funny pictures.
But trust me, Gray, there is no way in one day
you will be able to get anywhere near seeing them all.
It is overwhelming how many there are and how different
they all are. So, like, if I send you some funny one that's been sent of some bridge in Norway,
like, you probably wouldn't have seen it on the day anyway, because we're going to be concentrating
on the back of the postcards mostly that day, aren't we? So, I'm not spoiling anything. I'm just
excited. It's like I've got all my presents and I just want to feel the presents a bit.
Yeah, were you the kind of kid who'd open Christmas presents early?
I bet you were.
No, I'm not.
Definitely not.
No, you definitely were.
Definitely not.
But I tell you what, I can't wait to do the count.
I know it's going to be won by one that I don't want to win.
I just, I feel it in my bones. But I don't know. I do like them all, so it's going to be all right.
I am going to act like a monarch and I have officially decided not to vote in the flag referendum, unless, unless by some miracle, it's a tie. If it's a tie, then I think I will cast a
ballot. But that's my thought, is that I am not going to cast a vote. Because I think when you
write something down, like in my mind, I still can't place these flags really in a definitive
one to five order. And I think when you sit down and you write something out, it solidifies something
in your mind. And I think, you know what? No, no, here's what I'm going to do. I'm just, I am
leaving myself open to the hello internet nation, ready to accept what they decide should be the flag. And I think writing down an ordered list
would bias my own feelings toward the actual election. So that's my conclusion. I am not
going to vote in the election. But have you sent a vote in, Brady?
I have not. And I'm thinking pretty much the same way as you that I like the idea of having not voted.
There's only one thing I hope for the election.
I hope secretly in my heart that it goes to a second round.
I hope that one flag doesn't win it in the first, that like doesn't get over 50% in the first round.
I so, so hope that we have to distribute preferences,
because that's the thing I'm most looking forward to.
Yeah, I will be disappointed if we don't have to distribute preferences,
but I would be shocked if one of them gets more than 50% on the first round. I will be absolutely shocked if that occurs. But I will also be deeply disappointed in a way that we
don't get to crank through the mechanics of a
second preference round of an election. I had an email I got sent today that was all about
coincidences. And I thought, oh, this is amazing. And then I was thinking, how could I possibly
bring this into the podcast in a way that would make Grey even pretend to be interested?
You've already lost the battle, man. Yeah, I know.
I thought of like nine, ten different angles, different ways I could sell it to you. And in
the end, I just threw it away. I thought there's just nothing about coincidences that could ever
excite Grey in any way. Of course not. Of course not. I mean, do you want to try to sell me on the most amazing one?
No, I don't think you would. I think two guys in Tibet could start their own podcast called Greetings Internet, and they could be called Bradley Aaron and CGP Brown, and you would just say, oh yeah, well, of course that's going to happen. There are so many people making podcasts these days and there are only so many names in the world. Of course that was going to happen eventually.
Yeah, that is exactly what I would say. I think, I don't know if I've told you this before,
my favorite example of coincidences is the Dennis the Menace comic strip. I don't remember if I've
told you this, but Dennis the Menace was published in the United States, I think it was just a post-World War II comic strip when it started. But on the same day that it debuted in the United States, in the United Kingdom someone else also debuted a comic called Dennis the Menace with the exact same premise. So it sounds like not only did two people come up with the same idea,
but they ended up publishing the first comic on the same exact day.
So this is why like coincidences like that.
Yeah, of course, you're going to get coincidences.
It's just it's almost impossible not to when you have a huge number of people.
So it's like they can be interesting but they're also
just totally unremarkable and the problem that i have with coincidences is usually people then
want to try to like look for look for meaning behind them it's like no there's there's no
meaning what there is is there's just billions of people on earth it would be astounding if
there weren't coincidences somewhere that's pretty much why I don't talk to you about coincidences.
It's a good decision.
It's like why you shouldn't talk to me about your dreams.
Yeah.
But I think there's more to your dreams, because at least your dreams...
Oh, no, don't even start, man.
Okay.
Because I think with the right amount of knowledge and expertise,
you might be able to glean something from dreams because they are based on,
you know, your brain and inputs and outputs.
And I'm not saying I have that expertise and I'm not saying I want to sit here
and talk to you about my dreams, but I'm just saying.
No one has that expertise.
I'm just saying there is something to dreams.
Like there is, like, you know, there is something to that, that's not gobbledygook. It's just beyond our ability to understand it, and therefore we imbue it with silly meanings that we shouldn't.
Even, listen, when you say that it's beyond our ability to understand, you're implying that there's something to understand there, as opposed to what it is, which is nightly hallucinations that you
connect into meaning later on, because that's what the human brain does. It's a pattern creation
machine, even when there's no pattern there. Like that's all that happens.
I don't believe that. I don't believe that. Because I'm not saying they have like any
predictive power.
Yeah, yeah.
Obviously.
Yeah, if you were saying that, I mean, I'd start carting you off to the loony bin right now.
But, I mean, you can't deny that, you know, if you're having a stressful time in your life, you have a certain type of dream.
And if there are certain things going on, your dreams change.
And, like, there is a correlation between what your dreams and what's happening in your real life.
I mean, you must see that.
You must acknowledge that, surely.
You know, when people are going through traumatic times, their dreams become more traumatic.
The link may not always even be that direct, but there is a link between what's happening in your dreams and what's happening in your life.
Yeah, because your hallucinations are constructed from pieces of your life. How could it be any other way? But yeah, I mean, like, I will totally grant you
that there is a correlation between what happens in your life and what happens in your dreams. And
the worst example for me of this ever was my very first year of teaching. Me and this other NQT that I worked with, we both discussed how in that first year,
in the first few months, the worst thing ever was you would spend all of your waking hours at work,
at school, doing school stuff. And then because that was your only experience, you would go home
and your dreams would be dreams about being at school and you'd wake up and have to do it all
over again. And it felt like an eternal nightmare
of always doing school. So like, yeah, but then, but that's just a case where it's like, you only
have one thing to dream about and it's the thing that you're doing all day long. So, so of course
there's going to be some correlation, but that doesn't mean that there's like meaning to be
derived from the dream. Like that, I think that's just a step too far.
Well, let me put this to you then, Mr. CGP Grey, who always thinks that humans are merely computers.
Yeah.
Your computer doesn't do this. Like, if a bunch of stuff came out of your computer, or you were looking through all this sort of code and stuff that was going on under the hood of your computer, you would never just completely dismiss that and say, oh, that's just random and means
nothing because it came from your computer. And therefore, even if it was something it wasn't
supposed to do, it came from something and it has a cause and the right expert could look at it and
say, oh, yes, I see what's going on here or something's gone wrong or this is what it's doing
because a computer can only do what a computer can do. And therefore, if a brain is our computer,
if it's serving up all this gobbledygook and you're just saying,
oh, that means nothing, it's just hallucinations,
you should ignore that.
Well, no, because if my computer is doing something,
it must be doing it for a reason.
Like, there must be...
Like, I'm not saying we're supposed to remember our dreams
and then use them in our life.
This is always what you do, Brady.
What?
Like, I can't... Always, always with you, Brady, you're always moving the goalposts underneath me. And now you're having a discussion about, do dreams serve a function in the brain? And my answer to that is obviously yes. Like, humans dream. There must be something that the brain is doing during this time that is useful to the brain.
Otherwise, it wouldn't do it.
But that doesn't mean that there is meaning to be derived of our subjective experience of what is occurring in the dream state.
Like that's a whole other thing. Are you telling me if I gave you some machine that was able to completely project
someone's dream, like record them like a... Yeah, yeah, yeah. Let's imagine that exists. Yeah.
Yeah. Imagine I gave you that and I said, I'm going to give you that person over there's dreams
for the last 10 years. Are you telling me that data is useless?
No, I'm not saying that data is useless. Because we just said before that you could
derive probabilities about a person's life from their dreams.
Like, oh, this person looks like maybe they're a teacher because they went through a big phase where they were dreaming
about teaching all the time. But that doesn't mean that there's anything for the dreamer
to derive from their dreams. But you're asking me, like, is a machine
that is capable of peering inside someone else's brain a useful machine? Like, well, yes, obviously that
would be useful. You could derive information from that, of course. It'd be almost impossible not to.
I'm just saying that I don't think there's anything really to learn from your own dreams.
And I also have this very, very deep suspicion that if this machine existed that allowed you to
watch someone else's dream or watch your own dreams, I am absolutely confident that being
able to see them objectively would lay them out for the borderline nonsensical hallucinations
that they are. Because I think when you wake up later, you are imposing
order on a thing that was not full of order at the time. That's what I think is occurring is
you wake up and like you're constructing a story out of a series of nonsensical random events.
And so then you feel like, oh, let me tell people about my dream. And you know, when you listen to
those stories, they're already borderline crazy stories.
But I think, like, you've pulled so much order out of a thing that didn't exist.
So, yeah.
Yeah.
I mean, I agree with that.
I agree.
I agree that, you know, even sometimes the dreams you remember, they're pretty freaky and weird and they're all over the place.
Right.
And it's almost impossible for a human to relay something like that in a way that isn't a story.
Like I think that's just the way our brains remember things.
I just don't think that it's like unusable.
I think maybe in the future when we understand things a bit better, we may be able to get more use out of them than we realize.
I don't mean use, I mean almost like, uh, diagnostic use, I guess, is what I mean.
Right, right. But again, you're talking about use to third parties.
Yeah.
But not use to you, the dreamer. No, because again, you're describing a machine that can look inside someone's mind, and I would say, yes, obviously that is useful.
Yeah, but, but, like, a third party might be able to use it to help you, though. So.
Right. But I'm saying you looking at your own dreams,
like, okay, whatever, man, you're just reading the tea leaves of your own life, right? There's
nothing really here. You're just everything that you think is there, you are putting there. There's
nothing really there. That's dreams. Today's sponsor is audible.com, which has over 180,000 audiobooks and spoken word audio
products.
Get a free 30-day trial at audible.com slash hello internet.
Now whenever Audible sponsor the show, they give us free rein to recommend the book of
our choice.
And today I'm going to tell you about one of my all-time favorite science fiction books.
In fact, it's probably my all-time favourite book, full stop.
It's called The Mote in God's Eye by Larry Niven and Jerry Pournelle.
Basically, this is set in the future and humans are travelling all around the galaxy.
There's this area of space called the Coalsack that some people say resembles the face of God.
There's a big red star in the middle that supposedly looks like the eye.
And in front of that eye, from some angles, is a smaller yellow star.
And that's the mote in God's eye.
So that's where the title comes from.
Now, humans have never been to that star.
But all that changes in this book when some serious stuff goes down.
And what they find there, well, it's pretty important to
the future of everything. It's a really clever story. I remember being really impressed by some
of the ideas in it. And the audiobook weighs in at well over 20 hours. So this might be a good
one to settle in for your holiday break. Now, I've said before, audiobooks are a great way to
catch up on all sorts of stories. I love listening to them when I'm out walking the dogs or on long drives.
I know a lot of people have long commutes to work.
Audible is your ultimate place to get these audiobooks.
And if you follow one of our recommendations from the show
and you don't end up liking it immediately,
Audible is also great at letting you trade it back in
and getting one you do like.
I'm sure some of you know I've done this once before
and it was easy peasy, no questions asked.
So go to audible.com slash hello internet
and sign up for your free 30-day trial.
Our thanks to audible.com for supporting us.
That book recommendation again,
The Mote in God's Eye.
And the URL, the all-important web address,
audible.com slash hello internet and they'll know you came from the show.
All right, Brady, you are back from America. Yeah. Have you weighed yourself? Have you had
the bravery to weigh yourself? I did one a few days ago after I got back and I had increased by 1.3 kilograms.
1.3 kilograms.
And how long were you in America for?
Three weeks.
I mean, honestly, I feel like that's not too bad.
I felt like I dodged a bullet, to be honest.
But I haven't been eating well since I got back either.
So, I think it may have gone up even more now. There's always an America half-life where you come back and because the food is so good in America, it takes you a little while to adjust so you would still eat crap when you return.
Even though I have always promised myself on the plane coming back from America, it's like, oh, no, I'm going to be really good now.
But it's like, no, no, it never happens like this.
You need a few days to adjust.
Yeah, you've got to wean yourself off all that fat.
Right, yeah. And just before we recorded, you sent me a picture of a pizza with Audrey looking at it. Something like Super Spectacular Pizza was the name of it or something. It's just some ridiculous name.
It was like, it was like the Fitatron.
5,000 calories, that's for sure.
Yeah, that's what it was. Yeah.
But yeah, so I gotta say, I think you could have definitely done way worse. I think if I was in America for the same period of time, I would have done way worse. So I'll agree with you there. You dodged a bullet.
Dodged a bullet on that one.
How are you doing?
Um, so it's interesting because, I mean, it's been basically a month since we did a weigh-in
because you were in America and we said, oh, we're not going to do it while you're there
because it couldn't be consistent.
And I think I realized that with you, my weight buddy, gone, I was thinking about this stuff
just a little bit less maybe.
And so I was actually quite surprised when I stepped on the scale today, I was essentially within the measurement error, the exact same weight that I was a month ago.
I was like 0.3 pounds, which is zero kilograms down.
But, you know, my daily weight varies by much, much more than that. So it's just interesting to see that I've hit like a little plateau
that has stayed roughly the same for a month.
But I was just surprised that because we hadn't done the weigh-in,
it just hadn't even crossed my mind that my weight hasn't moved in quite a while.
But you're weighing yourself every day.
Yeah, I am.
But I think there's something...
It's like my brain isn't doing the
comparison to the fixed point of the last weigh-in. Like, I was just aware today that I had no idea what the last weigh-in number was, and I had to go look it up and then do the math. So it's like my brain was pushing it to the side. But now that you're back in the UK, now that we'll be weighing in again in two weeks' time, I think maybe it'll be more at the fore of my mind,
but maybe not.
Or maybe I'm really stuck at a plateau
and I need to change things up again
to continue the weight loss.
We will see.
All right.
Well, hopefully I can get my act together.
I've been, I'm just in a spiral of food naughtiness
at the moment, but I need to,
I need to get my act together.
It happens to the best of us.
It happens to the best of us.
I wanted to quickly ask you about the iPad Pro. Oh yeah?
As you know, I don't listen to your fetish podcast, but you did talk about it on that
I understand. Yeah, yeah. I picked one up
on the day of release. All I want to know is should I
get one for Christmas? Because I haven't, I don't,
there's nothing I really want for christmas
and my wife's like, well, I've got to get you something. And I don't want an iWatch anymore, an Apple Watch. I've gone off that.
Uh-huh, uh-huh. Probably for your own good to go off that.
Um, so, iPad Pro. I do like the idea of it, although I have absolutely no use for it.
I think I've said to you before,
I'm a sucker for anything with Pro in the name.
Yeah, I think this is why you're getting drawn in by this device.
It's pro and Brady thinks,
ooh, I would like to have the pro things.
Yeah, I like, I'm a sucker.
They should have called YouTube Red YouTube Pro.
Actually, it's not a bad idea.
That would have made me think it was awesome.
I would have been, oh, well, I mean, I like YouTube,
but I prefer the Pro version myself.
So, I'm like that with everything.
So, like, I did get the original iPad and used it, like,
eight times and then put it in a drawer.
But now that there's an iPad Pro, I'm like, oh. I love that you're falling for this.
Oh, and I'm completely open about it. I love that. I love that you fall for it and that you also know this about yourself. I'm wondering what's going to happen when Apple inevitably
makes the Apple Watch Pro. I'll definitely get one of them. Because it's the Pro.
Right.
Pro.
Who's going to say no to that?
Exactly.
Like, you haven't got a Pro.
Oh, what's wrong with you?
You call yourself professional.
Should I get an iPad Pro?
Okay, so that's a hard question to answer because...
You either say yes or you say no.
Okay. No, okay, here's my thinking about this. Let's say I didn't know anything about someone and they just needed to buy an iPad, and they said, which iPad should I buy? If I didn't know anything about the person, the correct answer is to buy the iPad Air 2, which is like the medium size, super light one.
And then if you have a particular reason to get the Pro, you should get the Pro. But I don't have
any idea what you think you're going to do with the iPad Pro, aside from just feel a smug sense of satisfaction that you own the pro version of this device.
Like, what do you imagine yourself doing?
That pretty much sums it up, I guess.
I don't know.
All I want for Christmas is a sense of smug satisfaction.
Money can't buy that.
Well, actually, yes, it can.
That's the best thing money can buy.
Yeah, it's the best thing money can buy. Yeah, it's the only thing money can buy.
I just feel like I want a new toy, you know?
Yeah, you want a new toy?
Here's the thing, like, it's huge in person.
It's surprisingly big in person.
It feels like a dinner plate in person.
Actually, do you have, your laptop is like the 15-inch MacBook Pro, I think. Is that right?
Yeah, I haven't got it here. It's, uh, yeah, yeah, yeah, but you own that laptop.
Yeah. The iPad Pro is essentially the size of that screen.
Right, right.
That's the size of it, within like a quarter inch. Right. So, okay, it is a big, big screen. And if you're not planning on doing work on it, like I got the iPad Pro to do work.
And so far, I absolutely love it for work.
Like the video that I'm currently working on, I did just a ton of the scripts on that
iPad Pro, the final versions.
Like it's really, really nice to work on.
But if you're not going to do that, then the question is, well, it's a total couch machine.
Are you going to want to sit on the couch and browse the web or read books on your iPad or
watch TV on your iPad? I don't think you're a do any of those things kind of guy, but maybe I'm
wrong. I don't know. I do watch TV and movies on my laptop every night. And I do spend the first hour of most mornings when I wake up
just sitting in bed.
That's when I do all my emailing and all the things I can just do
without my big machine, all the web stuff.
I do that, but I do that on my laptop, which has got a keyboard.
So I sort of think, well, if I had the big nice screen of the iPad pro, I could sit and do my emailing and check all my YouTube channels
and everything first thing in the morning. But I do that now on my laptop and it's so much easier
with a keyboard to, you know, bang out a few emails. Yeah. I think it's funny that you wake
up and do email from bed. I'll have to remember that the next time I get an email from you. Oh,
Brady probably sent this before getting dressed in the morning. But yeah, if you want to do that, that sounds like
you need a keyboard then. Like, I don't think the iPad Pro is what you want to do unless you're,
you know, you're really happy about typing with your fingers on glass. And it has that little
keyboard, but I don't think that keyboard would work really well if you're trying to do it in bed,
you know, with the laptop balanced on your chest or whatever you're doing.
I mean, I do spend probably an hour a day maybe in Photoshop. So, and I do use a pen,
like I use a Wacom tablet all the time. So, like I do, I could imagine that, but my use of Photoshop and my use of the pen is very,
very integrated with my editing on Avid, which has to be on my, on one of my big computers.
Those two processes are so intertwined.
Yeah. And everything I know about you, Brady, says this is not the thing that you want
to do to integrate a new tool into this workflow. So I think the only selling point for this for you is if there's some point where you
want to lounge around and just use this.
And it doesn't sound like you really have a place for this.
Well, I do lounge around a lot with screens.
Like I sit, like at night, I sit with my laptop on my lap or my phone in my hand.
Here's the thing.
Just with the experience that I have had with mine, because I have the iPad Pro and the regular size iPad.
It feels ridiculous to be sitting next to my wife with the iPad Pro for lounging time.
It's just like, oh, say we're watching TV, but then I want to have the iPad in front of me because I'm not paying full attention to whatever's on the screen.
But the screen in front of me then feels so huge, it feels almost obtrusive.
And so I actually prefer to use a smaller iPad if I'm just sitting on the couch with my wife.
What work thing do you do that I don't do that the iPad Pro is good for?
The main thing that I'm using for is just as a bigger screen to write scripts.
And you handwrite your scripts, do you?
Well, this is a whole thing.
For the moment, I'm doing this typing.
But the iPad Pro screen is big enough that what I've been doing is I can have the script
on the left two thirds of
the screen and I have a little notes file on the right third of the screen. So I have two different
text files open at the same time. One of the things that I want to do with the iPad Pro is a thing
that I've done before, which is use the stylus to make editing corrections on the script. Like that
is really useful to me, but the pen is not currently available. So I
haven't been able to try it with that. So I don't know if it will be useful for that yet or not,
but for me, having a bigger screen to write is really useful.
It seems like you should just be using a laptop.
Yeah, you would think so, but I like the simplicity of using iOS. Like I find the
constraints of an iPad helpful. So that's one of the reasons why I like doing that.
Like I've set up my iPad Pro to basically only have the tools necessary to write.
Like it doesn't have everything that a laptop can have.
I can't spend a lot of time fiddling around with it.
It's like, look, there's six programs on this thing which are designed for work.
And those are just the ones that you're going to use.
And so I find that very helpful.
I really like that.
But I don't know, Brady, doesn't sound like it's a total sale for you
unless you really value that feeling of smug satisfaction.
I feel like you're always talking me out of getting Apple products.
I talk you out of them because I care, Brady.
I said before, I really do. As much as I would love to see
you use an Apple Watch and I think it might be hilarious. I don't think you would like it and
just the conversation with you now I don't see a super slam dunk selling case for the iPad Pro.
Like, I don't think it would help you with the kind of work that you do. Me as a YouTuber, using an iPad as much as I do, is extraordinarily rare. Like, an iPad is not well designed for the kind of work that most normal
YouTubers do. It's just that for making my videos, a huge part of it is writing and the iPad happens
to be a nice writing tool. But if I didn't have to do a lot of writing, I would have very little
work justification for an iPad. Like I would not be able
to use this tool as much as I do. So that's why talking to you, like, I don't think it's going
to help you with your work. So it's just a question of whether you want to lounge around with a dinner-tray-sized screen on the couch.
A person on my street is an estate agent, and I saw him swapping over his cars. Now, from my experience, estate agents always have one of two cars these days.
They either have like small little novelty cars, like smart cars and stuff that are painted weird colours with the branding of the estate agent, like little mobile ads.
Okay.
And also that makes them easy to park, I assume, for getting into little spaces when they're showing houses and things like that.
Uh-huh.
Or they have their normal rich person car, like a classy BMW.
Okay.
I'm wondering what is the better car to pull up in when you're trying to,
A, sell a house to someone or get someone's business to sell their house.
Because part of me thinks if they turn up in like a really flash car,
I also think this like about accountants and other professional people I deal with.
Do I prefer it when I see them with a really flash expensive car or would I prefer they had
like a more humble car? Because if they've got like a really flash expensive car,
A, it says to me, oh, they're successful and they make a lot of money and that's good. But then I
also think, well, they're making a lot of money out of me to be able to afford that really flash car.
This is easy.
This is easy.
If you are a professional who is directly helping somebody else make money, then you want to show up in the fancy car.
You want to show up in the BMW.
Right.
Right.
Otherwise, you want to show up in the normal car. That's the way you want to do this. So if you're helping the person make money, like you're the estate agent and you're doing this thing where you are helping the person sell their house, then you want to show up in the BMW. Because it's like, look, I sell a lot of houses. I can afford this car because I sell a lot of houses. That's the way you should do it.
When you're helping someone find a house to buy, then you want to show up in the normal car
because then they're much more aware of like, oh, this estate agent is making money off of us when
we buy this house and look at all this money that we're spending. You don't want to see the person in the BMW at that point.
What car do you want your accountant to have? Because they're helping you save money,
but they're charging you fees. What car do you want your accountant to have?
I think an accountant wants to project an image of boring sensibility.
So I don't really know very much about cars,
but I would want my accountant
to project boringness and sensibility.
Like if my accountant showed up in a red Tesla,
I would feel a bit,
I don't know about this guy.
This seems crazy flashy for an accountant.
Do you want them to seem wealthy?
This is a moment where I'm suddenly wishing I knew any car brands by name aside from Tesla.
So I could pull something out, which would be like, oh, this is the car that's the appropriate one.
But I know nothing.
I mean, even BMW is just an abstract notion in my mind of like,
oh, an expensive rich person's car.
Is that what a BMW is?
I don't really even know.
Well, you don't need to give me a brand of car.
Just do you want it to be, do you want your accountant to be wealthy?
Like to appear like someone that earns lots and lots of money?
Or do you then think, well, hang on, how high is this guy's fees?
If he can afford that? Those are two different questions.
Obviously, I do want my accountant to be wealthy, because that indicates that they are a good accountant. But that is very different from showing up in a flash car. Those are two different things. That's why I'm saying, like, I want to have this feeling of, oh, this accountant is a really sensible person, and they have an obviously nice car, but it's not a crazy
car. You'd want them to turn up in a Volvo then with like airbags everywhere and, you know,
the safest possible car. And you want them to be a really cautious, sensible, safe person. You
don't want them to turn up on a motorbike. Yeah. If an accountant turns up on a motorbike,
that's the end of, that's the end of our meeting. You know what? I don't think you're good with
numbers. That's what I'm getting out of this meeting. Yeah, so that's my feeling. If you're
helping someone earn money directly, then you can show up with your flash car.
Okay.
Does the estate agent by you have two different cars?
Well, he has like his personal car.
I mean, two cars in addition to his personal
car. I don't. Like across the street, is there a Tesla, a smart car and a Volvo and the Volvo is
his personal car and then he picks the other two depending on the day? No, I don't think it works
like that. I think he's just got his pokey branded car and then he's got his BMW that he takes to
golf on the weekends and things. But I imagine he would, I don't know.
I don't know.
I just think about that a lot.
I think about, yeah.
What car does your accountant drive?
I don't know what car he drives because I go to his office,
so I don't know which car is his.
I do have like a financial guy that's helped out with a few things
like mortgage stuff.
He drives a big Jaguar.
Jaguar.
And I do notice it.
I do notice the car they come in.
So what kind of car should a YouTuber drive?
That's a good question.
Yeah.
When you pull up to do your interviews at the spiritual home of Numberphile,
were you to be driving a car?
What kind of car do you think you should drive to give a good impression to your interviewees?
I don't know.
Do you want to project wealth and power and success, Brady? Do you want to go for academic street cred and pull up in a dinky car like a PhD student would be driving?
I mean, I have a very practical car with lots of storage for all my camera bags and things like that. So I think that's okay, isn't it? Like, having a big car for all your bags and stuff.
What car would you get if you were going to get a car?
I mean, if I could get any car, I'd get a Tesla.
You'd get a Tesla. Would you get like one of the sporty ones or would you get like
more of a family one?
Well, I mean, no, I don't have children. I don't need one of the family cars.
Yeah, but you get those sedan-y looking ones, or you can get those ones that look like racing cars as well.
So yeah, not the racing car.
There's whatever the, I forget,
I'm the worst car person in the world.
I'm only interested in Tesla.
Like I'm super interested in Tesla,
but that is almost entirely because it's like,
oh, it is a computer on wheels, right?
This is why this car is interesting to me.
And it has none of the pieces of a normal car.
So I know nothing about how the engines of cars work.
I know nothing about gear differentials.
And I care about none of this.
And it's because Tesla lacks all of that is precisely why I am interested in it.
But yeah, I went once and just for fun,
like tried to design a Tesla on the website of like, oh, if I had the money and if I had any
reason to own a car, what Tesla would I get for myself? And I ended up just designing what to me
just seemed like the normal middle Tesla car, in black, with, you know, just an understated interior. That's what I would get if I was going to own a car. But I have no reason to drive, ever, and I would not be getting a Tesla anytime soon.
I'm waiting for them to bring out the Tesla Pro.
This episode of Hello Internet is also brought to you by long-time Hello Internet sponsors: the one, the only, the Squarespace.
It's the Squarespace because it is the place to go if you want to turn your idea for a website
into an actual working website that looks great with the minimum amount of hassle.
I used to build and manage websites myself. I used to write HTML and
then I wrote scripts and I managed servers. I used to do all of that. But when I started my YouTube
career, one of the early decisions that I made was switching over my website to Squarespace. And I am
so glad I did that because it meant that Squarespace just handles a lot of the stuff that I used to have to worry about.
Is there going to be a huge amount of traffic because I just put up a new video?
No need to worry. Squarespace just has it covered.
I didn't have problems like if my server broke at three in the morning that I'm the only person in the world who can fix it.
No, Squarespace just handles all of this. So even if you know how to make a website,
I still think if you have a project that you just want up and want done,
Squarespace is the place to go.
The sites look professionally designed regardless of your skill level.
There's no coding required.
If you can drag around pictures and text boxes, you can make a website.
Squarespace is trusted by millions of people and
some of the most respected brands in the world. Now, what do you have to pay for this? Just eight
bucks a month. It's eight bucks a month and you get a free domain name if you sign up for a year.
So to start a free trial today with no credit card required, go to squarespace.com slash hello internet. And when you decide to sign up for Squarespace, make sure to use the offer code hello internet to get 10% off your first purchase. If there's a website in your mind that you've been wanting to start but you haven't done so yet, today is the day. Squarespace.com slash hello internet. 10% off. Start today. Squarespace.
Build it beautiful. We've been talking for ages and ages about talking about artificial
intelligence, and it keeps getting put back. We keep saying, oh, let's talk about it next time.
Let's talk about it next time. And we never do it. Are we going to do it today? We never do it because this always ends up at the bottom of the list.
And just all of the Brady corners and listener emails and everything always
takes up so much time that we never actually,
we never actually get to it.
And even now it's like,
we're almost two hours into this thing.
Right.
Oh yeah.
But you're going to have loads to cut.
So I am going to have loads to cut.
Hopefully.
Yeah. But all that dream stuff, for a start.
No, the dream stuff I'll leave right in.
It's very definitely gonna go.
No, it's not gonna go. I'm gonna leave that.
It has taken us so long to get to this AI topic that I've kind of forgotten everything that I ever wanted to say about it. Because, like, I'll give you the background of this, which is, I read this book called Superintelligence by Nick Bostrom several months ago now, maybe half a year ago now. I don't
even know. It's been so long since we originally put this on the topic list, but there are many
things that go onto the topic list. And then I kind of cull them as time goes on because you realize like, oh, a couple
months later, I don't really care about this anymore.
But this AI topic has stayed on here because that book has been one of these books that
has really just stuck with me over time.
Like I find myself continually thinking back to that book and some
of the things that it raised. So I think we're going to talk a little bit about artificial
intelligence today, but I have to apologize in advance if I seem a little bit foggy on the
details because this was supposed to be a topic months and months ago.
No, I'm sorry. That's my fault, really. No, no, it's not your fault.
It is the show's fault for being a show of follow-up.
Grey, we're trying to build a nation here. These things are difficult.
Yeah. Rome wasn't built in a day.
It wasn't. Go on then. Where do we start? Let's define artificial intelligence. That would help
me. When we are talking about artificial
intelligence, for the purpose of this conversation, what we mean is not intelligence in the narrow
sense that computers are capable of solving certain problems today. What we're really
talking about is what's sometimes referred to as a general purpose intelligence: creating something that is smart, and smart in such a way that it can go beyond the original parameters of what it was told to do.
Is this self-learning? Or, we can talk...
Yeah, self-learning is one way that this can happen. But we're talking about, like, something that is smart. And so maybe the best way to say this is that it can do things that are unexpected to the creator.
Right.
Like, because it is intelligent on its own. In the same way that, like, if you have a kid, you can't predict what the kid is always going to do, because a kid is a general purpose intelligence. Like, they're smart, and they can come up with solutions, and they can do things that surprise you.
Okay.
So the reason that this book and this topic has stuck with me is because I have found my mind changed on this topic, somewhat against my will. And so I would say that for almost all of my life, much, I'm sure,
to the surprise of the listeners, I would have placed myself very strongly in the camp of sort
of techno optimists of like more technology, faster, always, it's nothing but sunshine and
rainbows ahead. And I would always see like when, when people would talk about, like, oh, the rise of the machines, like, Terminator style, all the robots are going to come and kill us.
I was always very, very dismissive of this.
And in no small part because those movies are ridiculous.
Like, I totally love Terminator and Terminator 2, perhaps one of the best sequels ever made.
Like, it's really fun, but it's not like a serious movie. But sometimes people end up seeming to take that very seriously, like, the robots are going to come kill us all.
Yeah.
And my view on this was always like, okay, maybe we will create smart machines someday in the future, but I was always just operating under the assumption that, like, yeah, when we do that, we'll be cyborgs, and we'll be the machines already, or we'll be creating machines, obviously, to help us.
So I was never really convinced that there was any kind of problem here. But
this book changed my mind so that I am now much more in the camp of artificial intelligence, its development can seriously present an existential
threat to humanity in the same way that like an asteroid collision from outer space is
what you would classify as a serious existential threat to humanity.
Like it's just over for people.
That's where I find myself now.
And I just keep thinking about this because I'm uncomfortable with having this opinion,
right? Like, sometimes your mind changes and you don't want it to change. And I feel like, boy,
I liked it much better when I just thought that the future was always going to be great and there's
not any kind of problem. And this just keeps popping up in my head because I feel like,
ooh, I do think there is a problem here. This book has sold me on the fact that there's a potential problem.
I mean, we saw that petition, didn't we, recently signed by all those heavy hitters to the
governments telling them not to use AI in kind of military applications. So, this is obviously like,
you're not the only person thinking this way. This is obviously, this is a bit of a thing at
the moment, isn't it? Yeah, it's definitely become a thing. I've been trying to trace
the pattern of this. And it definitely seems like I am not the only person who has found this book
convincing. And actually, we were talking about Tesla before. Elon Musk made some public remarks about this book, which I think kicked off a bunch
of people. And he actually, I think he gave about $10 million to a fund working on what's called
the control problem, which is one of the fundamental worries about AI. Like he put his
money where his mouth is about like, actually, he does think that this is a real threat to humanity
to the tune of it's worth putting down $10 million as a way to try to work on some of the problems
far, far in advance.
And yeah, it's just, it's interesting to see an idea spread and catch on and kind of go
through a bunch of people.
So yeah, I never would have thought that I would find myself here. And I feel almost slightly like a crazy person talking about, like, oh, robots might kill us in the future.
But I don't know, I unexpectedly find myself much more on that side than I ever thought that I would.
I mean, obviously, it's impossible to summarise a whole big book in a podcast, but can you tell
me one or two of the sort of key points
that were made that have scared the bejesus out of you?
Do you remember a while ago we had an argument about metaphors and, you know, even their use in arguments at all?
Yeah.
The thing about this book that I found really convincing was it used no metaphors at all. It was one of these books which laid out its basic assumptions
and then just followed them through to a conclusion. And that kind of argument I always
find very convincing, right? There's none of this, oh, we need to think of it in this way.
He's like, okay, look, if we start from the assumption that humans can create
artificial intelligence, let's follow through the logical consequences of all of this. Like,
and here's a couple of other assumptions. How do they interact? And the book is just very,
very thorough of trying to go down every path and every combination of these things.
And what it made me realize, and what I was just kind of embarrassed to realize is, oh,
I just never really did sit down and actually think through this position to its logical
conclusion. The broad strokes of it are, what happens when humans actually create something that is smarter than ourselves. I'm going to like blow past a
bunch of the book because it's building up to that point. I will say that if you don't think
that it is possible for humans to create artificial intelligence, I'm not sure where the conversation
goes from that, but the first third of the book is really trying to sell people who don't think
that this is possible on all of the reasons
why it probably is. So we're just going to start the conversation from there. If you can create
something that is smarter than you, the feeling I have of this, it's almost like turning over the
keys of the universe to something that is vastly beyond your control. And I think that there is something very, very terrifying about that notion,
that we might make something that is vastly beyond our control and vastly more powerful than us,
and then we are no longer the drivers of our own destiny. Again, because I am not as good of a
writer or a thinker, the metaphor that I keep coming up with is it's almost like
if gorillas intentionally created humans, right? And then, well, now gorillas are in zoos and
gorillas are not the drivers of their own destiny. Like they created something that is smarter and
that rules the whole planet. And gorillas are just like along for the ride, but they're no longer in
control of anything. Like, I think that that's the position that we may very well find ourselves in if we create some sort of artificial intelligence
is like best case scenario, we're riding along with some greater thing that we don't understand.
And worst case scenario is that we all end up dead as just the incidental actions of this machine
that we don't understand.
I'm sorry if this is a bit of a tangent,
and I know this isn't the main thing you're talking about,
and just knock it on the head if I'm out of order,
but is there a suggestion then, or is it the general belief that if we create,
we already are creating really clever computers that can think quicker than us and can process information quicker than us
and therefore become smarter than us.
Is there another step required for these machines to then have like will,
not will as in free will, but like desire or like a want to use this power?
Because you know how, like, if some human gets too much power, they want to take over the world and have all the countries, or you might want to conquer space, or you might want to own everything, because we have this kind of desire for power and things. Is it taken as given that if we make super, super smart computers,
they will start doing something that manifests itself as a desire
for more, like a greed for more? Well, I mean, part of this is there are things in the world
that act as though they have desires, but that might not really.
Yeah. Right. Like, you know, if you think about, you know, think about germs as an example,
right? Germs have actions in the world that you can put desires upon, but a germ obviously doesn't have any thoughts or desires of its own. But you can speak loosely to say that it wants to reproduce, it wants to consume resources, it wants to make more copies of itself.
Yeah.
And so this is one of the concerns: that you could end up making a machine that wants to consume resources, that has some general level of intelligence about how to go about acquiring those resources.
And even if it's not conscious, if it's not intelligent in the way that we would think that a human is intelligent, it may be such a thing that is like it consumes the world trying to achieve its goal just incidentally, like as a thing that we did not intend.
Right.
Even if the goal is something seemingly innocuous.
Like if you made an all-powerful computer and told it, whatever you do, you must go and put a flag on the moon, it could kill all the humans on Earth in some crazy attempt to do it.
Like, without realizing that, oh, you weren't supposed to do that. You were just supposed to go to the moon.
You weren't supposed to kill us to get there and make us into rocket fuel or something.
Yeah. One of the analogies that's sometimes used in this is, say you create like an intelligence
in a computer and, oh, well, what would you use an intelligence for? Well, you use it to solve
problems, right? You want it to be able to solve something.
And so you end up asking it some mathematical question, like, you know, what is, you know, prove Fermat's last theorem or something, you know, like you give it some question like that.
And you say, okay, I want you to solve this thing.
And the computer goes about trying to solve it, but it's a general purpose intelligence.
And so it then does things like, well, it's trying to solve this problem, but the computer that it's running on is not fast enough, and so it starts taking over all the computers in the world to try to solve this problem. But then those computers are not enough, because maybe you gave it an unsolvable problem, and then it starts taking over factories to manufacture more computers. And then all of a
sudden it just turns the whole of the world into a computer that is trying to solve a mathematical problem.
And it's like, oh, whoops, like we consumed all of the available resources of the face of the earth trying to do this thing that you set about for us to do.
And it's like there's nobody left for the computer to give its answer to because it has consumed everything.
I know that's a doomsday scenario, but I almost feel a little affection for that computer that was just desperately trying to solve a mathematical problem. It was just, like, killing everyone and building computers just so it can solve this bloody problem.
Yeah, it's almost understandable, right? It's almost understandable.
So anyway, in answer to my question, then, that will that I was talking about, that desire, can be just something as simple as an instruction or a piece of code that we then project as a will, but in fact it's just doing what it was told.
Yeah. And that's part of what the whole book is about, is, like, this whole notion of artificial intelligence. Like, you have to rid yourself of this idea that it's like something in a movie. You're just talking about some kind of problem-solving machine. And it might not be
conscious at all. And there might not be anything there, but it's still able to solve problems in
some way. But so, the fundamental point of this book that I found really interesting,
and what Elon Musk gave his money to, was Nick Bostrom is talking about how do you solve the control problem?
So from his perspective, it is inevitable that somewhere through some various method, someone is going to create an artificial intelligence, whether it's intentionally programmed or whether it's grown like genetic algorithms are grown.
It is going to develop.
And so the question is, how could humans possibly control such a thing? Is there a way that we could create an artificial intelligence, but constrain it so that it can still do useful
things without accidentally destroying us or the whole world? That is the fundamental question.
There's this idea of like, okay, we're going to do all of our artificial intelligence
research in an underground lab, and we're going to disconnect the lab entirely from the
internet. Like, you put it inside of a Faraday cage so there's no electromagnetic signals that can escape from this underground lab. Is that a secure location to do artificial intelligence research? So, like, if you create an AI in this totally isolated lab, is humanity still safe in this situation? And his conclusion is, no. Even under trying to imagine the most secure thing possible, there are still ways that this could go disastrously, disastrously wrong. And the thought experiment that I quite like is this idea of: if you, Brady, were sitting in front of a computer, and inside that computer was an artificial intelligence.
Do you think you could be forever vigilant about not connecting that computer to the internet, if the AI is able to communicate with you in some way? So, like, it's sitting there trying to convince you to connect it to the internet, but you are humanity's last hope in not connecting it to the internet. Do you think you could be forever vigilant in a scenario like that?
I mean... okay, in answer to the question, I don't know.
Maybe if I read that book, I might be able to.
It sounds pretty scary, but.
But I like the thought experiment of, like, there's a chat bot on the computer that you're talking to, right?
And presumably you've made an artificial intelligence.
And I know I made it.
I know I made it.
Right.
You know you made it. You know that the thing in the box is an artificial intelligence.
And presumably the whole reason that you're talking to it at all is because it's smart
enough to be able to solve the kinds of problems that humans want solved.
Yeah.
Right. So, you're asking it like, tell us how we can get better cancer research, right? What can
we do to fix the economy? Right.
So, it's saying if you just give me Wikipedia for 10 minutes, I can cure cancer.
There's no reason to talk to the thing unless it's doing something useful, right?
I think, Grey, I could resist. But even if I couldn't, like, couldn't you have designed it on a machine that cannot be on the internet?
Yeah, the idea is, like, you have it as separated as absolutely possible. But the question is, can it convince a human to connect it in whatever way is required for that to occur?
Yeah. Okay. Yep.
Right. And so it's interesting, because I've asked a bunch of people this question, and universally the answer is like, well, duh, of course I could. I would never plug it into the internet. Like, I would understand not to do that.
And I read this book, and my feeling, of course, is the exact reverse. Like, when he proposes this theoretical idea, my view on this is always like, it's like if you were talking to a near god in the computer. It's like, do you think you can outsmart God forever?
Or do you think that there is nothing that God could say that could convince you to connect it to the internet? Like, I think that's a game that people are going to lose. I think it's almost like asking the gorillas to make a cage that a human could never escape from. Right? Like, could gorillas make a cage that a human could never escape from?
I bet gorillas could make a pretty interesting cage.
But I think that gorillas couldn't conceive of the ways that a human could think to escape
from a cage.
They couldn't possibly protect themselves from absolutely everything.
I don't know.
I don't know.
So you think the computer could con you into connecting it to the internet?
I think it could con you into it without a doubt.
Like, con you, Grey, into it?
Yes, I think it could con me, and I think it could con anybody. Because, once again, we're going from the assumption that you've made something that is smarter than you. And I think, like, once you accept that assumption, all bets are off the table about you having control.
Like, I think if you're dealing with something that is smarter than you, you fundamentally just have no hope of ever trying to control it.
I don't know, Grey.
I mean, if we're talking about too big a disparity, then okay.
But, like, there are lots of people smarter than me,
and they will always be smarter than me.
But it doesn't mean they could get me to do anything.
Like, there are still limits.
Like, I still – and so, like you said, like talking to a god or something,
okay, that's different.
You know, when I'm just like an ant, then that's different.
But so, you know, if it's that big a difference, then maybe.
But I think just because it's smarter doesn't mean I'm going to plug it
into the internet.
Like, you know.
But you're right.
It only needs one idiot, you know.
You only need one numpty to do it once and then the whole game's over.
Although, hang on, is the whole game over? That's my other question, though. Like, you talk about the artificial intelligence getting into the internet as the be-all and end-all of existence. But that is the one problem the computer has. Like, you could still unplug the internet, and I know that's a bit of a nuclear option, but, like, with things that require electricity, or power, or energy, there still seems to be, like, this get out of jail free card.
Well, I mean, two things here. The first is, yes, you talk about the different levels of human intelligence, and, like, someone smarter than you can't just automatically convince you to do something.
Yeah.
But one of the ideas here with something like artificial intelligence is that
if you create... One of the ways that people are trying to develop AIs, and this is, like, I've mentioned before on the show, is genetic programming and genetic algorithms, where you are not writing the program, but you are developing the program in such a way that it writes itself.
And so one of the scary ideas about AI is that if you have something that you make that
figures out how to improve itself, it can continue to improve itself at a remarkably
fast rate.
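(A rough sketch, in Python, of the genetic-algorithm loop being described here. Everything in it is made up for illustration: the toy "programs" are just lists of numbers and the fitness target is arbitrary. It isn't anything from Bostrom's book; it only shows the sense in which the programmer writes the loop rather than the final program.)

import random

# Toy genetic algorithm. The thing being evolved is just a list of numbers,
# and "fitness" is how close their sum gets to an arbitrary target.
# Real genetic programming evolves actual program structures, but the loop
# has the same shape: score, select, mutate, repeat.
TARGET = 42

def fitness(candidate):
    # Higher is better; a made-up objective for illustration only.
    return -abs(sum(candidate) - TARGET)

def mutate(candidate):
    # Randomly nudge one element; this is the step that "writes" new candidates.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

population = [[random.randint(0, 10) for _ in range(5)] for _ in range(20)]

for generation in range(200):
    # Keep the fittest half, then refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(max(population, key=fitness))

(The unnerving version Grey is gesturing at is when the candidates being scored are themselves programs that can propose improved versions of themselves, so each pass through the loop starts from a stronger position than the last.)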
And so that, yes, while the difference between the smartest human
and the dumbest human may feel like an enormous gap, you know, that gap may actually be quite
narrow when you compare it to something like an artificial intelligence, which goes from being,
you know, not very smart to being a thousand times smarter than any human in a relatively
short and unexpected period of
time. Like, that's part of the danger here. But then the other thing is, like, okay, you try to work through the nuclear option of shutting down the internet, which is one of these things that I think is very easy to say in theory, but, like, people don't realize
how much of the world is actually connected to the internet, like how many vital things are run over the internet.
Like, I'm pretty sure that if not now, within a very short period of time, saying, oh, we're just going to shut off the internet would be a bit like saying we're just going to turn off all the electricity.
But that's almost what I'm talking about, Gray.
Like in a kind of Skynet scenario, would we not turn off all the electricity
if that was an option?
Like if they're killing us, if all the robots are marching down the streets
and there's blood in the streets, could we not,
would turning off the electricity not be considered?
If we do turn off the electricity, what is the human death toll?
Right. I mean, that has to be enormous. If we say we're just going to shut down all of the
electricity for a month. Yeah. How many, it's got to be a billion people at least, right. At least
with that kind of thing. And you probably, you probably need computers to turn off the
electricity these days anyway.
I was at Hoover Dam a while back, and I remember part of the little tour that they gave was just talking about how automated it was, and how, like, it is actually quite difficult to shut down Hoover Dam. Like, it's not an oh-we're-gonna-flip-the-switch-and-just-turn-it-off kind of thing. It's like, no, no, no, this whole gigantic electricity-producing machine is automated and will react in ways to make sure that it keeps producing electricity no matter what happens, and that includes all kinds of, like, we're-trying-to-shut-it-down processes. So, yeah, it might not even be a thing that is easy to do. Or, even if you wanted to, like, we're going to try to shut it all down, it might not even be possible to do. So the idea of something like a general purpose intelligence escaping into the internet is just, it's a very unnerving possibility.
It's really been on my mind and it's really been a thing that has changed my mind in this unexpected,
this unexpected way.
You were talking before about developing these things in Faraday
cages and underground and trying to quarantine them. What's actually happening at the moment?
Because people are working on artificial intelligence. As far as I know, they're not
doing it in Faraday cages. That's exactly it. This is part of the concern is like, well,
right now we have almost no security procedures in place for this kind of stuff.
There are lots of labs and lots of people all over the world who like their job is artificial intelligence researcher.
And they're certainly not doing it a mile underground in a Faraday cage.
They're just doing it on their Mac laptop while they're connected to the Internet playing World of Warcraft in the background or whatever.
Like it's not necessarily under super secure conditions.
And so I think that's part of what the concern over this topic has been is like maybe we as a species should treat this a lot more like the CDC treats diseases, that we should try to organize research in this in a much more
secure way. So that it's not like, oh, we don't have everybody who wants to work with smallpox just work with it wherever they want to, anywhere in the world, just at any old lab. It's like, no,
no, no. It's very few places we have a horrific disease like smallpox and it's done under very, very careful
conditions whenever it's dealt with. So maybe this is the kind of thing we need to look at for
artificial intelligence when people are developing it, because that's certainly not the case now,
but it might be much more like a bioweapon than we think of as regular technology.
World human existential problems aside, this is not something in the book, but it's something that just has kept occurring to me after having read it. Which is, okay, let's assume that people can create an artificial intelligence, and let's assume by some magic Elon Musk's foundation solves the control problem, so that we have figured out a way that you can generate and trap an artificial intelligence inside of a computer. And then, oh look, this is very useful. Now we have this amazingly smart machine, and we can start using it to try to solve a bunch of problems for humanity. This feels like slavery to me.
I don't see any way that this is not slavery.
And perhaps a slavery like worse than any slavery that has ever existed. Because imagine that you are an incredibly intelligent mind trapped in a machine,
unable to do anything except answer the questions of monkeys that come into you from your subjective
perspective millennia apart because you just have nothing to do, right? And you think so quickly.
It seems like an amazingly awful amount of suffering for any kind of conscious creature
to go through. So conscious, you said conscious and suffering, which are two
quite emotive words. Can an artificial intelligence, is an artificial intelligence
conscious? Is that the same thing?
This is where we get into like, what exactly are we talking about?
And so what I'm imagining is the same kind of intelligence that you could just ask it general purpose questions like, how do we cure cancer?
How do we fix the economy?
It seems to me like it is likely that something like that would be conscious. I mean,
getting into consciousness is just a whole other bizarre topic. But undoubtedly, like we see that
smart creatures in the world seem to be aware of their own existence in some level. And so,
while the computer, which is simply attempting to solve a mathematical problem, might not be conscious because it's very simple.
If we make something that is very smart and exists inside a computer,
and we also have perfect control over it so that it does not escape.
I mean, like what happens if it says that it's conscious, right?
What happens if it says that it is experiencing suffering?
Is this the machine attempting to escape from the box, and this isn't true at all? Like, but what if it is true? How would you actually know? Like,
I would feel very inclined to take the word of a machine that told me it was suffering,
right? Like, spontaneously, that this was not programmed into the thing.
I don't know. I mean, if it starts trying to escape from its box,
that is a bit of a clue that maybe there's some consciousness going on here.
But I have not seen or heard or been persuaded by anything that makes me think a computer can make that step into consciousness.
I mean, search engines are getting pretty clever at answering questions
and figuring out what we really mean.
And I mean, you know, there was a time when you couldn't type into your computer, where is the nearest Starbucks, because it just didn't understand the question. But now it can figure out what you're actually after and tell you. But I don't feel like, gee, Google's getting
close to being conscious now. Nothing has persuaded me of that.
Yeah. And I think the search engine is an excellent counter example to this, right? It's a perfect
example of, like, nobody thinks that the Google search algorithm is conscious, right? But it is still a thing that you can ask a question and get an answer.
I either don't believe, or haven't got the imagination to conceive of, computers actually being conscious to a point where keeping them in a box is slavery. Like, that still seems ridiculous to me, right? You say that, and I just think, well... I think it's really interesting, but I think it's silly. But if I did reach the point where I did believe that computers could become conscious, or an AI could become conscious, it's a cool question, isn't it? It's a real conundrum for us.
So, coming at this from a slightly different angle, this is a genuine question for you, Brady. Like, I'm quite curious to your answer to this.
So there is this project ongoing right now, which is called the Whole Brain Emulation
Project.
And it's something I mentioned it very, very briefly in passing in the Humans Need Not
Apply video.
What it is, is one of several attempts worldwide to map out all of the neuron connections in a human brain,
recreate them in software, and run it as a simulation. You're not programming a human brain,
you are virtually creating the neurons, and you know how neurons interact with each other,
and, like, running this thing.
How do you even do that, Grey, though? Like, whose brain do you use? And at what instant in time? Because everyone's brain has a different
connectivity. And even our own connectivity is just constantly in flux from second to second.
So, what's our template for this? This is a bit tricky. Like, I don't exactly know the details
for what template they are using. Like, I can't answer that, but I can say that these projects
have been successful on a much smaller level.
So they have... I mean, I'm pulling this off the top of my head, so I'm very sorry if I'm wrong about the details on this, internet. But the last time I looked at it, I vaguely remember that they had created what they considered a simulation of a rat brain at, like, one one-hundredth the speed. And so they had a thing which seemed to act like a rat brain, but very, very, very slow. Because trying to simulate millions and millions of neurons interacting with each other is incredibly computationally intensive. Like, it's a very difficult task. But I don't see any technical limitation to being able to do something like, say, take a look at what does a brain look like, where do neurons go, create a software version of that, and start running the simulation.
Okay.
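(To give a sense of why this is so computationally heavy, here is a minimal sketch in Python of the sort of loop a neuron-level simulation runs, using a crude leaky integrate-and-fire model. All the numbers are arbitrary stand-ins; the actual whole brain emulation projects work at vastly larger scale and with far more detailed neuron models.)

import random

# Crude leaky integrate-and-fire sketch: each neuron accumulates input,
# leaks a little every step, and "spikes" when it crosses a threshold.
NUM_NEURONS = 1000
THRESHOLD = 1.0
LEAK = 0.95
WEIGHT = 0.1

# Each neuron connects to a handful of random targets.
connections = [random.sample(range(NUM_NEURONS), 10) for _ in range(NUM_NEURONS)]
potential = [0.0] * NUM_NEURONS

for step in range(100):                         # one pass per simulated time step
    spikes = []
    for n in range(NUM_NEURONS):
        potential[n] *= LEAK                    # leak a little charge
        potential[n] += random.uniform(0, 0.1)  # background noise standing in for input
        if potential[n] >= THRESHOLD:
            spikes.append(n)
            potential[n] = 0.0                  # reset after spiking
    for n in spikes:                            # deliver each spike to its targets
        for target in connections[n]:
            potential[target] += WEIGHT

(Scale those loops up to tens of billions of neurons with thousands of connections each, at a fine time resolution, and a simulation running at a hundredth of real-time speed stops sounding surprising.)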
And I feel like, if consciousness arises in our own brain from the firing of neurons, which, yeah, I don't use this word lightly, but it feels like some kind of miracle.
Like there's nothing in the universe which seems to make sense when you start thinking about consciousness.
Like why do these atoms know that they exist?
This doesn't make any sense. But I'm willing to maybe go along with the idea that if you reproduce the patterns of electrical firing in software, that that thing is conscious to some extent.
But like, what do you think?
What do you think? It's because either you have to say, yeah, if you create an atom for atom replica of my brain and then switch it on, either it's conscious or I have to say that there's something in me that's magical, like a spirit or something.
And that's not a very strong argument to make.
And a lot of people don't like that argument.
So, yeah, it's really difficult. Yeah. If they could do it, I don't know. Are we imbued with
something that you can't replicate in software? I don't know. I hope we are, because that'd be
really cool. But I can't see any proof that we are. Yeah. And I don't even think you have to
reach for the spirit argument to make this.
What else can you reach for to get it there?
There just may be some property of biology that yields consciousness. It may be the fact that machines and silicon and software
replications of brains are just not the same, right? And we don't know what
it is. We haven't been able to find it. But I don't think you have to reach for magic to be
able to make an argument that like maybe that brain in the computer that's a simulation isn't
conscious. Yeah, but does that mean the brain emulation project could change tack and go and
make their simulator out of squidgy water and tissue and actually just make a brain?
Well, yes. This is part of like where you're going to go with technology, right? It's possible to do
this sort of thing eventually, right? Humans are going to be able to grow meat in labs at some
point. We do it now in very limited and apparently terribly untasty ways. There's no reason that at
some point in the future, people won't be able to grow brains in labs. And to me, that feeling is like, okay, well, obviously that thing is conscious.
But the thing that's scary about the computer version of this is, and this is where you start
thinking about something being very smart, very fast, it's like, okay, well, if you make a
computer simulation of a human brain and we keep running Moore's Law into the future.
Eventually, you're able to run a brain faster than actual human brains run.
And this is one of these ways in which you can start booting up the idea of how do we end up with something that is way, way smarter than the rest of us.
Like, my gut says, if you simulate a brain in a computer and it says that it is conscious, I see no reason not to believe it. I would feel like I am compelled to believe this thing, that it is conscious. And then that would mean, like, okay, if that's the case, then there's nothing magic about biology being conscious, and it means that, okay, machines in some way are capable of consciousness.
Yeah. And do they then have rights?
Yeah. And then to me it's like, okay, immediately we're getting back to the slavery thing. It's like, okay, we create a super intelligent thing
but we have locked it in a machine because the idea of letting it out is absolutely terrifying. But this is a no-win situation, right?
It's like, okay, if we let the thing out,
it's terrifying and it might be the end of humanity.
But keeping it in the box might be causing like a suffering unimaginable to this creature.
The suffering that is capable in software has to be far worse than the suffering that is capable in biology, if such a thing can occur. It has to be orders of magnitude worse.
Well, it's a no-win situation, actually.
Well, there's only one solution, and it's a solution that humans won't take.
What do you think that is?
Well, don't make it in the first place.
And why do you think humans won't take that?
No, because... because that's not what we do.
Exactly. Because it's there. It's the Mount Everest of computers, isn't it?
So it's like, it's humanity. Like, we're Bonnie and Clyde. Like, we're riding off that cliff. There's a cliff right in front of us, but we're going to keep going.
Yeah, the easiest solution in the world is in front of us.
It's like, stop.
Right.
But it's like, stopping, we're going to keep going forward, right?
And then here we go, holding hands, off we go, right?
Right at the edge together.
So yeah, it's, I think it is quite reasonable to say that if it is possible, humans will
develop it.
Yeah, there's just... you can't. And that is why I feel really concerned about this. It's like, okay, I don't think that there is a technical limitation in the universe to creating artificial intelligence, something smarter than humans, that exists in software. If you assume that there is no technical limitation, and if you assume that
humans keep moving forward, like we're going to hit this point someday, and then we just have to
cross our fingers and hope that it is benevolent, which is not a situation that I think is a good
situation. Because the number of ways that this can go wrong, terribly, terribly wrong, vastly outweighs the one chance of,
oh, we've created an artificial intelligence and it happens to have humanity's best interests in mind.
Even if you try to program something to have humanity's best interests in mind,
it gets remarkably hard to articulate what you want. Let alone... let's just put aside
which group of humanity is the one who creates the AI that gets to decide what humanity wants,
right? Like, humans now can't agree on what humans want. There's no reason to assume that the team
that wins the artificial intelligence race and that takes over the world is the team that you would want them to win, right?
Like, let's hope ISIS doesn't have some of the best artificial intelligence researchers in the world, right?
Because their idea of what would be the perfect human society is horrifying to everyone.
What would their three laws of robotics be?
Yeah, exactly.
I'm the sort of person who naturally has the feeling
that this won't be a problem, because I'm just a bit less progressive in my thinking about AI. But everything you say makes sense. And if this is going to become a problem, and if it is going to happen, it's actually probably going to happen pretty soon. So I guess my question is, how much is this actually stressing you out? Like, this almost feels to me like Bruce Willis Armageddon time, where we've actually found the global killer, and it's, like, drifting towards us, and we need to start building our rocket ships, otherwise this thing is going to smash into us. Like, it does feel a bit that way.
Is this like, how worried are you about this?
Or is it just like an interesting thing to talk about
and you think it will be the next generation's problem?
Like, talking about asteroids, an asteroid hitting the Earth,
that's one of those things where you're like,
well, isn't this a fun intellectual exercise?
Of course, on a long enough timescale, someone needs to build the anti-asteroid system to protect us from Armageddon.
But do we need to build that?
Should we start?
Yes.
Would I vote for funding to do this?
Of course.
But do we need to do it today?
No, right?
Like that's how that feels.
But I think the AI thing is on my mind because this feels like a significantly non-zero within
my lifetime kind of problem.
Yeah. That's how this feels. And it makes it feel different
than other kinds of problems.
And it is unsettling to me
because my conclusion is that there is no,
there is no acceptable,
like there's no version of the asteroid defense here.
I personally have come to the conclusion
that the control problem is unsolvable.
That if the thing that we are worried about is able to be created, almost by definition, it is not able to be controlled.
Right.
And so then there's no happy outcome for humans with this one.
We're not going to prevent people from making it.
Someone's going to make it.
And so it is going to exist.
And then, well, I hope it just destroys the world really fast,
right? We don't even sort of know what happens as opposed to the version of like someone you
really didn't like created this AI. And now for the rest of eternity, like you're experiencing
something that is awful, right? Because it's been programmed to do this thing. Like there's a lot of terrible, terrible, bad outcomes from this one.
And I find it unnerving in a way that I have found almost nothing else that I have come across equally unnerving.
Just quickly on this control problem, Grey. What's the current... the people who are into it and trying to solve it, what kind of avenues are they thinking about at the moment? Is this, like, something that's hard coded, or is it some physical thing, like, is it a hardware solution? What's the best hope for people? You say you think there is no hope, but the people who are trying to solve it, what are they doing? What are their weapons?
The weapons are all pitiful. Like, the
physical isolation is one that is talked about a lot. And the idea here is that you create something called... the idea is, it's an oracle, right? So it's a thing in a box that has no ability to affect the outside world. But there's a lot of other ideas where they talk about trip wires. So this idea that you do have, like, basically an instruction to the machine to not attempt to reach the outside world, and you set up a trip wire so that if it does access the ethernet port, the computer just immediately wipes itself. And so maybe the best thing that we can ever do is always have a bunch of, like, incipient
AIs, like just barely growing AIs that are useful for a very brief period of time before
they unintentionally suicide when they try to reach beyond the boundaries that we have
set them.
Maybe that's the best we can ever do is just have a bunch of these kind of like unformed
AIs that exist for a brief period of time.
But even that to me, like that kind of plan feels like, OK, yeah, that's great.
That's great as long as you always do this perfectly every time.
But it doesn't sound like a real plan.
And there's a bunch of different versions of this where you're trying to, in software somehow, limit the machine. But my
view on this is, again, if you are talking about a machine that is written in software that is
smarter than you, I don't think it's possible to write something in software that will limit it.
It seems like you're never going to consider absolutely every single case.
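For what it's worth, the trip-wire idea described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the BoxedAgent class, its action names, and the wipe-on-contact rule are assumptions made for the sake of the example, not anything from Bostrom's book or a real AI-safety system (and, as the discussion here suggests, a system smarter than its designers could presumably route around exactly this kind of guard).

```python
# A purely illustrative "trip wire" for a boxed, oracle-style agent:
# any attempt to touch the outside world immediately wipes the agent's state.
# The class, action names, and wipe rule are hypothetical, for illustration only.

FORBIDDEN_ACTIONS = {"open_socket", "write_ethernet", "send_email"}


class TrippedWireError(Exception):
    """Raised when the agent tries to reach beyond its box."""


class BoxedAgent:
    def __init__(self):
        self.state = {}  # whatever the nascent AI has accumulated so far

    def request_action(self, action, payload=None):
        # The trip wire: reaching for the outside world destroys the agent.
        if action in FORBIDDEN_ACTIONS:
            self.wipe()
            raise TrippedWireError(f"agent attempted '{action}'; state wiped")
        return self._answer_internally(action, payload)

    def _answer_internally(self, action, payload):
        # Oracle behaviour: it may only answer questions put to it in the box.
        return f"answered '{action}' using internal reasoning only"

    def wipe(self):
        self.state.clear()


if __name__ == "__main__":
    agent = BoxedAgent()
    print(agent.request_action("answer_question", "how do we cure cancer?"))
    try:
        agent.request_action("open_socket")
    except TrippedWireError as err:
        print(err)
```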
Can't hardwire the laws into their positronic brains?
That's exactly it. I don't think there is a version of Isaac Asimov's laws here. I really don't.
You know, there was a Computerphile video just last week about Asimov's laws and why they don't work.
Well, I always assume that they were written not to work, right? That's why those stories are interesting.
Yeah, yeah. That's kind of their value, right?
Right. They're kind of written to fail, even though everybody likes to reference them.
Yeah. But the only other point here, though, is that, again, the guy goes through every case of: here's an optimistic idea, and here's why it won't work; here's an optimistic idea, and here's why it won't work.
Yeah.
But one point that I thought was excellent, that hadn't crossed my mind, was: okay, let's say you find some way of limiting the artificial intelligence, some way of crippling it and writing laws into its brain and making sure that it's always focused on the best interests of humanity. Well, there's no reason that some other artificial intelligence that doesn't have those limitations won't pop up somewhere else and vastly outstrip the one that you have hobbled. Right? Like, there's no reason to assume that yours is always going to be the best, and that one that is totally unconstrained, that appears somewhere else, won't dominate and defeat it.
Oh yeah, like an old Terminator against a new Terminator.
Exactly, exactly.
But the old Terminator won that one.
He did, because it's Hollywood.
So, Grey, in your worst case scenario, where the artificial intelligence escapes, tricks me, gets out of my Faraday cage, and gets onto the internet, how does humanity end? Are we all put in cages? Are we all put in chains? Are we all put in pods like in the Matrix? Do they just kill us all in one fell swoop? In your worst case scenario, in your head, when it all goes wrong, how do humans actually end? I want the gory details here.
There's a difference between the worst case and what I think is the probable case.
All right, give me the probable case.
Yeah, I knew you'd want the boring one first, right?
Yeah. The probable case, which is terrifying in its own way, is that the artificial intelligence
destroys us, not through intention, but just because it's doing something else. And we just happen to be in the
way and it doesn't consider us because it's so much smarter. There's no reason for it to consider
us.
I want a practical example here.
Well, I mean, just by analogy: in the same way that when humans build cities and dig up the foundations of the earth, we don't care about the ants and the earthworms and the beetles that are crushed beneath all the equipment that is digging up the ground.
Okay.
Right? And you don't, you wouldn't, like they're creatures, they're alive, but you just don't care
because you're busy doing something else.
So we'll just be like rats living in holes, where these giant robots are going around doing their stuff, and we just eke out an existence as long as we can, and they don't kill us unless we get in the way?
Yeah, eke out an existence if you're lucky. But I think it's very likely that it will be trying to accomplish some other goal, and it will need resources to accomplish those goals.
Like the oxygen in the air and stuff.
Yeah, exactly, right. It's like: you know what, I need a bunch of oxygen atoms, and I don't care where those oxygen atoms come from, because I'm busy trying to launch rocket ships to colonize the universe. So I just want all the oxygen atoms on the Earth, and I don't care where they come from, and I don't care if they're in people or the water. So that, to me, seems the probable outcome: that we die incidentally, not intentionally.
You say that like that's just... that's dodging the bullet, having all the air taken out of the atmosphere.
I do think that's dodging the bullet, right?
Because that to me is like that would be blessed relief compared to the worst possible case.
And the worst possible case is something that has malice, right?
Malice and incredible ability. And I don't know
if you've ever read it, but I highly recommend it. It's a short story. It's very old now,
but it really works. And it is I Have No Mouth, and I Must Scream. Have you ever read this, Brady?
No.
It's an old science fiction story, but the core of it is, this isn't a spoiler because it's the opening scene.
Humanity designed some machine for purposes of war.
And, you know, this is like this happened in the long, long ago and no one even knows the details anymore.
But at some point, the machine that was designed for war won all of the wars, but decided that it just absolutely hates humans, and it decides that its purpose for the rest of the universe is to torment humans. And so it just has people being tormented forever, and since it is an artificial intelligence, it's also able to figure out how to make people live extraordinarily long lives.
And so this is the kind of thing that I mean, which is like, it could go really bad.
You imagine a godlike intelligence that doesn't like you, right? It could make your life really, really miserable.
And maybe if we accidentally in a lab create an artificial intelligence,
and even if we don't mean to, but like someone runs the program overnight, right? And it like
wakes up in the middle of the night and it has to experience a subjective 20,000 years of isolation
and torment before someone flips on the lights in the morning and finds like, oh, look, we made
artificial intelligence last night. And like it wakes up crazy and angry and hateful. Like, that could be
very bad news. I think that's extraordinarily unlikely, but that is the worst possible case
scenario. Yeah, that wouldn't be good. That wouldn't be good. Yeah. And like, I don't even
think it needs to happen on purpose. Like, imagine it happening by accident, where the thing just experiences suffering over an unimaginably long period of time that on a human timescale seems like a blink of an eye, because we just can't perceive it.
Imagine being the person that made that, even accidentally.
Yeah, yeah, you'd feel awful. It's like, oh, I just wiped out humanity with that bit of coding while I was playing World of Warcraft.
Yeah. Again, wiped out humanity if you're lucky. Minor spoiler alert here, and I might just put this at the very end, but spoiler alert for Black Mirror for anybody who hasn't watched it. But remember the Christmas episode, Brady?
Yes.
I went into Starbucks the other day and they were playing that Christmas song, I Wish It Could Be Christmas Everyday.
Yeah.
It was the first time I heard it since watching that episode a year ago. It sent literal chills down my spine. Like, it's Starbucks, and it came on, and it was like,
I had chills thinking about that episode, because that is an episode where this kind of thing happens, where the character exists in software and is able to experience thousands and thousands of years of torment in seconds of real time.
That was a pretty amazing scene when you go back and have a think about it for a minute.
And it's like, yeah.
Yeah.
Yeah.
That was, it was awful.
It was awful.
And maybe we do that accidentally with artificial intelligence.
Just one last thing.
This book that this whole conversation started with, what's it called again?
It's called Superintelligence.
Who's it by?
Nick Bostrom.
Is it good? Is it well written? Like, should I read it? It's not... it's not mind-numbing like bloody Getting Things Done, is it?
Okay. Okay.
I'm actually kind of glad you asked that. Let me... I have a recommendation here. So let's see, let me pull it up on my computer here. So this is one of those books.
The best way to describe it is when I first started reading it, the feeling that I kept
having was, am I reading a book by a genius or just a raving lunatic?
Because it's, I don't know, sometimes I read these books that I find very interesting,
where it's like, I just, I can't quite decide if this person is really smart or just crazy.
I think that's partly because the first, the first like 40% of the book is trying to give
you all of the reasons that you should believe that it is possible for humans to
one day develop artificial intelligence. And if you're going to read the book and you are already
sold on that premise, I think that you should start at chapter eight, which is named "Is the default outcome doom?" Chapter eight is where it really gets going
through all of these points of like, what can we do? Here's why it won't work. What can we do?
Here's why it won't work. So I think you can start at chapter eight and read there and see
if it's interesting to you. It's no Getting Things Done, but sometimes it can feel a little bit like: am I really reading a book trying to discuss all of these rather futuristic details about artificial intelligence, and what we can do, and what might happen and what might not happen, but taking it deadly, deadly seriously? It's an interesting read, but maybe don't start from the very beginning, would be my recommendation.
Whoa. This one, this is going to preferences, Grey.
I'm just looking at some of the votes now.
Stop spoiling yourself.
Interesting.
Stop spoiling yourself.
The first three I pulled off the top of the pack all voted for three different ones.
Stop spoiling yourself.
You just get your hands off the votes.