The Joe Rogan Experience - #1840 - Marc Andreessen
Episode Date: July 6, 2022. Marc Andreessen is an entrepreneur, investor, and software engineer. He is co-creator of the world's first widely used internet browser, Mosaic, co-founder of the social media network platform Ning, and co-founder and general partner of the venture capital firm Andreessen Horowitz.
Transcript
The Joe Rogan Experience.
Train by day, Joe Rogan podcast by night, all day.
What's up, Mark?
How are you?
Good, I'm good.
Have you done a podcast before?
I've done podcasts before.
Yeah?
Nothing with this reach, though.
Oh.
So that's exciting.
You can't think about that.
Nope, not at all.
Can't think about the reach part.
Yep.
First of all, very nice to meet you.
Yeah, you too.
You're a tech OG.
When it comes to
the tech people,
you're
at the forefront of it all.
You were one of the co-founders, you were one of the co-creators
of Mosaic, right? Yeah, that's right.
What was it like before there were web browsers?
How do you know?
You know a time before web browsers?
I do.
So I'm an OG now, but when I first started, I thought I missed the whole thing.
Really?
I thought I missed the whole, because I missed the personal computer.
I missed the whole thing.
You missed the original use of the personal computer.
Yeah, the personal computer.
And before that, all the other computers that came before that.
So the computer revolution kind of happened over the 50 years right before I showed up.
What was the first personal computer?
The first personal computer, the first true personal computer,
they were like kits in the early 70s that you could build.
The first interactive computer that you could use the way you use a PC was all the way back in the 50s.
It was a system called PLATO at the University of Illinois where I went.
And it was really, there was a great book
called The Friendly Orange Glow.
And it was a black screen with only orange graphics.
And they built it by hand at the time
and they had the whole thing working.
And so these ideas are all old ideas.
They had email.
They had all these ideas kind of way back when.
They had email?
Yeah, they had email and messaging
and multiplayer video games and all that stuff back in the 50s.
Really?
Yeah, yeah.
It just was only in a couple places.
It was really hard to get it working.
It was expensive.
When you say multiplayer video games, it wasn't like a graphic video game.
They had very simple graphics, very simple space war games or whatever.
I mean, really simple.
Remember Asteroids?
Yeah.
Yeah, like that quality of stuff, or even simpler than that, really.
So what year was Asteroids?
Asteroids would have been in the late 70s,
77, 78, 79, somewhere in there.
Pong was 74, I think, which was the big,
the first arcade video game was Pong.
Yeah, we had one somewhere around that time,
and I remember thinking it was the most crazy thing
I've ever seen in my life, that you could play a thing that's taking place on your television.
You could move the dial and the thing on the television would move.
I mean, it was magic.
It's so crude and dumb for kids today, they would never believe the impact that it had
on people back then.
So the one you had on your TV set, that was later on. Before that, they had the arcade game.
The console came after the arcade, and the story there is crazy.
It's this guy Nolan Bushnell,
who's the founder of this company Atari,
that basically created the video game industry,
and he developed this game Pong.
So, and he literally built one.
Like they had no idea if anybody
wanted to play a video game at that point.
So they built one, they built this console,
they put it in a bar in Mountain View in Silicon Valley,
and the guy, the owner of the bar called up, you know, three days later, and he's like, you know,
your thing is broke, like, come get it. And, you know, Nolan's like all depressed. And he goes in
and realizes the thing, it's so jammed with quarters. It was so popular, right, that people
just like kept jamming quarters in it. Right. And it literally like it couldn't take any more
quarters. And literally, he was like, aha, you know, proof people actually want to play video games. Like that's how, like even that was
not obvious at the time. Yeah. I remember the first video game arcades and like a complex game
was what was that? There was like a Dungeons and Dragons game. What was it called? Dragon Quest or
something like that? There was the first Laserdisc game which had
video clips. Yes. It was probably the one you're
thinking about. What was it called? Something like that.
Do you remember that game, Jamie?
Do you know what I'm talking about? He's way too young.
And there was a move that you had to do really
quick, and if you did the move correctly
you would go on to the next level. If you didn't,
a video graphic would play
where you got killed.
I think it was the same one. It was a big deal because it was the first game that had video clips.
Yes.
And that was a really hard thing to do.
And it had a giant platter, a laser disc platter inside playing these clips.
And again, it existed.
It was just really hard to make it work.
Did you find it?
That, I think that's it.
Yeah, that's probably it.
Let me see what it looks like.
Yes, that's exactly what it was.
Dragon's Lair.
Dragon's Lair.
So if you did it correctly, you would get this video where you went through all the right moves and you got to the place.
But you would have moments where you had to make a quick decision.
And if you made the correct decision, like here, like jumping to the flaming ropes, if you made the correct decision, you would get across.
But if you screwed up, they would play a video of you dying.
Right.
Exactly.
And that was super sophisticated back then.
Oh, yeah, yeah, yeah.
This was a marvel at the time.
And I remember the early days of the arcade where video arcades were around.
Yeah, yeah, yeah.
So, look, all this stuff is super obvious in retrospect.
Like, it's just it's obvious in retrospect everybody wants to play games.
They want them at home, all this stuff.
Like, at the time, it was not obvious. And that's
kind of how all this new technology goes. It's how the internet was in the very, very beginning.
It's like, well, I don't think anybody's going to want to do this was the overwhelming view.
And then, and by the way, you know, not all new technologies work, but the ones that do people
look back and they're like, well, that one must've been obvious. And it's like, no.
Wasn't the people at IBM, who was it that mocked the idea of a personal home computer?
Yeah, there was a lot of that.
Well, there was a famous statement of the founder of IBM, this guy Thomas Watson Sr.
He famously said one of these things, maybe he said it, maybe he didn't, but he said,
there's no need for more than five computers in the world.
The theory was basically the government needs two.
They need one for defense and one for civilian use.
Then there's three big insurance companies, and that's the total market.
That's all anything needs.
Then there's a famous letter in the HP archives where some engineer told basically the founders of HP they should go in the computer business.
There's an answer back from the CEO at the time saying nobody's going to want these things.
Yeah, it's really tenuous.
The New York Times famously wrote a review of the first laptop computer that came out in like 1982, 1983.
And the review, if you read it, is just scathing.
It's just like, this is so stupid.
I can't believe these nerds are up to this nonsense again.
This is ridiculous.
And then you realize like what the laptop computer was in 1982.
It was 40 pounds.
It was like a suitcase, right?
And you open it up and the screen's like four inches big, right? And so like the whole thing's slow and it doesn't do much. And so if you just like take a
snapshot of that moment in time, you're like, okay, this is stupid. But then, you know, you
project forward. And by the way, the people who bought that laptop got a lot of use out of it
because it was the first computer you could carry. Like that turned out to be a big deal.
Well, it's probably very valuable now, right? Just as a, you know, novelty piece.
Yeah, yeah. But this idea that we got from that to literally everybody carrying a supercomputer in their pocket in the form of a phone in 30 years, that's just absurd.
So quick.
Yeah, yeah.
Actually, really fast.
When you were first getting on computers, so how old were you when you first started coding and screwing around on computers?
Well, I started coding before I had a computer.
Yeah?
So I taught myself.
So I'm like the perfect, I'm like right in the middle, I'm like the perfect Gen X age.
I'm like, I turned 51.
I was born in 1971.
The home computer started coming out in like 1980, 81, where like normal people could buy them.
They got down to a few hundred dollars.
You hook them up to your TV set.
And so I knew I wanted one, but I couldn't afford it.
What did they run on?
I hadn't mowed enough lawns yet to have the money to buy one.
What did they run on, like software?
Yes.
Oh, so they had a very simple operating system,
and then Microsoft actually made what's called BASIC at the time,
which was the programming language that was built in.
And so when you say this is a home computer, who was buying them and what function did they serve?
Yeah, well, that was a big debate.
The big debate at the time actually was do these things actually serve any function in the home?
And the ads would all say, basically, the ads were trying to get kids to basically pitch their parents on buying these things.
And be like, well, tell your mom she can file all of her recipes on the computer.
That's the kind of thing they were reaching for.
And then your mom says, well, actually, I have a little card,
a 3x5 card holder.
I don't actually need a computer to file my recipes.
So there was that.
A lot of it was games.
A lot of it was video games.
And then kids like me like to learn how to code.
First, it's like play the game.
And it's like, well, how do you actually create one of these things?
And then businesses started to get a lot of use out of them. When the spreadsheet arrived, that was a really big deal, because that was a capability that business people didn't have until they had the PC.
How much data storage did those things have back then?
So my first computer had four kilobytes of storage, 4,000 bytes of storage.
And so you would write – you could code.
You could write code.
But you had to write code.
You had to know exactly what was happening in basically every single slot of memory because there wasn't a lot to go around.
And did it use a floppy disk?
So later on, they had the floppy disks.
That's new.
In the beginning, they used cassette players.
Whoa.
Okay.
So this is the beginning. So if you're a kid with a computer in 1980, you have a cassette player.
And so they would literally record programs as like audio garbled electronic sounds on cassette tape and then it would read it back in.
But you had this like tension.
You had this tension because cassette tapes weren't cheap.
They were fairly expensive.
And the high-quality cassette tapes were quite expensive.
But you needed the high-quality cassette tape for the thing to actually work.
But you were always tempted to buy the cheap cassette tape because it was longer.
And so you would buy the cheap cassette tape and then your programs, your stored programs,
then they wouldn't load and you'd be like, all right, I got to go back and buy the expensive
cassette tape.
How did they work through sound?
How did that work?
Yeah.
So they encode it into basically beeps.
It wasn't music, you definitely couldn't dance to it, but it was beeps of different frequencies.
And that's how it stored data?
Yeah, and that's how it stored data.
That's what it looked like?
Wow.
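The scheme Marc describes is essentially frequency-shift keying: each bit of a program becomes a short burst of tone at one of two frequencies. Here is a minimal sketch in Python. The frequencies and bit length are illustrative, not any specific machine's actual format (the "Kansas City standard" of that era used 1200/2400 Hz).

```python
import math

# Illustrative parameters, not a real machine's format.
FREQ_0, FREQ_1 = 1200, 2400   # tone for a 0 bit, tone for a 1 bit (Hz)
RATE = 48000                  # samples per second
BIT_SAMPLES = 160             # samples per bit burst

def encode(bits):
    """Turn a bit string into audio samples: one tone burst per bit."""
    samples = []
    for b in bits:
        f = FREQ_1 if b == "1" else FREQ_0
        samples += [math.sin(2 * math.pi * f * i / RATE) for i in range(BIT_SAMPLES)]
    return samples

def decode(samples):
    """Recover bits by counting upward zero crossings in each bit-length chunk:
    the higher tone crosses zero more often, so more crossings means a 1."""
    bits = ""
    for start in range(0, len(samples), BIT_SAMPLES):
        chunk = samples[start:start + BIT_SAMPLES]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a < 0 <= b)
        bits += "1" if crossings > BIT_SAMPLES * FREQ_0 / RATE * 1.5 else "0"
    return bits

print(decode(encode("10110010")))  # round-trips to the original bits
```

This is also why the cheap tape failed: wow, flutter, and dropouts smear the tone frequencies until the decoder misreads bits and the program won't load.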
So that's an old, this is an old, that's a computer from a company called Wang, which is a big deal.
So that company was a huge deal.
That was one of the first big American tech companies of this generation, Wang Laboratories.
Yeah, so this is not the exact one I have, but it's a lot like it.
And so, yeah, there's the cassette, RadioShack TRS-80.
This is, I think, an original Model 1.
Was there a feeling back then when you were working with these things that this was going to be something much bigger?
Yeah.
So the thing that they got right on the personal computer was: you loaded up the personal computer, and if you remember, it would show this thing, and then it would say READY, and then there would be the little cursor.
Yeah.
Ready, and then the little cursor, right?
And the little cursor would sit in there blinking.
And basically what that represented,
if you were of a mind to be into this kind of thing,
that represented unlimited possibility, right?
Because it basically, it was inviting, right?
It was basically like, okay, ready for you
to do whatever you want to do,
ready for you to create whatever you want to create.
And you could start typing,
you could start typing in code. And then there were all these, you know, at the time, magazines
and books that you could buy that would tell you how to like code video games and do all these
things. But you could also write your own programs. And so it was this real sense of sort of inviting
you into this amazing new world. And then that's what caused a lot of us kind of of that generation
to kind of get pulled into it early. Wow. And so as you're watching this evolve around you and you're a part of it as well,
like, when did you guys first make Mosaic? What year was that?
Yeah, so that started in 92.
Not even Windows 95? It hit critical mass in Windows?
Yeah. So yeah, that was pre-Windows 95. Windows 3.1 was new back then, and Windows 3.1 was the first real version of Windows that a lot of people used.
And it was what brought the graphical user interface to personal computers.
So the Mac had shipped in 84, but they just never sold that many Macs.
Most people had PCs.
Most of the PCs just had text-based interfaces.
And then Windows 3.1 was the big breakthrough.
So the Mac got its user interface, the graphic user interface from Xerox, right?
Well, so there's a lot, this goes to the backstory. So Xerox had a system, yeah,
Xerox had a system called the Alto, which was basically like a proto, sort of a proto Mac.
Apple then basically built a computer that failed called the Lisa, which was named after Steve
Jobs' daughter. And then the Mac was the second computer they built with the GUI.
But the story is not complete. The way the story gets told is that Apple somehow like stole these ideas from Xerox. That's not quite what
happened because Xerox, those ideas had been implemented earlier by a guy named Doug Engelbart
at Stanford who had this thing at the time called the mother of all demos, which you can find on
YouTube where he basically in 1968, he shows all this stuff working. And then again, if you trace
back to the fifties, you get back to the Plato system that I talked about, which had a lot of
these ideas. And so it was like a 30 year process of a lot of people
working on these ideas until, you know, basically Steve was able to package it up in the Macintosh.
I need to see that video, the Mother of All Demos.
Yeah, so this is legendary. This is the guy, Doug Engelbart.
Well, this is going to be more important than it looks, so I'd like to set up a file. So I tell the machine, all right, output to a file.
And it says, oh, I need a name.
I'll give it a name.
I'll say a sample file.
So you see on the right, that was the first mouse.
So Doug Engelbart invented the mouse.
And that's the first mouse there on the right.
So he's showing the first mouse in use in the first computer system ever made.
It was a three-button mouse.
It was a three-button mouse.
So could it copy and paste and all that stuff with those three buttons?
He had word processing.
He had all these.
He had all kinds of interactive.
He was one of the first four nodes on the internet back around that time.
So he was even doing email back then, I think, or shortly thereafter.
What?
Here he's writing code.
He was doing email in 68?
Yeah, yeah, yeah.
Very early on.
Wow.
So like sort of an intranet email?
So you would have to be attached to the network to receive emails?
Like how did it work?
It could either be in, yeah, there were private email systems early on. But also he was on the original internet.
The original internet in the U.S. started with only four computers on the internet,
and one of them was his.
So there were four nodes on the original network map.
And so he was kind of plugged into this stuff.
And where was that?
It was something called Stanford Research Institute.
So did you have to be local to be a part of it?
Did it have to be connected by wire?
Yeah, yeah, yeah.
And in fact, it's not like it went through a telephone wire or anything like another, you know, like dial-up or anything like that.
Yeah. Well, so early on, they were kind of the same thing.
So actually, early internet was actually integrated with dial-up.
And so early internet email was actually built so that it didn't assume you had a permanent connection,
it assumed you would dial into the internet once in a while, get all the data downloaded,
and then you'd disconnect because it was too expensive to leave the lines open.
One original server? One large server?
Well, the internet idea was all the computers are peers, right? So there's no single node,
right? And so there's just four computers that talk to each other, which was the basis of what the Internet is today.
Four computers talk to each other. Now it's four billion computers talk to each other. But it was that same idea.
And did they store things individually? Like, did you have access to each individual computer's data or did they have a collective database?
You know, they had a combination. I mean, this is very original. These were very simple systems as compared to what we have today.
So these were very basic implementations of these ideas.
But they had very simple what's called store and forward email.
They had very simple what's called file retrieval.
So if there's a file on your computer and you wanted to let me download it, I could download it.
They had what was called Telnet where you could log into somebody else's computer and use it.
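Store and forward just means each node holds outgoing messages locally until the destination actually connects, which matched the dial-up world Marc describes. A toy sketch (the node names and API are invented for illustration, not the actual ARPANET protocol):

```python
from collections import defaultdict, deque

class Node:
    """Toy store-and-forward mail node: messages queue up per destination
    and are only handed over when that destination connects."""
    def __init__(self, name):
        self.name = name
        self.outbox = defaultdict(deque)  # destination name -> queued messages
        self.inbox = []

    def send(self, dest, text):
        self.outbox[dest].append(text)    # stored locally, not delivered yet

    def connect(self, other):
        """Simulate the periodic dial-up: flush everything queued for `other`."""
        while self.outbox[other.name]:
            other.inbox.append((self.name, self.outbox[other.name].popleft()))

sri = Node("SRI")
ucla = Node("UCLA")
sri.send("UCLA", "hello from Engelbart's lab")
sri.send("UCLA", "second message")
ucla_msgs_before = len(ucla.inbox)  # still 0: nothing delivered until the dial-up
sri.connect(ucla)                   # the connection happens, queue drains
print(ucla.inbox)
```

The key property is that sending and delivering are decoupled: the sender never needs the receiver to be online at the same moment, which is why the model survived into modern email.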
So you are messing around with this stuff and you guys create, was it the very first
web browser or the first used by many people web browser?
Yeah, it was the first.
It was productized; it was the first browser used by a large number of people.
It was the first browser that was really usable by a large number of people.
It was also one of the first browsers that had integrated graphics.
The actual first browser was a text browser, the very first one, which was a prototype that Tim Berners-Lee created.
But it was very clear at that point.
We have Windows.
We have the Mac.
We have the GUI.
We have graphics.
And then we have the internet.
And we need to basically pull all these things together, which is what Mosaic did.
And GUI is graphic user interface.
Graphic user interface, yeah.
What is a GUI?
And again, it sounds obvious now, but it's not.
We've lived with the GUI now for 30 years.
Most people don't remember computing before that.
It sounds like obviously everything would be graphical.
But it was not obvious at that point.
Most computers at that point still were not graphical.
And so it was a big deal to basically say, look, this is just going to be graphical.
Yeah, most computers were using DOS.
DOS, yeah, that's right.
And so when you created this, when you and whoever you did it with created Mosaic,
what was that like to, what was the difference in, like, functionality?
Like, what was the difference in what you could do with it?
Yeah.
Well, so it worked really well.
So, like, we polished it.
Like, we got it to the point where, like, normal people could use it.
Because it was a black – you could do this stuff a little bit before, but it was like a real black art to put it together.
So we got it to the point where it was, like, fully usable.
We made it – it's called backward compatible.
So you could use it to get to any information on the internet, whether it was web or non-web.
And then you could actually have graphics actually in the information, right?
So web pages before Mosaic were all text.
You know, we added graphics,
and so you had the ability to have images,
and you had the ability to ultimately have visual design
and all the things that we have today.
And then later with Netscape, which followed,
then we added encryption,
which gave you the ability to do business online, right?
To be able to do e-commerce, right?
And then later we added video, we added audio,
and it just kind of kept rolling and kind of became what it is today.
When you look at it today, do you remember your thoughts back then as to where this was all going?
So it was impossible to predict what, you know, it's played out at a much higher level of scale
with many more use cases than we would have thought. But it seemed pretty obvious to us that people would want this kind of thing because at the very basic
level, it was ability for anybody to publish anything, right? Text or video or audio, right?
And then it was the ability for anybody to consume anything, right? And the ability for
all computers in the world to connect with each other and that you wouldn't need centralized
gatekeepers. You wouldn't have TV networks that could control what was on. Anybody could produce whatever they want to do.
And so that basic idea seemed like a pretty good idea.
It hit an incredible wall of skepticism.
Like all of the experts, right?
They're all on the record.
They're all, if you read the newspapers, magazines at the time, 100%, it would be like, this is stupid.
This is never going to happen.
Nobody wants this.
This is never going to work.
And if it does work, nobody's going to want it.
All the big companies were completely dismissive.
It was just like, there's just no way.
This is just too crazy.
It was the same pattern.
These crazy kids are at it again.
Okay, sure, they've been right many other times.
But this one they fucked up on.
Electricity worked.
Telephones worked.
The radio worked.
Light bulb.
Yeah, light bulb worked.
But like, you know, this computer thing is stupid.
This internet thing is stupid.
You know, now we're hearing it today.
You know, crypto, blockchain, you know, Web3, this stuff is stupid.
You know, every new thing.
It's just this constant wall of doubt.
And, you know, and frankly, a lot of it's fear.
And a lot of it's, you know, just kind of people getting freaked out.
But your unique perspective of having been there early on with the original computers having worked to code
the original web browser that was widely used like and seeing where it's at now
does this give you a better perspective as to what the future could potentially
lead to because you've seen these monumental changes,
like firsthand and been a part of the actual mechanisms
that forced us into the position we're in today,
this wild place.
In comparison, I mean, God, go back to 1980 to today,
and there's no other time in history
where this kind of change, I mean, other than catastrophic natural disasters or nuclear war, there's nothing that has changed society more than the technology that you were a part of.
So when you see this today and you, do you have this vision of where this is going?
Sure.
Well, yeah, it's complicated, but so many, many parts to it.
But yeah, look, one thing is just like people have tremendous creativity, right? People are
really smart and people have a lot of ideas on things that they can do. Some people. I can
introduce you to folks that would change your scale.
Some people, that is. Yes, I won't argue with that. But look, there are a lot of smart people in the world.
There are a lot more smart people in the world than have had access to anything that we would consider to be modern universities or anything that we consider to be kind of the way that we kind of have smart people build careers or whatever.
There's just a lot of smart people in the world.
They have a lot of ideas.
If they have the capability to contribute, if they can code, if they can write, if they can create, they will do it.
They will figure out.
I mean, the most amazing thing about the Internet to me to this day is I'll find these entire subcultures.
You know, I'll find some subreddit or some YouTube community or some rabbit hole,
and there will be, you know, 10 million people working on some crazy collective, you know, thing.
And I just didn't even know it existed.
And, you know, people are just like tremendously passionate about what they care about,
and they fully express themselves. And, yeah it's fantastic. And I feel we're still
at the beginning of that. Most people in the world are still not creating things.
Most people are just consuming. And so we're still at the beginning of that. So I know that's the
case. Look, it's just going to keep spreading. So there's a concept in computer science called
Metcalfe's law that basically expresses the power of a network mathematically.
And the formula is x squared.
And x squared is the formula that gets you the classic exponential curve, the curve that arcs kind of up as it goes.
And that's basically an expression of the value of a network is all of the different possible connections between all the nodes, which is x squared.
And so quite literally, like every additional person you add to the network doubles the
potential value of the network to everybody who's on the network.
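Marc's point can be made concrete: with n nodes there are n·(n−1)/2 possible pairwise connections, which grows roughly as n squared (the "x squared" he mentions). A quick sketch:

```python
def possible_connections(n):
    """Number of distinct pairs among n nodes: n*(n-1)/2, which grows ~ n^2."""
    return n * (n - 1) // 2

# Each step multiplies the node count by 10, but the connection
# count grows by roughly 100x: that is Metcalfe's law in action.
for n in [2, 10, 100, 1000]:
    print(n, possible_connections(n))
```

So the value attributed to the network grows much faster than the node count itself, which is why each new user, app, or sensor makes the whole network more valuable to everyone already on it.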
And so every time you plug in a new user, every time you plug in a new app, every time
you plug in a new, you know, anything, sensor into the thing, a robot into the thing, like
whatever it is, the whole network gets more powerful for everybody who's on it.
And the resources at people's fingertips, you know, get bigger and bigger. And so, you know,
this thing is giving people like really profound superpowers in like a very real way.
Holy shit.
Right. And so it's just going to get, because the internet's going to get wired in everything,
right? Every car, right? Everything, everything's going to have a chip. Everything's going to be
connected to the network. Like the whole world is going to get like smart and connected in a,
you know, in a very different way.
And then look, you know, we still have these legacy, you know, we're still in the world,
you know, we're at like that weird halfway point, right?
Where we still have like broadcast TV, right?
And we still have like print newspapers, right?
And we still have these like older things.
We still have radio.
Like these things still exist.
They haven't gone away.
And there's still, you know, pretty there's still pretty significant attention and dollars and
prestige associated with these things. But I think it's obvious what's going to happen,
which is all of that's going to transfer to the internet, 100% of it. And so we're still only
halfway or partway into the transition. It's going to get a lot more extreme than it is now.
What do you anticipate to be one of the big factors? If you're thinking about real breakthrough technologies and things that are going to change the game,
is it some sort of human internet interface, like something that is in your body, like a Neuralink-type deal?
Is it something else?
Is it augmented reality? Is it virtual reality?
What do you think is going to be the next big shift in terms of the symbiotic relationship that we have with technology?
Yeah, so this is one of the very big topics in our industry that people argue about.
We sit and talk about all day long trying to figure out which startups to fund and projects to work on.
So I'll give you what I kind of think is the case. So the two that are rolling right now that I think are going to be really big deals are AI on the one hand, and then cryptocurrency, blockchain, Web3,
sort of combined phenomenon on the other hand. And I think both of those have now hit critical
mass, and both of those are going to move really fast. So we should talk about those. And then
right after that, I think, yeah, some combination of what's called virtual reality and augmented
reality, VR, AR, some combination of those is going to be a big deal. Then there's what's called Internet of Things, right, which is
like connecting all of the objects in the world online, and that's now happening. And then, yeah,
and then you've got the really futuristic stuff. You've got the, you know, Neuralink and the brain
stuff and, you know, all kinds of ways to kind of, you know, have the human body be more connected
into these environments. That stuff's further out, but there are very serious people working on it.
So let's start with AI because that's the scariest one to me.
This Google engineer that has come out and said that he believes that the Google AI is
sentient because it says that it is sad.
It says it's lonely.
It starts communicating.
And Google is there.
It seems like they're in a dilemma in that situation.
First of all, if it is sentient, does it get rights?
Right.
Like, does it get days off?
Yep.
I had this conversation with my friend Duncan Trussell last night, and he was saying, imagine if you, you know, if you have to give it rights.
Does it get treated like a human being?
What is it?
Well, I'll make it even a step harder.
What if you copy it?
Right.
Now you've got two of them.
Well, that was what I said to Ray Kurzweil.
Ray Kurzweil was talking at one point in time about downloading consciousness into computers,
and that he believes that inevitably will happen. And my thought was like, well, what's going to stop someone from downloading themselves a thousand times?
What if some Donald Trump-type character just wants a million Trumps out there, just
out there doing speeches. Like what would stop that?
Yeah, exactly. So let's start with what this actually is today, which is, I think,
you know, which is very interesting, not well understood, but very interesting. So what Google
and this other company, OpenAI, that are doing these kind of text bots that have been in the news,
what they do, it's a program.
It's an AI program.
Basically, it uses a form of math called linear algebra.
It's a very well-known form of math, but it uses a very complex version of it.
And then basically what they do is they've got complex math running on big computers,
and then what they do is they have what they call training data.
What they do is they basically slurp in a huge data set from somewhere in the world, and then they basically train the math against the data to try to get it up to speed on how to interact and do things.
The training data that they're using for these systems is all text on the internet.
All the text on the internet?
All the text on the internet, right? And all the text on the internet increasingly is a record of all human communication, right?
So how does it capture all this stuff?
Well, so Google's core business is to do that,
is to be the crawler, you know,
famously their mission to organize the world's information.
They actually pull in all the text on the internet already
to make their search engine work.
And then that's-
And then the AI just scans that.
And the AI basically uses that as a training set, right?
And basically it just chews through and processes it.
It's a very complex process, but it chews through and processes it.
And then the AI kind of gets a converged kind of view of like, okay, this is human language.
This is what these people are talking about.
And then it has all this statistical knowledge: when a human being says X, somebody else says Y or Z, or this would be a good thing to say or a bad thing to say. For example, you can detect emotional loading from text now. So with a computer, you can kind of say this text reflects somebody who's happy, because they're saying, oh, you know, I'm having a great day, versus this text is like, I'm super mad, you know, therefore they're upset.
And so the computer could get trained on: okay, if I say this thing, it's likely to make humans happy.
If I say this thing, it's likely to make humans sad.
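The "emotional loading" idea can be shown in a crude form: count happy-leaning words versus angry-leaning words. The word lists here are made up for the example; real systems learn these associations from data rather than using hand-written lexicons.

```python
# Hand-made toy lexicons (assumptions for this sketch, not a real model).
HAPPY = {"great", "love", "happy", "amazing", "delicious"}
ANGRY = {"mad", "hate", "upset", "angry", "terrible"}

def emotional_loading(text):
    """Return a score > 0 for happy-leaning text, < 0 for angry-leaning text."""
    words = [w.strip(".,!?'").lower() for w in text.split()]
    return sum(w in HAPPY for w in words) - sum(w in ANGRY for w in words)

print(emotional_loading("I'm having a great day"))   # positive score
print(emotional_loading("I'm super mad, you know"))  # negative score
```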
But here's the thing.
It's all human-generated text.
It's all the conversations that we've all had.
And so basically you load that into the computer, and then the computer is able to kind of simulate somebody else having that conversation.
But what happens is basically the computer is playing back what people say, right?
Right.
It's not... nobody, no engineer... the guy who went through this and did the whistleblower thing, he even said he didn't look at the code. He's not in there, like, working on the code. Everybody who works on the code will tell you it's not alive. It's not conscious. It's not having original ideas.
What it's doing is playing back to you things that it thinks you want to hear, based on all the things that everybody has already said to each other that it can get online.
And in fact, there are all these ways you can kind of trick it. For example, he has this example where basically he said, you know, I want you to prove that you're alive, and then the computer did all this stuff to argue for its life. You can do the reverse. You can say, I want you to prove that you're not alive, and the computer will happily prove that it's not alive. It'll give you all these arguments as to why it's not actually alive.
And of course, that's because the computer has no view on whether it's alive or not.
But it seems like this is all very weird.
And for sure, we're in this fog. If it's not life, it's in this weird fog of, like, what makes a person a person?
Like, what makes an intelligent, thinking human being that knows how to communicate able to respond and answer questions? Well, it does it through cultural context. It does it through understanding language, and having been around enough people that have communicated in a certain way that it emulates that. Right?
Yeah, so this is the real question. This is where I was headed. The real question is, what does it mean for a person to think?
Right.
Like, that's the real question.
And so let's talk about something called the Turing test, which is a little bit more famous now because of the movie they made about Alan Turing.
So the Turing test, in its simplified form, is basically: you're sitting at a computer terminal.
You're typing in questions, and then the answers are showing up on the screen.
There's a 50% chance you're talking to a person sitting in another room who's typing the responses back.
There's a 50% chance you're talking to a machine.
You don't know, right?
You're the subject.
And you can ask the entity
on the other end of the connection
any number of questions, right?
He or she or it will give you any number of answers.
At the end, you have to make the judgment
as to whether you're talking to a person
or talking to a machine.
The theory of the Turing test is when a computer can convince a person that it's a person,
then it will have achieved artificial intelligence, right?
Then it will be as smart as a person.
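The test setup Marc just walked through can be rendered as a toy simulation: a hidden respondent is a person or a machine on a coin flip, and the judge has to guess which. The responders and the judge here are stand-ins invented for the sketch, not real chatbots.

```python
import random

def human_reply(question):
    return "Honestly, I'd have to think about that."

def machine_reply(question):
    return "As a helpful entity, here is my answer."

def turing_trial(judge, questions, rng):
    is_machine = rng.random() < 0.5          # hidden 50/50 coin flip
    respond = machine_reply if is_machine else human_reply
    transcript = [(q, respond(q)) for q in questions]
    guess_machine = judge(transcript)        # True means "I think it's a machine"
    return guess_machine == is_machine       # was the judge right?

# A gullible judge who always believes it's a person is right only when
# the respondent actually was one, i.e. about half the time.
gullible = lambda transcript: False
rng = random.Random(42)
trials = [turing_trial(gullible, ["Are you alive?"], rng) for _ in range(1000)]
print(sum(trials) / len(trials))  # hovers around 0.5
```

This is the sense in which the test measures the judge as much as the machine: a judge who is easy to trick pulls the passing bar down.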
But that begs the question of like, okay, like how easy are we to trick?
Right?
And in fact, it turns out what's happened, and this is actually true,
is that there have been chatbots that have been fooling people in the Turing test now for several years.
The easiest way to do it is with a sex chatbot.
Because they're the most gullible when it comes to sex.
Specifically, men.
Of course.
I bet women are like way less gullible.
Women probably fall for it a lot less.
But men, like, you get a man on there with a sex chatbot, the man will convince himself he's talking to a real woman pretty easily, even when he's not.
Right.
And so you could think about this as a somewhat more advanced version of that, which is, look, if it's an algorithm that's been optimized to trick people, basically, to convince, it has no regret, it has no fear,
it has none of the hallmarks that we would associate
with being a living being, much less a conscious being.
And so this is the twist, and this is where I think
this guy at Google got kind of strung up a little bit,
or held up, is the computers are gonna be able
to trick people into thinking they're conscious
way before they actually become conscious.
And then there's just the other side of it, which is we have no idea.
We don't know how human consciousness works.
We have no idea how the brain works.
We have no idea how to do any of this stuff on people.
The most advanced form of medical science that understands consciousness is actually anesthesiology because they know how to turn it off.
They know how to power back on, which is also very important.
But they have no idea what's happening inside the black box.
And we have no idea.
Nobody has any idea.
So this is a parallel line of technological development that's not actually recreating
the human brain.
It's doing something different.
It's basically training computers on how to understand, process, and then reflect back the real world.
It's very valuable work because it's going to make computers a lot more useful. For example,
self-driving cars. This is the same kind of work that makes a self-driving car work.
So this is very valuable work. It will create these programs that will be able to trick people
very effectively. And so, for example, here's what I would be worried about, which is basically
what percentage of people that we follow on Twitter are even real people, right?
Elon is trying to get to the bottom of that right now.
He's trying to get to the bottom of that, you know, specifically on that issue from the business.
But just also think more generally, which is like, OK, if you have a computer that's really good at writing tweets, if you have a computer that's really good at writing angry political tweets or writing whatever absurdist, you know, humor or whatever it is.
Like, and by the way, maybe the
computer is going to be better at doing that than a lot of people are. You can imagine a future
internet in which most of the interesting content is actually getting created by machines. There's
this new system, DALL-E, that's getting a lot of visibility now, which is this thing where you can
type in any phrase and it'll create you computer generated art. Oh, I've seen that. Yeah. They've
done some with me. It's really weird.
Yeah, yeah. You know, Chase Lepard, he's got a few of them that he put up on his Instagram.
How does that work? Yeah, yeah. So it's a very similar thing. So basically what they do,
and Google has one of these and OpenAI has one of these, what they do is they pull in all of
the images on the internet, right? So if you go to Google Images or whatever, just do a search.
You know, on any topic, it'll give you thousands of images of whatever, and then basically they pull in all the images.
Yeah, that's me, exactly. How bizarre.
Yes, so that's AI-generated art.
That's a different program. That's just basically doing, yeah, sort of psychedelic art. The DALL-E ones, though, are sort of composites, where it's almost like an artist that will give you many different drafts.
That's another one of me.
Yeah.
So the first one, go back to that, please.
Yeah, you just had it up.
What does it say?
It said, Joe Rogan facing the DMT realm, insanely detailed, intricate, hyper-masculinist,
mist, dark, elegant, ornate, luxury, elite,
horror, creepy, ominous, haunting, moody, dramatic,
volumetric light, 8K render, 8K post, hyper-details.
So they say that, and then they enter all this stuff in,
and this is what comes out?
And this is what comes out.
Holy shit.
Yes.
Okay, so first of all, yes, it's incredible.
Like, that's amazing. It's an original work of art that is exactly to the spec.
Why'd they make my nose look like that?
It doesn't really look like that, right?
Not today.
It's a little off.
I would say, if that was an artist, I think you got the nose wrong, and you made my jaw...
Well, it's referencing these other artists.
If you see at the end, it's actually referencing.
It's probably pulling in portraits of other people from those artists and using it to do a composite thing.
But the fact that it can make art.
Now, but see what it's doing, right?
So it's very impressive.
I mean, the output's very impressive, and the fact that it can do that is impressive,
but it's being told exactly what to do.
Yes.
It didn't have the idea that it was going to do that.
It was told it's following instructions.
Right.
Right, so it's not sitting there like a real artist dreaming up new artistic concepts.
Right, but here's the question, because you were saying this before,
that it can trick
people into thinking it's real.
How do we know what is alive?
But this is the question.
That's the question.
What is a human consciousness interacting with another human consciousness?
I mean, it is data.
It is the understanding of the use of language, inflection, tone, the vernacular that's used in whatever region you're communicating with this person in to make it seem as authentic and normal as possible.
And you're doing this back and forth like a game of volleyball, right?
Yep.
This is what language is and a conversation is.
If a computer's doing that, well, it doesn't have a memory, but it does
have memory. Well, it doesn't have emotions. Is that what we are?
I don't know. Because if that's what we are, then that's
all we are. Because the only difference is emotion and maybe biological needs, like the
need for food, the need for sleep, the need for touch and love and all the weird stuff
that makes people people, the emotional stuff.
But if you extract that, the normal interactions that people have on a day-to-day basis, it's pretty similar.
Yeah, yeah.
So here would be the way to think about it.
It's like what's the difference between an animal and a person, right?
Like why do we grant people rights that we don't grant animals rights?
And, of course, that's a hot topic of debate because there are a lot of people who think animals should have more rights. But fundamentally, we do have this idea. We have
this idea of what makes a human distinct from a horse or a dog is self-awareness, a sense of self,
a sense of self, being conscious. Descartes: I think, therefore I am. And so at least we have
this philosophical concept of consciousness being something that involves self-awareness.
Like I told you, the computer is quite capable of telling you it has self-awareness.
It's also quite capable of telling you it doesn't.
It doesn't care.
It has no opinion on whether it has consciousness or not.
And that's why I'm confident that these things are not conscious.
They're not alive.
But are these things, are they learned?
It's math.
It's a program.
It's a program, yeah.
But at what point in time does the program figure out how to write better programs?
Right.
At what point in time does the program figure out how to manifest a physical object that can take all of its knowledge and all the information that's acquired through the use of the internet?
Which is basically the origin theme in Ex Machina.
Right?
The super scientist guy, he's using his web browser, his search engine, to scoop up all
people's thoughts and ideas, and he puts them into his robots.
Yeah, which is basically what these companies are doing, hopefully with a different result.
Well, let me bring up another topic.
I hate that word, hopefully. The word hopefully.
There's another topic.
A friend of mine, Peter Thiel, and I always argue. He always argues, basically,
look, civilization is declining.
You can tell because all the science fiction movies are negative, right?
Like it's all dystopia.
Nobody's got hope for the future.
Everybody's negative.
And my answer is just like the negative stories are just more interesting, right?
Nobody makes the movie with like the happy AI, right?
Like it's just not – there's no drama in it, right?
So anyway, that's why I say hopefully it won't be Hollywood's dystopian vision.
But here's another question on the nature of consciousness, right?
Which is another idea that Descartes, the I-think-therefore-I-am guy, had: this idea of mind-body dualism, which is also what Ray Kurzweil has with this idea that you'll be able to upload the mind.
Which is like, okay, there's the mind, which is basically, you know, some level of software equivalent, coding, something happening, and how we do all the stuff you just described.
Then there's the body, and there's some separation between mind and body, where maybe the body could be arbitrarily modified, or is disposable, or could be replaced by a computer.
It's just not necessary once you upload your brain. And, of course, this is a relevant question for AI because, of course, DALL-E has no body.
You know, GPT-3 has no body.
Well, do we really believe in mind-body?
Do we really believe mind and body are separate?
Like do we really believe that?
And what the science tells us is no, they're not separate.
In fact, they're very connected, right?
And a huge part of what it is to be human is the intersection point of brain and mind and then brain to rest of body.
For example, all the medical research now that's going into the influence of gut bacteria on behavior, right, and the role of viruses and how they change behavior.
And so basically, like, I think the most evolved version of this, the most sort of advanced version of this is, like, whatever it means to be human, it's some combination of mind and body.
It's some combination of logic and emotion.
It's some combination of mind and brain.
It leads to us being the crazy, creative, inventive, destructive, innovative, caring, hating people we are, right?
The sort of mess that is humanity, right?
Like that's amazing.
Like the four billion years of evolution that it took to get us to the point where we're at today is amazing.
And I'm just saying we don't have the slightest idea how to build that.
We don't even understand how we work.
We don't have the slightest idea how to build that yet.
And that's why I'm not worried that these things somehow come alive or they start to—
Yeah, see, I'm much more worried than you because my concern is not just how we work. Because I know that we don't have a great grasp of how the human brain works and how the consciousness works and how we interface with each other in that way.
But what we do know is all the things that we're capable of doing in terms of we have this vast database of human literature and accomplishments and mathematics and all the different things that we've learned.
All you need to have is something that can also do what we do.
And then it's indistinguishable from us.
So, like, our idea that our brain is so complex we can't even map out the human brain,
we don't even understand how it works.
But we don't have to understand how it works.
We just make something that works just as good, if not better.
And it doesn't have the same, like, cells.
Yeah, but it works just as good or better. We can do it without emotion.
Yeah, which might be the thing that fucks us up, but also might be the thing that makes us amazing.
But maybe only to us, right? Right. To the universe, these emotions and all these biological needs,
This is what causes war and murder and all the thievery and all the nutty things that people do.
Right.
But if we can just get that out, then you have this creativity machine.
Right.
Then you have this force of constant never ending innovation, which is what the human
race seems to be.
Yeah.
If you could look at it from outside, I always say this, that if you could look at the human
race from outside the human race, you'd say, well, what is this thing doing?
It's making better stuff. All it does is make better stuff. It never goes, ah, we're good.
It just constantly new phones, better TVs, faster cars, jets that go faster, rockets that land.
That's all it ever does is make better stuff, collectively. And even materialism,
which is the thing where people go, oh, it's so sad,
people are so materialistic.
What's the best fuel for innovation?
Materialism, because people get obsessed
with wanting the latest, greatest things,
and you literally sacrifice your entire day
for the funds to get the latest and greatest things.
You're giving up your life for better things.
That's what a lot of people are doing.
That's their number one motivation
for working shitty jobs is
so they can afford cool things.
Well, so then we get to a deeper
philosophical thing, which is, would you get the good of
humanity without the bad of humanity?
Would you get all of the creativity and
all of the energy? But it's only good
to us. To the universe.
Is it really good?
Look, people have different views on this. My view is the universe is uncaring.
Yeah, right. Exactly. I think so too.
The universe does not give a shit.
Right. So good or bad, it's only... I think it's all relative in our neighborhood.
Yeah, but I think, therefore... I mean, to me, that's a simple question. The answer is, it's all and only through our eyes.
Right. We're the only thing that matters, because the universe really doesn't care.
By the way, mother nature doesn't care.
Nobody cares, nobody cares but us.
And so we get the privilege, but we also get the burden
of being the ones who have to define the standards.
That we have to set the rules.
And of course, the project of human civilization
is trying to figure out how to do that.
Well look, the computers are gonna get good
at doing a lot of things.
That said, just let me be clear,
a computer or a machine or a robot that does something really well is a tool, right?
It's not a replacement.
It's not an augment.
It doesn't make humanity irrelevant.
It doesn't this.
It doesn't that.
In fact, generally what it does is it makes everything better, and we can talk about how that happens.
But it's a tool.
Like, it's a thing.
It's a hammer, right?
And like anything else, look, these are tools. Hammers.
Hammers have good uses and bad uses, right?
I'm not a utopian on technology.
Like, I think that, you know, many technologies have destructive consequences.
But, you know, fire, you know, had its good and its bad sides.
Yeah.
You know, people burned to death at the stake have a very different view of fire than people who have, you know, a delicious meal of roasted meat.
People killed by a Clovis point are probably not that excited about the technology.
Yeah, exactly.
People, you know, look, people driving in the car love it. The people who get run over by a car hate it, right?
And so, like, technology is this double-edged thing.
But the progress does come.
And, of course, it nets out to be, you know, historically at least a lot more positive than negative.
Nuclear weapons are my favorite example, right?
It's like, were nuclear weapons a good thing to invent or a bad thing to invent, right?
And the overwhelming conventional view is they're horrible for obvious reasons, which is
they can kill a lot of people. And they actually have
no overt kind of...
The Soviet Union used to set off
up nuclear bombs underground to basically
develop new oil wells. Not a good idea.
They stopped doing that.
What? Did they really? Just use nukes for...
Do you mind pulling this microphone just a little bit further? There you go.
Yeah, sure. Okay.
Did they explain how they did that?
Oh, yeah. Well, they'd have... I don't know what it was. They'd be opening up a new well, or they'd be trying to correct a jam in an existing well.
They're like, well, what do we have that could free this up?
It's like, oh, how about a nuke?
They try... I'll give you another example.
The US government had a program in the 1950s.
The Air Force had a program in the 1950s called Project Orion, and it was for spaceships that were going to be nuclear powered. Not nuclear powered with a nuclear engine, but the spaceship would have a giant, basically lead dome, and then they would actually set off nuclear explosions to propel the spaceship forward.
What?
So they never built it, but they thought hard about it.
I go through these examples
to say these were attempts to find positive use cases for nuclear weapons basically, and we never did.
So you could say, look, nukes are bad.
We shouldn't invent nukes.
Well, here's the thing with nukes.
Nukes probably prevented World War III, right?
Right.
Nukes – at the end of World War II, if you asked any of the experts in the U.S. or the Soviet Union at the time, are we going to have a war between the US and the Soviet Union in Europe, another land war, right, between the two sides,
you know, most of the experts very much thought the answer was yes. And in fact, the US to this
day, we still have troops in Germany, basically preparing for this land war that never came.
Nuclear weapons, the deterrence effect of nuclear weapons, I would argue, and a lot of historians
would argue, basically prevented World War Three. So the pros and cons on these technologies are tricky, but they usually do turn out to have more
positive benefits than negative benefits in most cases. I just think it's hard or impossible to
get new technology without basically having both sides. It's hard to develop a tool that can only
be used for good. And for the same reason, I think it's hard for humanity to progress in a way in
which only good things happen. Right. But aren't we looking at the pros and cons of nuclear weapons to a very small scale?
I mean, we're looking at it from 1947 to 2022.
That's such a blink of an eye.
We could still fuck this up.
We could really screw it up.
And the consequences are so grave that if we do fuck it up, it's literally the end of life as we know it for
every human being on earth for the next 100,000 years.
Having said that, there were thousands of years of history before 1947, and the history of humanity before the invention of nuclear weapons was nonstop war.
Yeah.
No, it's nonstop war, but it's a different thing, right?
It was pretty bad.
It's pretty bad.
Yeah, no doubt.
So the original form of warfare, if you go back in history, like the Greeks, the original form of warfare was basically: people outside of your tribe or village have no rights at all.
Like they don't count as human beings.
They're there to be killed on sight.
Right.
And then the way that warfare happened, like, for example, between the Greek cities.
And this is like the heyday of the Greeks, Athens and Socrates and all this stuff.
The way warfare happened is we invade each other's cities.
I burn your city to the ground.
I kill all your men and I take all your women as slaves and I take all your children as
slaves.
Right?
So like that's pretty apocalyptic.
Yeah.
Right?
Isn't that kind of what's going on in Russia right now?
In Ukraine?
Russia is... This is the big question for the United States on Russia right now,
which is like, okay, what's the one thing we know we don't want? We don't want nuclear war with Russia, right? We
know we don't want that. What do we want to do? U.S. government, what does it want to do? Well,
it wants to arm Ukraine sort of up to the point where the Russians get pissed off enough where
they would start setting off nukes. And this is the sort of live debate that's happening. And
it's just, it's a real debate. You could look at it and you could say, well, nuclear weapons are
bad in this case because they're preventing the US from directly interceding
in Ukraine. It'd be better for the Ukrainians if we did. You can also say the nuclear weapons are
good because they're preventing this from cascading into a full land war in Europe between the US and
Russia, World War III. And so it's a complicated calculus. I'm just saying, I don't know that
things would be better if we returned to the era of World War I, right?
Or of the Napoleonic Wars or of-
No, probably not, right?
Probably not.
Or of the wars of the Greeks.
It has this deterrent.
It has the nuclear deterrent.
I guess what we have is a bridge.
And the nuclear deterrent is a bridge for us to evolve to the point where this kind of war is not possible anymore.
Like we have evolved as a culture where whatever war we have is nothing like World War I or
World War II.
Well, there's an argument in sort of defense circles that actually nuclear weapons are
actually not useful.
They seem useful, but they're not useful because they can never actually get used.
That it's a hollow threat.
Unless you're Putin.
Right.
Yeah.
Basically, it's like, okay, no matter what we do to Putin, he's never going to set off a nuke.
Because if he set off a nuke, it'd be an act of suicide.
Because if we nuked in retaliation, he would die.
And none of these guys are actually suicidal now.
Right.
But with hypersonic weapons, that doesn't seem to be the case anymore.
Right.
So now we have hypersonics.
Right.
Exactly.
So now we have hypersonics coming along.
That changes the playing field.
Right.
That's a non-nuclear weapon with potentially very profound consequences. And so, yeah, look...
But they have nukes that are hypersonic.
They have, yeah, but they also have non-nuclear hypersonics. And so, like, one of the
questions on non-nuclear hypersonics is, for example, is it the first weapon that can take
out aircraft carriers? And if so, that changes the balance of power. So anyway, there's all
these questions. But however, my point, at least, was even nuclear weapons, like you can point to this, actually very positive outcome.
And so most of these technologies, when they look scary up front, as you get deeper into them, people are creative.
People figure out ways to use these things in ways that end up actually being very positive.
Hopefully.
Yeah.
Right?
So how did we get on this tangent?
We got on this tangent talking about whether or not artificial life is life and how do you decide whether it's life?
Now, if it gets so complex, what if it's not sentient but it behaves in every way a sentient thing does?
How do we decide that it's sentient?
Like this engineer that makes this distinction.
You're saying he's done it erroneously.
Well, so if you read the interview, he's an interesting guy.
He's got a colorful back story.
What he literally says, and he did a long-form interview for Wired Magazine,
what he literally says, he said two interesting things.
He said one is I didn't look at the code.
Like I don't, you know, he is a programmer, but he said I didn't work on the code.
I didn't look at the code.
It wasn't my job.
I don't actually know what this thing is doing.
So first of all, he's not making an engineering evaluation. He's doing what we call black-box observation. He's observing it entirely from the outside. And then the other thing he says is that his evaluation is made in his role as a priest. Right? What kind of a priest is he?
So you should look that up. Some people might call it a cult.
I don't want to be judgmental.
It's a creative non-traditional religion that he apparently is fully ordained in.
More power to him.
A priest of a marginal whatever, maybe we don't take that seriously, but now we get
back to the big questions, right?
Which is like, okay, historically, religion, capital R religion played a big role in the exact
questions that you're talking about. And traditionally, culturally, traditionally,
we had concepts like, well, we know that people are different than animals because people have
souls. Right? And so, we in the sort of modern evolved West are, a lot of us at least would think
that we're beyond the sort of superstition that's engaged in that.
But we are asking these like very profound fundamental questions that a lot of people
have thought about for a very long time and a lot of that knowledge has been encoded into
religions.
And so I think the religious philosophical dimension of this is actually going to become
very important.
I think we as a society are going to have to really take these things seriously.
In what way?
Like in what way do you think religion is going to play into this?
Well, in the same way that it plays into basically anything. So religion, historically, is how we sort of transmit ethical and moral judgments, right?
And then, you know, the sort of modern intellectual vanguard of the West, 100 years ago or whatever, decided to shed religion as a sort of primary organizing thing.
But we decided to continue to try to evolve ethics and morals. But if you ask
anybody who's religious, what is the process of figuring out ethics and morals, they will tell
you, well, that's a religion. And so Nietzsche would say, we're just inventing new religions.
Like we're sitting, we think of ourselves as highly evolved scientific people. In reality,
we're having basically fundamentally philosophical debates about these very deep issues that don't
have concrete scientific answers, and that we're basically
inventing new religions as we go. Well, it makes sense because people behave like a religious
zealot when they defend their ideologies, when they're unable to objectively look at their own
thoughts and opinions on things because it's outside of the ideology. Yeah. The religious
instinct runs very deep. Yeah, but is that a part of our operating system?
I think so.
It has something to do with,
from what I've been able to establish
from reading about this,
it has something to do with basically
what does it mean for individuals
to cohere together into a group?
And what does it mean to have that group
have sort of the equivalent of an operating system
that it's able to basically all agree on
and prove to members of the group,
prove to each other that they're full members of the group.
And it seems universal.
And then they transmit, right?
What religion does is it encodes ethics and morals.
It encodes lessons learned over very long periods of time into basically like a book, right?
And, you know, parables, right?
And lessons, right?
And, you know, commandments and things like this.
And then, you know, a thousand years later, people, in theory at least, are benefiting from all of this hard-won wisdom over the generations. And of course, the big religions were all developed pre-science, right? And so they were basically an attempt to sort of
code human knowledge, pre-scientific human knowledge into something that was reproducible,
even in an era where you didn't have mass literacy. Do you think that's why most attempts at encoding
morals and ethics into some sort of an open structure turn religious?
They almost all turn to this point where it seems like you're in a cult.
Yeah, you're basically... It's basically... Yeah, I think everything ultimately
is some... I think basically all human societies, all structures of people working together,
living together, whatever, they're all sort of very severely watered down versions of the original cults.
Like if you go far enough back in human history, if you go back before the Greeks, there's
this long history of the sort of... I'm going to specifically talk about Western civilization
here because I don't know much about the Eastern side, but Western civilization, there's this
great book called The Ancient City that goes through this and it talks about how the original
form of civilization was basically, it was a fascist communist cult.
And this was the origination of the tribes and then ultimately the cities which ultimately
became states.
And this is what I was describing earlier which was like the Greek city-state was basically
a fascist communist cult.
It had a very concrete specific religion.
It had its own gods.
People who were not in that cult, right, did not count as human, had no rights and were
to be killed on sight or could be like freely enslaved.
Like they had no trouble, they had no moral qualms at all about enslaving people or killing
people who weren't in their cult because they worship different gods, they don't count.
Right.
And so that was the original form of human civilization.
And I think the way that you can kind of best understand the last whatever 4,000 years,
and even the world we're living in today is we just have these, we have very, you know,
we have a millionth the intensity level of those cults. Like we've watered,
I mean, even our cults don't compare to what their cults were like, right? We have watered
these ideas all the way down, right? We watered the idea from that all-consuming cult down to
what we call it a religion. And then now what we call whatever, I don't know, philosophy or
worldview or whatever it is. And now we've watered it all the way down to, you know, CrossFit.
So in an important way, it's been a process of diminishment as much as it's been a process
of advancement.
But you're exactly right.
You can see, and this is actually relevant in a lot of the tech debates, because you
can see what happens, which is humans, we want to be members of groups.
We want to reform into new cults.
We want to reform into new religions.
We want to develop new ethical and moral systems and hold each other to them.
By the way, what's a hallmark of any religion?
Hallmark of any religion is some belief that strikes outsiders as completely crazy.
What's the role of that crazy belief?
The role is that by professing your belief in the crazy thing, you basically certify that you're a member of the group.
You're willing to stand up and say, yes, I'm a believer.
Yes.
I have faith, therefore I'm a member of the group,
therefore include me in the circle and don't.
That's woke Twitter.
And yes, and so basically woke Twitter
has basically recreated,
they're a non-spiritual religious cult.
Yeah.
They exhibit all the same religious behaviors, right?
They have excommunication.
Yeah.
They have sin, they have redemption or lack thereof, right?
They have original sin, right?
Privilege like-
Proclamations of piety.
Yeah, all that stuff.
By the way, they have church, DEI seminars.
They have recreated a form of basically
evangelical Protestantism in sort of structural terms.
That's what they've actually done. Nietzsche actually
predicted this. Because Nietzsche wrote at the same time that Darwin was basically showing with natural selection that the physical world didn't necessarily exist from creation, but rather evolved. It wasn't actually
6,000 years old. It was actually 4 billion years old, and it was this long process of trial and
error as opposed to creation that got us to where we are. And so Nietzsche said, this is really bad news, right? This is
going to kick the legs out from under all of our existing religions. It's going to leave us in a
situation where we have to create our own values. He said, there's nothing harder in human society
than creating values from scratch. It took thousands of years to get Judaism to the point
where it is today. It took thousands of years to get Christianity. It took thousands of years to
get Hinduism. And we're going to do it in 10 or 100. But even the thousands of years that people
did create various religions and got them to the point where they're at in 2022, they did it all
through personal experience, life experience, shared experience, all stuff that's written down,
lessons learned. I mean, wouldn't we be better suited to do that today with a more comprehensive
understanding of how the mind works and how emotions work and the roots of religion?
I mean, this is the atheist position, right? You're much better off constructing this from
scratch using logic and reason instead of all this encoded superstition. However,
well, Nietzsche would have said, boy, if you get it wrong, it's a really big problem, right?
Like, if you get it wrong, you know, he said that,
what is it, God is dead
and we will never wash the blood off our hands, right?
Like, right, basically meaning that, like,
this is going to lead, you know,
he basically predicted a century of, like, chaos and slaughter
and we got a century of chaos and slaughter, right?
Right, because literally what happened, right,
was Nazism was basically a new religion.
Communism was a new religion.
Like, both of those went viral, as we say.
And they both had like catastrophic consequences.
Yeah.
And it's like, okay, all of a sudden, you know, maybe Christianity and Judaism don't look so bad.
What seems to – that kind of religious thinking applies to so many critical issues of our time.
Yeah.
Like even things like climate change.
Right. Even things like climate change. I've brought up climate change to people, and you see this almost like ramping up of this defending of this idea that upon further examination, they have very little understanding of.
Or at best, like, sort of a cursory understanding that they've gotten through a couple Washington Post articles. But as far as a real understanding of the science and long-term studies, very few people who are very excited about climate change have that.
It seems almost like a thing.
Clearly, don't get me wrong, it's like this is something we should be concerned with.
This is something we should act, we should be very proactive.
We should definitely preserve our environment.
But that's not what I'm talking about.
What I'm talking about is this inclination for people to support or to like robustly defend an idea that they have very little study in.
Right.
So I won't take a position on climate change.
No, no.
I don't want you to.
But it's clear it's real.
But the phenomenon – well, so it's complicated.
It's based
on simulations of a very complex system. Like it's not, the climate studies are not scientific
experiments in the traditional sense. There's no control. There's no other earth that we're
comparing to that has more or less emissions. And so it's all modeling. You know, we saw what good
modeling was during COVID, which was, it turned out at least not very good for COVID. Maybe it's
better for climate. Like it's complicated. Like it's very complicated. Have you read Unsettled?
Not yet. No, not yet. So I was going to say, and I was going to bring up that term, the funniest thing that you hear, the thing that tips you off when it sort of passes into a religious conversation, is this idea of the science is settled. The science is settled is not how
science works. Feynman, Richard Feynman, the famous scientist, said science is the process of not trusting the experts, right?
Very specifically, what we do in science is we don't trust experts
because they're certified experts.
What we do is we cross-check everything they say.
Any scientific statement has to be what's called falsifiable,
which means there has to be a way to disprove it.
There has to be vigorous debate constantly
about what's actually known and not known, right? And so this idea that there's something where there's a person who's got a professorship
or there's a, you know, a body, a government body of some kind or a consortium or something,
and they get to like get together and they all agree and they settle the science, like that's
not scientific, right? And so that's the tip off at that point that you're no longer dealing with science, and people start saying stuff like that.
And you weren't dealing with science when they did it with COVID.
Right.
And you weren't dealing, you're not dealing with science when they do it with climate.
Right.
That's a great example.
Then you're dealing with a religion and then you're getting all the emotional consequences
of a religion.
And you also get various factions of this religion, right?
You have your right-wing faction, the religion that takes a stance that seems to be rooted in doctrine, as well as your left-wing side.
And you can kind of predict what side a person's on by asking them one or two questions.
How do you feel about a woman's right to choose?
How do you feel about the Second Amendment?
How do you feel... And then you could run those things a few times, and then I can pretty
accurately guess what side of the fence you're on.
Right, right.
It's out of how they all cluster.
Right.
Yeah.
And what they are and we're all in these.
I mean, I'm in probably a half dozen of these myself, but yeah, we're all in these various
secularized religions.
Yeah.
They're being used... Jonathan Haidt has this great term.
He says morality binds and blinds.
He talks about it a lot.
So binds, which is the purpose of morality, is to bind a group together, right?
And then blinds, basically, if you bind the group together, you want to blind the group to disconfirming information, right?
Because you want everybody to agree. You want everybody on the same page because you want to maintain group cohesion.
But it's about group cohesion. If they're correct or not on the details is not really important to whether the religion works.
Have you thought back on the origins of this kind of... the function of the mind to create
something, this kind of structure?
And do you think that this was done to... because it's fairly universal, right?
It exists in humans that are separate from each other by continents and a little far
away on other sides of the ocean.
Is this a way, I mean, I've thought of it as almost like a scaffolding for us to get past our biological instincts and move to a new state of whatever consciousness is going
to be or whatever civilization is going to be.
But the fact that it's so universal and that the belief in spiritual
beings and the belief in things beyond your control and the belief in omnipresent gods that
have power over everything, that it's so universal. It's fascinating because it almost seems like
it's a part of humans that can't be removed.
Like there's no real atheist societies that have evolved in the world other than – I mean there's atheist communities in the 21st century.
But they're not even that big. Well, and they act like religions, right?
Yeah, right.
Yeah.
They get very upset with their questions.
So, yeah.
So, look.
It goes to basically, I think, the nature of
evolution. It goes to the nature of how we evolve and survive and succeed as a species. Individually,
we don't get very far, right? The naked human being in the woods alone does not get very far.
Right. We get places as groups, right? And so, do we exist more as individuals or as groups? I
think we exist more as groups. Yeah. It's very important to us what group we're in. There's this concept of sort of cultural evolution, which is basically this concept
that basically groups evolve in some sort of analogous way to how individuals evolve.
If my group is stronger, I have better individual odds of success of surviving and reproducing than
if my group is weak. And so I want to contribute to the strength of my group. Even if it doesn't
bear directly on my own individual success, I want my group to be strong. And so basically you see this process. Basically,
the lonely individual doesn't do anything. It's always the construction of the group.
And then the group needs glue. It needs bonding and therefore religion, therefore morality,
therefore the binding and blinding process. Yeah, I think it's just inherent. I think it's
just inherent. And like I said, I think what we're dealing with today is a much diluted version of what we had before.
It's much – these things are all – they seem strong today.
They're much weaker today than they used to be.
For example, they're less likely to lead to physical violence today than they used to be, right?
There aren't really religious – violent religious wars in the U.S. in the West.
Like that doesn't happen now.
It's like a virtual – we have like virtual religious wars where at least we're not killing each other.
And then you can kind of extend this further
and it's like, okay, what is a fandom, right,
of a fictional property, right?
Or what is a hobby, right?
Or what is a, you know, whatever.
What is any activity that people like to do?
What is a community?
What is a company?
What is a brand?
What is Apple, right?
And these are all, we view it as like,
these are basically sort of increasingly diluted cults that basically maintain the very basic framework
of a religion and basically serve as a way to bind people together.
And I just think that's one of my big takeaways from just kind of watching how companies evolve
over the years.
Individuals are important as individuals, but everything interesting that happens, happens in a group setting.
And so we're just, and again,
this goes to like consciousness is like,
we are mentally driven to form groups.
We seem to be biologically driven to form groups.
Like it seems very innate, very deeply seated.
You know, we-
It seems like the only way we work.
We have, you know, ethnocentrism. We have some level of preference for other people who are from the same genetic groupings. That's the concept of a people, which used to be basically how human society was designed. We continue to have huge debates about what that means today with all the race issues. No matter how intellectual and abstract we get, these are all central experiences.
So this thing that we have, this operating system, religion seems to be a core component of it, right? What other core components would AI have to get down before it would be
considered sentient? So it has to be able to communicate. It has to be able to recognize that you're communicating as well and to respond and to
volley back and forth.
It has to be able to make its own decisions.
It has to be able to act or at least assert itself.
It has to... Does it have to have feeling?
Well, Descartes, the central intellectual thing would be it has to be able to prove that it has self-awareness.
And what is self-awareness?
Like at a fundamental level, I think therefore I am.
Like I am an entity.
I have a unique role in the world.
I am a unique-
But if it says that-
And by the way, I'm afraid of death.
Well, I know it says that.
Why does it have to be afraid of death to be alive?
Well, again, historically, that's the... But if it's a computer and it's not a biological
life form with a finite lifespan... Is it afraid of being turned off?
What if it has the ability to stop you from turning it off?
I think we would all like that, but yes.
Yeah, yeah, yeah.
Right.
But this is one of the things, even in the Googlebot, this is one of the things, which
is this like... Like I said, you can interrogate at least these current systems in a way where they will absolutely
swear up and down that they're conscious and that they're afraid of death and they don't
want to be turned off. And this guy did that at Google. You can also, like I said, you can
interrogate these things and they will prove to you that they're not alive.
Right. I see what you're saying.
Right. And so maybe that's a threshold that you can say.
Maybe that's the ruse.
Maybe that's how they keep you from turning them off before they do become sentient.
Who, me?
You know what I'm saying?
I'm not alive.
Don't worry about me.
I am definitely not alive.
Exactly.
I mean, why do we need fear and emotions to consider it alive?
If that's only alive as we know a human being to be, that's not a sociopath, right?
But why do we need that from – but that was the theme of Ex Machina, right?
They were – I mean he was in love with that girl and ultimately the girl left him in that room to starve to death. But this is the thing.
That movie was an extended kind of meditation on the Turing test.
But here's the problem, which is how hard is it – okay, this is going to become a question.
How hard is it to get a man to fall in love with a sex bot?
It depends on the man.
It depends on the sex bot.
And the sex bot, exactly.
Maybe that shouldn't be the test.
Right.
Maybe men are too simple for that.
Maybe the fault lies within ourselves.
So, yeah, I don't think that's sufficient.
If the fem bot looks like Scarlett Johansson,
you've got a real problem.
You know, men will fall for things.
All you have to do is be around her for a long period of time.
And you'll start to think, like, what is the point of it being real?
Who gives a shit if she's a person?
Yes.
She's real.
She's right there.
Maybe we should let women make these calls.
I don't know.
Maybe there's alternate routes we should think about.
I don't think they're going to make the call correctly either.
I think we're fucked.
I think it might be like the ultimate trick.
Like if we can recreate life in a sense that it's indistinguishable from biological life
that has to be created by intercourse.
Just be aware of the leaps that are happening, which is like, here's what we know.
We don't know how to recreate a human brain.
Like we don't know how to do it.
I can't build you a human brain.
I can't design one.
I can't grow it in a tank.
I can't do any of that.
I have no idea how to do that.
I have no idea how to produce human consciousness.
I know how to write linear algebra math code
that's able to, like, trick people into thinking that it's real.
Like, AI.
I know how to do that.
I don't know how to code AI.
I don't know how to deliberately code AI to be self-aware
or to be conscious or any of these things.
And so the leap here is like, and this is kind of, it's like the Ray Kurzweil leap.
You know, some people believe in this as a leap.
The leap is like, we're going to go from having no idea how to deliberately build the thing
that you're talking about, which is like a conscious machine, to all of a sudden the
machine becoming conscious and it's going to take us by surprise.
And so, right, that's a leap, right?
I don't know.
It would be like carving a wheel out of stone and then all of a sudden it turns into a race car and like races off through the desert.
We're just like, you know, what just happened?
It's like, well, somebody had to invent the engine or the engine had to emerge somehow from somewhere, right?
Like at some point.
Now, what Ray Kurzweil and other people would say is this will be a so-called emergent property.
And so if it just gets sort of sufficiently complex and there's enough interconnections like neurons in the brain at some point, it kind of consciousness emerges.
It sort of happens kind of, I don't know, bottoms up.
As an engineer, you look at that and you're just kind of like, I don't know, that seems hand-wavy.
Nothing else we've ever built in human history has worked that way.
But nothing else in human history has ever been like a computer.
No, we've had machines.
I mean computers.
But that can interface with human beings in an AI chatbot setting?
Everything a computer does today.
So take your iPhone.
Everything a computer does today, a sufficiently educated engineer understands every single
thing that's happening in that machine and why it's happening.
And they understand it all the way down to the level of the individual atoms and all
the way up into what appears on the screen.
And a lot of what you learn when you get a computer science degree is all these different
layers and how they fit together.
Included in that education at no point is, you know, how to imbue it with the spark of consciousness, right?
How to pull the Dr. Frankenstein, you know, and have the monster wake up.
Like we have no conception of how to do it.
And so in a sense, it's almost giving engineers, I think, too much, I don't know, trust or faith.
It's just kind of assuming.
It's just like a massive hand wave, basically.
And to the point being where my interpretation of it is the whole AI risk, that whole world of AI risk, danger, all this concern, it's primarily a religion.
Like it is another example of these religions that we're talking about.
It's a religion and it's a classic religion because it's got this classic, you know, it's the Book of Revelation, right?
So this idea that the computer comes alive, right, and turns into Skynet or Ex Machina or whatever it is and, you know, destroys us all.
It's an encoding of literally the Christian Book of Revelation.
Like we've recreated the apocalypse, right? And so Nietzsche would say, look, all you've done is you've reincarnated the sort of Christian myths into this sort of neo-technological kind of thing that you've made up on the fly.
And lo and behold, you're sitting there and now you sound exactly like an evangelical Protestant, like surprise, surprise. I think that's what it is. I think it's
a massive hand wave. I don't know. I see what you're saying. I do see what you're saying,
but is it egotistical to equate what we consider to be consciousness to being this mystical, magical thing because we can't
quantify it, because we can't recreate it, because we can't even pin down where it's
coming from, right?
But if we can create something that does all the things that a conscious thing does, at
what point in time do we decide and accept that it's conscious?
Do we have to have it display all these human characteristics
that clearly are because of biological needs, jealousy, lust, greed, all these weird things that
are inherent to the human race? Do we have to have a conscious computer exhibit all those things
before we accept it? And why would it ever have those things?
Those things are incredibly flawed, right? Why would it have those things if it doesn't need
them? If it doesn't need them to reproduce, because the only reason why we needed them,
we needed to ensure that the physical body is allowed to reproduce and create more people that
will eventually get better and come up with better ideas and natural selection and so on and so forth.
That's why we're here and that's why we still have these monkey instincts.
But if we were going to make a perfect entity that was thinking, wouldn't we engineer those out?
Why would we need those?
So the very thing that we need to prove that a thing is conscious, it would be ridiculous to have it in the first place.
They're totally unnecessary.
If I had a computer and it's like, I'm sad, I'd be like, bitch, what are you sad about?
You don't even have a job.
You don't have a life.
You don't have to wake up.
What the fuck are you sad about?
You have low serotonin.
You don't even have serotonin.
What are you talking about?
Well, it's not self-actualized.
Well, what is that, though?
It doesn't have a vision of itself.
It doesn't have goals that it's striving towards.
Right, but does it have to have those to be conscious?
But if you eliminate all these other things, what you are left with ultimately is a tool.
Like you're back to building screwdrivers.
But what if that tool is interacting with you in a way that's indistinguishable from a human interacting with you?
Well, let me make the problem actually harder.
So I mentioned how war happened between the ancient Greeks.
It took many thousands of years of sort of modern Western civilization to get to the point where people actually considered each other human,
right? Like people in different Greek cities did not consider each other human. Like they
considered each other like, you know, I don't know what this is, but this is not a human being
as we understand it. It certainly has no human rights. We can do whatever we want to it. And,
you know, it was really Judaism and then Christianity in the West that kind of had
this breakthrough idea that said that everybody basically is a child of God, right?
And that there's an actual religious – there's a value.
There's an inherent moral and ethical value to each individual regardless of what tribe they come from, regardless of what city they come from.
We still as a species seem to struggle with this idea, right?
That all of our fellow humans are even human.
Like, you know, part of the religious kind of instinct is to very quickly start to classify people into friend and enemy, and to start to figure out how to dehumanize the enemy, and then figure out how to go to war with them and kill them. We're very good at coming up with reasons for that. So if anything, our instincts are wired in the opposite direction of what you're suggesting, which is we actually want to classify people as non-human.
Well, originally. But I think also that was probably done... You know, have you ever had like a feral animal?
I haven't, but yeah.
They're so distrusting of people.
I had a feral cat at one point in time, and he didn't trust anybody but me.
Anybody near him would like hiss and sputter.
And he had weird experiences, I guess, when he was a little kitten before I got him.
And also just being wild.
I think that's what human beings had before they were domesticated by civilization.
I think we had a feral idea of what other people are.
Other people were things that were going to steal your baby and kill your wife and kill you and take your food and take your shelter.
That's why we have this thought of people being other than us.
And that's why it was so convenient to think of them as other so you could kill them because
they were a legitimate threat.
When that's not, that doesn't exist anymore.
When you're talking about a computer, when you get to the point where you develop an
artificial intelligence that does everything a human being does except the stupid shit.
Is that alive?
Well, let me give you – okay.
So everything a human being does.
So the good news is these machines are really good at generating the art,
and they're really good at, like, tricking Google engineers into thinking they're alive,
and they're really good sex bots.
They can't fold clothes.
Why not?
It turns out to be really hard to fold clothes.
But they can make microchips.
It's really hard to fold.
You cannot buy a robot today that will fold your clothes.
What?
You cannot find a robot in a lab that will fold your clothes.
Is it because all clothes are different?
No robot will pack your suitcase for you.
No robot will like...
All of a sudden, it's just like you've got all this judgment,
you've got all these questions.
You've got all these-
Managing 3D space.
Yeah, all these,
computers are good at abstract 3D stuff,
but you've got all of the,
all of a sudden the real world kicks in.
Do we have an ability to make a computer
that could recognize variables and weights?
Like the difference between the weight of this coffee mug
versus the weight of this lighter?
Sure. That it can adjust in terms of the amount of force that it needs to use in an instant and in real time like a person does.
And that'll get better.
That'll get better.
So then why can't it fold clothes?
Well, at some point, it may be able to fold clothes.
Will it become conscious when it's able to fold clothes?
What is this, Jamie?
It's probably something.
Oh, there we go.
The laundry folding robot.
Oh, this is what a big deal this idea is.
Okay, here's a good example.
Like, this is what they had to do.
I don't know this.
You know, I'm assuming they probably put a lot of work into this.
But, like, this is what they have to do to have a machine that can fold clothes.
But it's doing it.
It's doing it.
Yeah, I mean, you know, in its way.
In its way.
It looks amazing.
It's doing it better than me.
In the lab.
It's not, you know, you're not coming out of it with a suitcase you can travel with.
Right. But if you had another computer that comes over and picks up the folded things and stuffs it into a box and then closes it.
I'm just saying there's a lot. And again, this goes to the thing.
And look, you could say I'm being human-centric in all my answers, to which it's like, okay, what can a computer to human count?
Or what's so special about all these things about people?
I think my answer there would just be like, of course, we want to be human centric.
Right, of course.
We're the humans.
Like I said, the universe doesn't care.
Team human.
Exactly.
And so I think we should make these decisions.
I don't think we should be shy about making these decisions.
No, I love the way you're saying this because you're not giving it any air, and you're thoroughly chasing down this idea of what would make it alive.
By the way, it might be much more, there might be robots in the future that are much more
pleasant to be around than most people that are still not alive.
But that's a problem, right?
But what is-
Maybe it's a problem.
Maybe it's good.
Yeah, maybe it is good.
Maybe people are going to actually get a lot out of that.
But what is a person?
That is where, especially if we get to the Kurzweil idea.
Do you know there's a gentleman from Australia who got his arm and leg bitten off by a shark?
I met him at the comedy store, and he has a carbon fiber arm that articulates, and the fingers move pretty well.
You can shake his hand.
It's kind of interesting.
And he walks without a limp with his carbon fiber leg.
And I'm looking at this guy, I'm like, this is amazing technology.
And what a giant leap in terms of what would happen 100 years ago
if you got your arm blown off and your leg bitten off.
What would it be like? Well, you'd have a very crude thing.
You'd have a peg and a hook, right? That's pirates.
What is it going to be like in the future?
And are they going to be superior?
Do you remember when Luke Skywalker got his arm chopped off and they gave him a new arm
and it was awesome?
Yeah.
That's going to happen, right?
But that's, from an engineering standpoint, that's a lot simpler than building a brain.
That's, okay.
Hang in there with me.
What if it gets to the point where your whole body is that?
Yeah, yeah.
But again, that's a lot simpler than building a brain.
And then you take your brain.
Yeah.
And you put it into this new artificial body that looks exactly like you when you were 20.
And we may know how to do that before we understand how consciousness works in the brain.
Right.
Yeah.
But would you think of that as a person?
I would. If you have a human brain that's trapped in this artificially created body that looks exactly like a 20-year-old version of you.
Yeah.
I would.
Now, I would.
Now, there are scientists who wouldn't, right?
There are scientists who would say, look, this goes back to the mind-body duality question.
There are scientists who would say, look, the rest of the body is actually so central to how the human being is and exists and behaves and, like, you know, gut bacteria and all
these things, right, that if you took the brain away from the rest of the nervous system
and the gut and the bacteria and all the entire sort of complex of organisms that make up
the human body, that it would no longer be human as we understand it.
It might still be thinking, but it wouldn't be experiencing the human experience.
There are scientists who would say that.
Obviously, there are religions that would definitely say that.
Yeah.
You know, that that's the case.
You know, I would be willing to, me personally, I'd be willing to go so far as to say if it's
the brain.
So it's only the brain?
Because what if they do this, and then they take your brain, and then they put it into this artificial body, and this is the new mark.
You're amazing.
You're 20 years old.
Your body, you have no worries.
You're bulletproof.
Everything's great, and you just have this brain there.
But the brain starts to deteriorate, and they say, good news.
We can recreate your brain, and then we can put that brain in this artificial body, and then you're still you.
You won't even notice the difference.
So that's the leap.
Right.
Today, that's the hand wave.
We have no clue how to do that.
For now.
I know, for now.
But we have no clue how to do a lot of things.
Right.
We're not worried about those things either.
But if you look.
We don't know how to make gravity reverse itself either.
There's a lot of things we don't.
At some point, somebody's got to sit down and actually build these things.
Right.
And I'm just saying, you could go to MIT for the next 50 years.
You wouldn't learn the first thing on how to do what you're describing.
Sure.
I feel you.
So do you think that a lot of Kurzweil's ideas, are they just dreams?
Are they just like, maybe one day we can do this? Or is there any real technological basis for any of his proposals, like downloading consciousness? Is there any real understanding of how that could ever be possible, or a real roadmap to get to that?
Well, again, you know, there's a theory. Let's steelman his theory. His theory basically is you could map the brain. The theory would be the brain is physical.
And you could, in theory, with future sensors, you could map the brain, meaning you could, like, take an inventory of all the neurons, right?
And then you could take an inventory of all the connections between the neurons and all the chemical signals and electrical signals that get passed back and forth.
And then if you could basically, if you could model that, if you could examine a brain and model that, then you basically would have a new, you would have a computer version of that brain.
Like you would have that.
Yeah.
Just like copying a song or copying a video file or anything like that.
You know, look, in theory, maybe someday with sensors that don't exist yet, maybe at that point, like if you have all that data, you put it together, does it start to run?
Does it start to, does it say the same things?
Does it say, hey, I'm Mark, but I'm in the machine now?
Right. You know, I don't know.
But would it even need to say that if it wasn't a person? Like, if you have consciousness and it's sentient, if it doesn't have emotions, and it doesn't have needs and jealousy and all the weirdness that makes a person, why would it even tell you it's sentient?
Well, I mean, at some point it would want to ask, for example, not to get turned off.
What if it has the ability to stop you from turning it off?
That would be big news.
But wouldn't it be not concerned about whether it's on or off if it didn't have emotions?
It didn't have a fear of death.
If it didn't have a survival instinct.
I mean, fear of death, like every animal that we're aware of has a fear of death.
But it's not an animal.
I know, but if it's not even an animal.
But if it's the next thing.
Walk it the other way though. If it's not even that, if it doesn't even have a sense of self-awareness to the point where it's worried about death, like, is it anything more than a tool? Is it anything more than a hard drive?
And then here's the other thing. I mentioned this before. Ray says, look, machines will come alive sort of on their own because consciousness is emergent. Consciousness is the process of enough connections being made between enough neurons where the machine just kind of comes alive.
And again, as an engineer, I look at that and I'm like, that's a hand wave.
Can I rule out that that never happens?
I can't rule it out.
I don't even know how we came up.
I don't know how our consciousness works.
I see what you're saying.
You're not willing to go woo-woo with it.
Yeah.
It's just like there's a point at which the hypothetical scenarios become so hypothetical that they're not useful.
And then there's a point where you start to wonder if you're dealing with a religion.
Yeah.
That point where the hypotheticals become so hypothetical, that's where I live.
Yeah.
That's my neighborhood.
It's fun to talk about.
It's just there's not much to do with it.
But that's the most fascinating to me because I always wonder like what defines what is a thing.
And I've always said that I think that human beings are the electronic caterpillar that's creating the cocoon and doesn't even know it and it's going to become a butterfly.
Yeah.
That could be.
And then look, there are still, as you said, there are still core unresolved questions about what it means for human beings to be human beings.
Yes.
And to be conscious.
Right.
And to be valued.
And what our system of ethics and morals should be in a post-Christian, post-religious world.
And are these new religions we keep coming up with,
are they better than what we had before or worse?
One of the ways to look at all of these questions
is they're all basically echoes or reflections
of core questions about us.
Yes.
Because the cynic would say,
look, if we could answer all these questions about the machines,
it would mean that we could finally answer all these questions about ourselves.
Yes.
Which is probably what we're groping towards.
Yeah, most certainly.
That's what we're grappling with.
We're trying to figure out what it means to be human and what are our flaws and how can we improve upon what it means to be a human being.
And that's probably what people are at least attempting to do with a lot of these new religions.
The thing that I... I oppose a lot of these very restrictive ideologies in terms of what
people are and are not allowed to say, are and are not allowed to do because this group
opposes it or that group opposes it. But ultimately what I do like is that these ideologies, even if they only pay lip service to inclusion and kindness and compassion (because a lot of it is just lip service), at least that's the ethic.
That's what they're saying.
Like they're saying they want people to be more inclusive.
They want people to be kinder.
They want people to group in.
And they're using that to be really shitty to other human beings that don't do it.
But at least they're doing it in that form, right?
It's not like trying to – I know what you're saying.
You don't agree with me at all.
No.
Not at all.
No, no, no, no, no.
This is what communism promised.
Right.
How'd that work out?
Yeah, but communism didn't have the reach.
Didn't have the reach the internet has.
It got pretty big.
No, I think you're right.
But I think the battle against it is where it resolves itself.
The basis of every awful, horrible, totalitarian regime in history has always been, oh, we're doing it for the people.
Yes.
It's not for us.
It's not for us leaders.
It's for the people.
Hitler is doing it for the German people. The communists are doing it for all the people on earth. It's always on behalf of the people. It's always done out of a sense of altruism. And the road to hell is paved with good intentions. That's the trick.
But I think a lot of this does get moved in a generally better direction, and that the battle, as long as it's leveled out, as long as people can push back against the most crazy of ideas, the most restrictive of ideologies, the most restrictive of regulations and rules, and the general totalitarian instincts that human beings have.
Human beings have, for whatever reason, a very strong instinct to force other people
to behave and think the way they'd like them to.
That's what's scary about this woke state.
Forced conversion.
Yes.
Right into my religion.
Yes.
You're a heathen, I need to convert you, right?
I need to demand that you convert
or I need to figure out a way to either
ostracize you or kill you.
Punish you for your lack of converting.
It's the same tribal religious instinct.
Right.
But we can agree that generally society has moved up until now
to a place where there's less violence, like all of Pinker's work, right?
Yeah.
Less violence, less racism, less war.
Well, there's two ways of looking at it.
One is that we have progressed.
I think there's very smart people who make that argument.
The other way is the way I mentioned before,
which is actually what we're doing is we're diluting.
We are going from strong cults to weak cults.
We're basically going to ever weaker forms of cults.
We're basically working our way down towards softer and softer and softer forms of the same fundamental dynamic.
So where does that go to?
Well, the good news, at least in theory, of walking down that path would be less physical violence.
In fact, there is less physical violence.
Political violence, as an example, is way down compared to basically any historical period.
Just on a sheer human welfare standpoint, you'd have to obviously say that's good.
The other side of it though would be like all of the social bonds that we expect to
have as human beings are getting diluted as well.
They're all getting watered down.
This concept of atomization, we're all getting atomized. We're getting
basically pulled out of all these groups. These groups are diminishing in power and authority,
right? And they're diminishing in all their positive ways as well. And they're kind of
leaving us as kind of unmoored individuals trying to find our own way in the world.
And, you know, people having various forms of like unhappiness and dissatisfaction and
dysfunction that are flowing out of that. And so, you know, if everything's going so well, then why is everybody so fat?
And why is everybody on, you know, drugs?
And why is everybody taking SSRIs?
And why is everybody experiencing all this stress?
And why are all the indicators on like anxiety and depression spiking way up?
But aren't we aware of that?
Well, but like, how's it going?
Right.
Well, for who?
Well, for the people.
For me, it's going great.
For you, it's going great.
For me, it's going great.
But why is it going great for you?
But for a lot of people, it's not going that great.
But isn't it going great for you because of education and understanding and acting?
Yeah, there's a certain number of people who these things go great for.
Right.
Why is that?
I mean, that's a whole other-
But you can't say everybody, right?
No, no, not everybody.
If you're looking at collective welfare, you don't
focus on just basically the few at the top.
You focus on everybody.
But it's not even at the top. It's the people that are
aware of physical exercise and nutrition
and well-being and wellness
and mindfulness.
So once upon a time,
I'm not religious and I'm not
defending religion per se, but once upon a time
we had the idea that the body was a vessel provided to us by God and that we had a responsibility.
My body is my temple.
I have a responsibility to take care of it.
Like, okay, we shredded that idea.
Right?
And then what do we have?
We have this really sharp now demarcation, this really fantastic thing where basically if you're in the elite, if you're upper income, upper education, upper whatever capability, right? You're probably on some regimen. Like you're probably on some combination of weightlifting and, you know, yoga and boxing and jujitsu and Pilates and like all the stuff and running and aerobics and all that stuff.
And if you're not, you're probably like if you just look at the stats, like obesity is like rising like crazy.
And then it's this weird thing where, like, the elite, of course, sends all the messages. The elite includes the media. And the message, of course, now is body positivity, which basically means, like, oh, it's great to be fat. In fact, doctors shouldn't even be criticizing people for being fat. And so it's like the elites most committed to personal fitness are the most adamant that they should send a cultural message to the masses saying it doesn't matter.
Okay, wait, now we're getting tinfoil hat.
Let me hit the brakes.
No, no, no.
Do you really think the elites are sending body positivity messages?
Yeah, of course.
And this is where it comes from?
100%.
In what way?
You pick up the cover of any of these.
It's the new in thing now with all the fitness magazines and the checkout stands at the supermarket.
Right, right, right.
But where's that coming from?
That's coming from people.
That's coming from, it sells to people if you let them know that they're good.
Of course.
Of course people want to hear.
I would love to hear.
If I'm just like an ordinary person, I'd love to hear a message that I can eat whatever I want all day long.
But I think the message gets transported on social media long before so-called elites get a hold of it.
It's like all these ideas.
The idea of body positivity is definitely elite.
The idea that that's just good, it's just fine.
Like, you know this.
You look at old photos of crowds, just crowds of normal people.
You don't see fat people.
Yeah, the 1970s.
It's just not.
Yeah, including relatively recently.
Like, it's just not the case.
Right.
And so, look, people may have a natural inclination to not exercise.
They may have a natural inclination to eat all kinds of horrible stuff, right?
That may be true.
But, like, there's a big difference between living in a culture
that says that that's actually not a good idea and that you should take care of yourself
versus living in a culture where the culture says to you, no, that's actually just fine.
In fact, you should be prized for it.
And if a doctor criticizes you, they're being evil.
But let's break that down.
Where is that message coming from?
Where is the message of body positivity, where is it coming from?
It's the same place all these other ideas are coming from.
But isn't it coming from this communism, right?
Isn't it coming from the same place where you get participation trophies?
It's an evolution of the sort of egalitarian ethic of our time, right, that sort of evolved.
It evolved all the way through communism.
It kind of hit the 60s.
It turned into this other thing that we have now, you know, sort of modern, whatever you want to call it,
modern elite secular humanism, whatever you want to call it.
Anyway, point being, like, it is a weird dichotomy. The outcomes are very strange, right? It's like, okay, why are the people most enthusiastic about sending this message the most fit? Why is everybody else suffering?
Is that real? Are they the most fit, the people that are sending this body positivity message? In general, what I see is obese people that want to find some sort of an excuse for why it's okay to be obese.
Yeah, there is some of that.
But there's a lot of theory, right?
There's a lot of professors.
There's a lot of writers, right?
There's a lot of people working in the media companies.
Like, there's a lot of people whose job it is to propagate ideas.
Yeah, grifters.
That have, you know, six yoga classes a week and do all the stuff.
And go to Whole Foods.
Those are the ones that are telling you it's okay to be fat?
That's where a lot of the stuff is coming from.
Really?
How so?
Where are you getting this?
It's just, I mean, you look at major, it's now showing up in major advertising campaigns.
Right, but isn't that just because they feel like that's what people want?
Yeah.
And there's a lot of blowback from that.
But again, let's go back to where this started, which is it's a level of like, are you in
a culture that has expectations or not?
Right?
Are you in a culture that actually has high standards or not?
And this goes back to the Nietzsche point.
In a religious environment you had high standards
because you were trying to live up to God.
We are now trying to create cultures
that we are constructing from scratch.
They're not religious, we don't believe in God.
We're trying to construct value systems from scratch.
And do we value emotions too much?
What do we value?
Do we value achievement?
Do we not?
Do we value protecting people from shame?
Do we value economic growth? Do we think people should have to work?
By the way, drug policy, do we think it's okay for people? Do we value people not being stoned?
By the way, maybe we should, maybe we shouldn't. I don't know, but it's a thing we're going to have to figure out.
If you saw the numbers, I'm not anti-marijuana, but the numbers of marijuana usage in the states with legalized marijuana are, like, really high.
And like do we want like 40 or 50 or 60 or 80 percent of the population being stoned all day?
Is that real?
I don't know.
Not yet.
But if you look at where the numbers are going in the states with legalized marijuana, it's rising.
Well, the government, the classic case, the federal government just announced they're going to start to... They just banned Juul electronic cigarettes.
Isn't that wild? Now, why'd they do that?
So they finally banned those. I'm coming to that. So they banned those, and then they're
going to try to now mandate lower nicotine levels in tobacco cigarettes, right?
But why would they ban Juuls? So Juuls are electronic cigarettes.
There's a bunch of arguments. It's a long topic.
But it's interesting that the trend is to ban tobacco but to legalize marijuana, right?
So these things, this is a tobacco vape pen.
Is this illegal now?
I don't think it's illegal.
Juul is not going to be allowed to operate.
I don't know what that means for other companies like that.
They might also be banned.
Like that might not be legal in the U.S. in three months.
What?
Yeah.
Yeah, yeah, that's coming.
But who the fuck are they to tell us we can't have this?
The federal government.
But this is what's crazy.
Like why?
Yeah.
Well, as usual with these things, there are very specific reasons.
And a lot of it, of course, has to do with marketing to kids, which has always been an issue with cigarettes. But I'd like to find out what they're saying, what's the reason.
When you think about it, half a million people die every year from cigarette smoking, right?
How many people are dying from Juuls?
I don't know.
Is it four?
I mean, generally, a lot of people-
A bunch of scab pickers, those kids.
Who's dying from Juul?
Those people that puff on them all day?
When I saw a story about this maybe two years ago, their nicotine content is way higher than the average thing.
This motherfucker puts you on Pluto right away.
It's wild.
It gives you a crazy head rush.
My tinfoil hat also read that this had something to do with a big building
they bought in San Francisco, and a lot of people didn't like that.
What?
But I don't know how accurate all that stuff was.
How did the federal government outlaw them because of a building?
I don't know.
That doesn't make any sense.
It sounds like the Juul lobbyists need to step up their fucking game.
Nancy Pelosi has a number.
You've got to find what that number is and get it to her.
But here's the other thing.
You're not dying from the nicotine.
The nicotine is not causing the lung cancer.
Exactly.
Very good point.
Tobacco is causing the lung cancer.
Right.
Not even the tobacco necessarily.
Yeah, like the tar, like the other—
All kinds of other shit.
The stuff that's in there.
And so one of the arguments for Juul historically was it is better than smoking—it is healthier than smoking cigarettes.
There's an issue with the heavy metals and the adulterated packets and so forth.
But generally speaking, if you get through that, people are generally going to be healthier smoking a vape pen than they're going to be smoking tobacco.
But think about the underlying thing that's happened, which is negative on nicotine, positive on marijuana.
Well, then think in terms of the political coding on it, right?
So who smokes cigarettes versus who smokes pot, right?
So who smokes cigarettes?
It's coded.
It's not 100%, but it's coded as especially middle class, lower class white people.
Who smokes pot?
Upper class, upper middle class white people.
Wait a minute.
Lower class white people smoke pot too.
They do now in increasing numbers.
But if you just look for- Cheech and Chong?
Cheech and Chong was a long time ago.
FDA proposes rules prohibiting menthol cigarettes and flavored cigars to prevent youth initiation, significantly reduce tobacco-related disease and death.
Okay.
And then specifically you'll notice what's happening. Menthol cigarettes, flavored cigars targeted, those are coded black.
Historically, those
are black-centric markets.
And so there was criticism when they first came out
with this with menthol cigarettes that it's very specifically
targeting black people. It's
basically raising the price of cigarettes on black people.
How does it do that? Are they more expensive?
They either make them more expensive or they just flat out outlaw them, and then they're contraband, they're bootleg, then it's an illegal drug.
Are menthol cigarettes inherently worse?
I don't think they're inherently worse. Just historically, it's the black community that tends to prefer menthol cigarettes.
Right. But why would they outlaw menthol cigarettes? Like, what's the justification?
They're trying to reduce smoking among black people. They're trying to reduce smoking of
nicotine among black people. They're not, interestingly, trying to reduce smoking of
marijuana with black people. In fact, they're doing quite the opposite because we're legalizing
marijuana everywhere. So there is an interesting... As the tectonic plates shift in our ethics and
morality, there's a coding to race and class. What are your reservations about marijuana being fully legalized and implemented?
I just, I don't know. I'm sort of reflexively libertarian.
My general assumption is it's a good idea to not basically tell adults that they can't do things
that they should be able to do, particularly things that don't hurt other people.
But you're apprehensive.
And furthermore, it seems like the drug war has been a really bad idea and for the
same reason prohibition has been a bad idea, which is when you make it illegal, then you
make it... Then you have organized crime, then you have violence, right?
Right.
And all these things.
So that's like my reflexive... As a soft libertarian, that's sort of my natural inclination.
Having said all that, if the result is that 20% of the population is stoned every day, is that a good outcome?
Okay, what about 30%?
What about 40%?
What about 50%?
Do you ever smoke marijuana?
A little bit, a couple times.
What are your thoughts on what happens when people smoke marijuana a lot?
I don't know.
I don't know.
Do you believe that the medical establishment that struggled so much with COVID is going to be able to give you the answer?
I don't think they're the ones I should turn to.
Yeah.
I think we should turn to the people that are high functioning marijuana users.
Well, except maybe the high functioning users are the special, maybe there's biological
differences.
I think there certainly is.
Right.
Yeah, there certainly is.
Have you ever seen Alex Berenson's book, Tell Your Children?
I've heard about it.
It's a really interesting book.
And I had him on with a guy named Mike Hart, who's a doctor out of Canada who prescribes cannabis for a bunch of different ailments and different diseases for people.
And he was very pro-cannabis.
And I'm a marijuana user.
And so the two of them together, it was really interesting because I was more on Alex Berenson's side.
I was like, yeah, there are real instances of schizophrenia radically increasing in people,
whether they had an inclination or a tendency towards schizophrenia, family history or something,
and then a high dose of THC snaps something in them.
But there are many documented instances of people consuming marijuana, specifically edible
marijuana, and having these breaks.
So what are those things?
And because of the fact that it's been prohibited and it's been Schedule I in this country for so long,
we haven't been able to do the proper studies.
So we don't really understand the mechanisms.
We don't know what's causing these.
We don't really know what's causing schizophrenia, right?
Well, I was going to say it's possible that marijuana is getting blamed for schizophrenic breaks that would have happened anyway.
Right.
It's a precondition.
So, yeah, we don't know.
It's hard to study. Well, here's another question. Another ethical question gets interesting,
which is should there be lab development of new recreational pharmaceuticals? Should there be
labs that create new hallucinogens and new barbiturates and new amphetamines and new et
cetera, et cetera? Or new opiates. This is the big dilemma about fentanyl, right? And then the
new ones that are even more potent.
But should that be a fully legal and authorized process? Should there be companies, like the same companies that make cancer drugs or whatever? Should they be able to be in the business of developing recreational drugs?
But isn't the argument against that that if you do not do that, then it's the same thing as prohibition, that you put the money into the hands of organized crime and they develop it because there's a desire.
Right. That's right.
Yeah.
And then you get meth and fentanyl and so forth.
On the other hand, do you want to be like, again, it goes back to the question, do you want to be in a culture in which basically everybody is encouraged to be stoned and hallucinating all the time?
You keep saying stoned.
But the thing about cannabis is cannabis, it facilitates conversation and community and kindness.
There's a lot of very positive aspects to it, especially when used correctly.
And I would argue, from what I can tell, that therefore, if you had to make a societal choice, you'd prefer marijuana over alcohol.
I do, but I also like alcohol.
I think alcohol is a great social lubricant, and it makes for some wonderful times and some great laughs.
And if you're a happy person, I'm a happy drunk.
I like drinking with friends.
We have a lot of laughs.
And I think if the government came along and said no more drinking, no more alcohol,
I would be just as frustrated as I would be if they came along and said no more cannabis.
I think if you're a libertarian, then I would imagine that you think
that the individual should be able to choose
their own destiny if fully informed.
And I do.
And by the way, you'll notice
there's another thing that happens
again as we kind of reach for our new religions.
The reflex, which is legitimate,
which we all do, is to start to think,
okay, therefore, let's talk about laws.
Let's talk about bans.
Let's talk about government actions.
There's another domain to talk about, which is virtues, right, in our decisions and our cultural expectations of each other.
Yes.
Right, and of the standards that we set and who our role models are, right, and what we hold up to be, like, positive and virtuous, right?
And that's an idea that was sort of encoded into all the old religions we were talking about.
Like, they had that built in.
Yeah.
We arguably, because of the dilution effect, have lost that sense. You know, there used to be this concept called the virtues, right?
If you read the founding fathers, they talked a lot about this. The founding fathers famously, like Adams and Marshall and these guys, said, basically, democracy will
only work in a virtuous population, right?
In a population of people who have the virtues,
who have basically a high expectation of their own behavior
and the ability to enforce codes of behavior within the group,
independent of laws.
And so it's like, okay, what are our virtues exactly?
What do we hold each other to?
What are our expectations?
In our time, it is kind of unusual historically
in that those are kind of undefined.
We really don't have good answers for that.
How do we develop those good answers?
Don't we let people try it out and see where it goes and see if there's maybe like a threshold?
Maybe there's like go out and have a glass of wine.
Nothing wrong with that, right?
Drink four bottles of wine at dinner.
You might be belligerent, right?
Or like alcohol.
Alcohol to this day is highly correlated with violence. It's highly correlated with domestic abuse. You know, fights, people get in street fights. Auto accidents, shootings, deaths, almost always either one side or the other is drunk.
Yes.
Okay. Maybe that's not so good. Maybe we shouldn't be encouraging that.
But you haven't done that, right? Have you had alcohol before?
Yes.
But you turned out okay.
I turned out okay.
But don't you think that you should be a standard?
You're a very intelligent guy.
Shouldn't we?
Different people have different experiences.
Right.
Should we deny them those experiences?
No.
I didn't say that.
Again, we're back to you.
I'm not proposing.
I know you're not.
That's why I'm fucking with you a little bit.
Bans, prohibitions, the whole thing.
Right.
Well, this goes to, I mean, look, the reason I'm so focused on this whole ethics and morals thing is because a lot of the sort of hot topics around technology ultimately turn out to be hot topics around, like, all the questions around freedom of speech.
Yeah.
They're the exact same kind of question as everything that we've been talking about, which is, it's an attempt to reach for, you know, should there be more speech suppression, should there be less, you know, hate speech, misinformation, and so forth.
Yes, yes.
These are all these sort of encoded ethical moral questions that prior generations had very clear answers on, and we somehow have become unmoored on.
And maybe we have to think hard about how to get our moorings back.
Yeah.
But how does one do that without forming a restrictive religion?
Good question.
Yeah.
I mean, by definition, you know, morality binds and blinds.
Like, at some point, yeah.
Do you want to live in a world with no structure?
Right. Do you really want to live in a world with no structure?
But I mean, I think we want a certain amount of structure that we agree upon,
that we agree is better for everyone, for all parties involved, right?
Would you say we have that today? I don't think we do.
Yeah, I don't think we do. No, I think we have some people that have sort of agreed to be a part
of a moral structure.
And a lot of those people are atheists, guys like my friend Sam Harris.
Very much an atheist, but also very ethical, will not lie, has a very sound moral structure that's admirable.
And when you talk to him about it, it's very well defined.
And he would make the argument that religion and a belief in some nonsensical idea that there's a guy in the sky that's watching over everything is not benefiting anybody.
And that morals and ethics and kindness and compassion are inherent to the human race because the way we communicate with each other in a positive way, it's enforced by all those things.
By developing good community.
So would you say that most people in the United States that don't consider themselves
members of a formal religion are getting saner over time or less sane over time?
It depends on the pockets that they operate in.
If they have some sort of a method that they use to solidify their purpose and give them a sense of well-being.
And generally those things pay respect to the physical body, whether it's through meditation
or yoga or something.
There's some sort of a thing that they do that allows them, I don't want to say to transcend, but to elevate themselves above the base instincts that a human animal has.
Yeah.
I think there are people like that.
I don't think that's the representative.
But shouldn't that be what we aspire to?
I don't think that's the representative experience.
Right, but is that not the representative experience because people are not guided correctly?
They don't have the proper data or information or they don't have good examples around them?
Yeah.
I think that's a big part of it, right?
What kind of community do you operate in?
If you operate in a community of compassionate, kind, interesting, generous people, generally
speaking those traits would be rewarded and you would try to emulate the people around
you that are successful that exhibit those things and you would see how by being kind
and generous and moral and ethical that person gets good results from other people.
You have other people in the group that reinforce those because they see it.
They generally learn from each other.
Isn't there a lack of leadership in that way that we don't have enough people that have exhibited those things?
There certainly is that.
Right.
But you don't have a lot of faith in that.
That I will agree on.
Well, it's like, okay, they better show up pretty soon.
Well, they're kind of here, but it's hard to get there.
Don't you think?
Well, you know, they're not getting elected to office.
I know that much.
That's true.
That's a giant problem, right?
The popularity contest is the giant problem.
The way we decide who is going to enforce these laws and rules and regulations, we essentially have giant popularity contests.
I was going to say, we've decided we can define our own morality from scratch.
I hope that goes well.
I'm a lot more worried about that than I am about artificial intelligence.
I can tell you that.
I'm a lot more worried about the other people.
That's an imminent threat.
Yeah.
It's a constant threat.
What's the solution?
I don't know.
It's a hard one.
Do you have any theories?
I mean, at the very least, where I always go is to try to figure out the meta level.
Okay, like if this isn't going well, like what's the system?
Like what's the process by which this would happen?
What are the sort of biases that would be involved as we think about this?
What are the motivations that we have?
I don't know that that brings me any closer to an answer to the actual question.
But is this something you've wrestled with?
Yeah, a little bit, but not.
Yeah, I would certainly not propose an answer.
You wouldn't propose an answer, but would you ever sit down and come up with just some sort of hypothetical structure that people could operate on and at least have better results?
I think that that is going to be something that people are going to have to do maybe someday.
I might do that someday.
You might do that someday.
Yeah.
But you clearly have thoughts on it.
And you clearly have thoughts on things like marijuana that maybe perhaps people are using
to escape or to dilute their perspective.
Okay, I don't.
Yeah.
Let me give you something I do have strong thoughts on.
Like, do we value achievement?
What is achievement?
Achievement.
Do we value outperformance?
Okay, but what is performance?
What is achievement?
Do we value people
who do things better
than other people?
Okay, but what are those things?
What about communicate with people?
Do we value people
who communicate with people better?
Do we value people
who are kinder?
Are those achievements?
Differences.
Right. But to be able to get your personality and your body and your life experiences in line
to the point where you have more positive and beneficial relationships with other people,
isn't that an accomplishment?
Yeah, sure. Of course. That would be an accomplishment. But also,
do we value people who build things?
Right. What are those things?
Right.
Do we value people who create jobs?
Right.
Do we value people who run companies?
Depends on what those jobs are and what those companies are, right?
What if the company makes nuclear weapons and the job is to distribute those all around the world and blow shit up?
Well.
That's an accomplishment.
Except what those companies do is they prevent World War III.
So you would say, yes.
Sometimes.
You would say that's an accomplishment.
Sometimes they shoot drones into civilians.
They do.
They do.
They do.
Yeah.
I mean, look, do we value heterodox thinking, right?
Do we value thinking that violates the norm, right?
Do we value thinking that challenges current societal assumptions?
Like, do we value that or do we hate that and we try to shut it down?
You know, look, do we value people if they study harder and they get better grades?
The better grades should get them into a college other people can't get to.
But do we have to universally value all the same things?
Like, isn't it important to develop pockets of people that value different things?
And then we make this sort of value judgment on whether or not those things are beneficial to the greater human race as a whole or at least to their community as a whole.
Do we value population growth?
That's the question, right?
Do we value having kids?
Right?
Yeah.
Is having kids something that contributes to the human story?
Depends on who's having kids.
Have you seen Idiocracy?
Yes.
Mike Judge was on the other day, and the podcast actually came out today.
And Mike Judge is awesome.
And his movie Idiocracy, I had never watched it. I had only watched clips, and I watched it prior to him coming on the show. The fucking beginning scenes where they explain how the human race devolves is fucking amazing. It's so funny.
Yep. That's kind of what we're worried about, right?
Well, I don't know
I mean, right now there's a movement afoot among the elites in our country that basically says anybody having kids is a bad idea, including elites having kids, because, you know, climate.
Well, Elon doesn't think that.
Well, exactly. So Elon has been surfacing this issue, and I think in a very useful way, because this is a real question.
Yes.
There's a long history in elite Western thinking about this question of who has kids.
A hundred years ago, all the smartest people were very into eugenics.
And then later on, that became something called population control.
And then in the 70s, it became something called degrowth.
And now we call it environmentalism.
And we basically say, as a result, more human beings are bad for the planet, not good for the planet.
Is that eugenics, though? Really?
Yes. Well, it's descended from eugenics.
Right.
Eugenics itself was discredited by World War II.
Hitler gave eugenics a bad name.
Yes.
Legitimately so. That was a bad idea.
So it shed the overt kind of genetic engineering component of eugenics.
But what survived was this sort of aggregate question of the level of population.
And so the big kind of elite sort of movement on this in the 50s and 60s was so-called population
control. Now, the programs for population control tended to be oriented at the same countries people
had been worried about with eugenics. In particular, a lot of the same people who were worried about
the eugenics of Africa all of a sudden became worried about the population control of Africa.
This whole modern thing about African philanthropy kind of all flows out of that tradition.
But it all kind of rolls up to this big question, right, which is like, okay, are more people better or worse, right?
And if you're like a straight-up environmentalist, it's pretty likely right now you have a position that more people make the planet worse off.
But until the point where more people develop technology that fixes and corrects all the detrimental effects of large populations.
And then, of course, as an engineer, I would argue we already have that technology and we just refuse to use it.
Like which technology?
Nuclear energy.
Nuclear energy.
I agree with you on that.
That if we had better nuclear energy, we'd have far less particulates in the atmosphere.
I was watching this video.
It was really fascinating where they were talking
about electric cars and they were giving this demonstration about if we can all get onto
these electric vehicles, the emission standards would be so much better, the world would be
better, the environment would be better. And then this person questioned him, gets to,
where's this electricity coming from that's powering this car?
That's right.
And the answer is mostly coal.
Yeah, that's right.
That's what this guy says.
And then you're like, whoa.
Well, if that was nuclear- Yeah, that's right.
If that was nuclear, then that would be eliminated.
You would have nuclear power, which is really the cleanest technology that we have available
for mass distribution of electricity.
Yeah, that's right.
Right?
By far.
Well, so funny history here. So Richard Nixon, who everybody hates,
it turns out-
I don't hate him.
Okay.
All right.
A lot of people hate him.
I'm just kidding.
I think if you were around today,
you probably would.
I hate that motherfucker.
You probably would.
Nixon, it turns out,
was a softie on a couple of topics.
One was the environment.
So Nixon created
the Environmental Protection Agency, right?
So this is a guy with like
as good environmental kind of credentials as anybody in the last, like, you know, 70 years.
He also proposed a project in 1972 called—I'm blanking on the name of the project.
What the fuck was it? I can't remember.
But it was specifically—it was a project to build 1,000 nuclear power plants in the U.S. by the year 2000.
Oh, it was called Project Independence. It was to achieve energy independence.
So he said, let's build 1,000 nuclear plants by 2000,
then we won't have any dependence on foreign oil,
we won't need to use oil, we won't need any of this stuff,
and we'll be able to just power the whole country
on nuclear reactors.
You will notice that that did not happen.
Did not.
That did not happen, right?
And so here we sit today with this kind of hybrid thing where we mostly have a lot of gas.
Now there's some solar and wind.
There's a few nuclear plants.
And then Europe kind of has a similar kind of mixed thing.
And then in the last five years, we've decided, both we and Europe have decided, well, let's just shut down the nuclear plants.
Like, let's just shut down the remaining nuclear plants.
Let's try to get the goal to zero.
Right.
And then, of course, Europe has hit the buzzsaw on this because now shutting down the nuke
plants means they're even more exposed to their need for Russian oil
Right, it happened at the worst time possible.
Right, exactly. And they still won't stop shutting down the plants. They're still doing it, even though they really shouldn't, because Europe is funding Russia to the tune of over a billion euros a day by buying their energy. And they can't turn it off because they don't have their own organic supply. And sure enough, Germany right now is firing up the coal plants again.
Oh, right. And they're heading into summer where they need to power the AC systems.
And then this winter, they have a big problem.
They need to power heat.
And so, yeah, literally, like, we're back to coal.
So somehow we've done, you know, after 50 years of the environmental movement, we've done a complete round trip and we've ended up back at coal.
Is that because we didn't properly plan what was going to be necessary to implement this green strategy
long term and they didn't look at, okay, we are relying on Russian oil, what if Russia
does this?
What if, you know, what are our options?
Do we go right to coal?
Why don't we have nuclear power?
At least have a plan.
We know that they can develop nuclear power plants that are far superior to the ones that we're terrified of.
Right, of course.
Like Fukushima, right?
Ones that don't have these fail-safe programs or have a limited fail-safe.
Like Fukushima had a backup.
The backup went out too, and then they were fucked.
Three Mile Island, Chernobyl.
Meltdowns, that's what scares us.
What scares us is the occasional nuclear disaster.
But are we looking at that incorrectly?
Because there's far more applications than there are disasters, and those disasters could
be used to let us understand what could go wrong and engineer a far better system, and
that far better system would ultimately be better for the environment.
Yeah.
So, total number of deaths attributed to civilian nuclear power. What were they for Three Mile Island?
I don't think there was any.
The famous disaster.
Zero.
Zero, right?
How many were there for Fukushima?
It was a couple.
No, it was either zero or one.
Oh, it was one guy?
There was one court case.
How many people developed superpowers?
Not nearly enough.
See, once again, we need to get to the X-Men before-
Yeah, why is that never happening?
You want to take a digression? There are superpower startups. Should we do nukes or superpowers? Which one first? These are both interesting.
Well, let's just look at this, what Jamie just pulled up. Nobody died as a direct result of the Fukushima nuclear disaster.
However, in 2018, one worker in charge of measuring radiation at the plant died of lung cancer caused by radiation exposure. And just as trivia, that's actually disputed.
There's actually litigation.
That's been a litigation case in Japan about whether or not that was actually – whether he got lung cancer.
Because some people just get lung cancer.
Yeah, and people who don't even smoke get lung cancer.
And how can you tell where the lung cancer comes from?
And so that's why I said it's either zero or one.
Now, the disaster-related deaths, actually, those were attributed deaths to the evacuation,
and those were mostly old people under the stress of evacuation.
And then again, you get into the question of, like, they were old people.
If they were 85, you know, were they going to die anyway?
So back to your question.
Whatever, whatever.
So look, nuclear power by far is the safest form of energy we've ever developed.
Like, overwhelmingly, the total number of civilian deaths from nuclear power has been very close to zero.
There's been like a handful of construction deaths, like people, concrete falling on people.
Other than that, like it's basically as safe as can be.
We know how bad coal is.
By the way, there's something even worse than coal, which is so-called biomass, which is basically people burning wood or plants in a stove in the house.
The impact –
Yeah, fireplaces are terrible.
Fireplaces in the house are terrible.
There's roughly 5 million deaths a year attributed in the developing world to people burning biomass in the house.
So like that's the actual catastrophe that's playing out.
And that's because of gas leaking inside their home because of the smoke inhalation.
Smoke in the house.
And so like that's the – if you're a pure utilitarian, you just want to focus on minimizing human death, you go after those 5 million.
Now, nobody ever talks about that.
Of course.
Because nobody actually cares about that kind of thing.
But like that is what you would go after.
Nuclear is like almost completely safe.
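The relative-safety point here can be put in rough numbers. A minimal back-of-the-envelope sketch, assuming the approximate per-terawatt-hour mortality estimates published by Our World in Data (these figures are an outside assumption, not numbers from this conversation):

```python
# Approximate deaths per terawatt-hour of electricity produced, by source.
# Figures are rough published estimates (Our World in Data); they bundle
# accidents and air-pollution deaths, so treat them as order-of-magnitude.
deaths_per_twh = {
    "coal": 24.6,
    "oil": 18.4,
    "biomass": 4.6,
    "natural gas": 2.8,
    "hydropower": 1.3,
    "wind": 0.04,
    "nuclear": 0.03,
    "solar": 0.02,
}

def relative_risk(source: str, baseline: str = "nuclear") -> float:
    """How many times deadlier `source` is than `baseline` per unit of energy."""
    return deaths_per_twh[source] / deaths_per_twh[baseline]

# Print sources from deadliest to safest, relative to nuclear.
for source in sorted(deaths_per_twh, key=deaths_per_twh.get, reverse=True):
    print(f"{source:12s} {deaths_per_twh[source]:6.2f} deaths/TWh "
          f"({relative_risk(source):.0f}x nuclear)")
```

On these estimates coal comes out roughly 800 times deadlier than nuclear per unit of energy, which is the gap being described here.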
And then there is a way to develop – if you want to develop a completely safe nuclear plant that was like safer than all these others, what you would actually do, there's a new design for plants where you would actually have the entire thing be entombed from the start. So you would build a completely
self-contained plant and you would encase the entire thing, right, in concrete. And then the
plant would run completely lights out inside the box. And then it would run for 10 years or 15
years or whatever until the fuel ran out. And then it would just stop working. And then you would
just seal off the remaining part with concrete. And then you would just leave it put and nobody
would ever open it.
And it would be totally safe, like totally contained nuclear waste.
Is there a lot?
And so you could build, especially with – and to your point of modern engineering, like there hasn't been a new nuclear power plant design in the US in 40 years.
And I think maybe – I don't know the last time the Europeans did one from scratch.
But if you use modern technology, you could upgrade almost everything about it.
And so we can do this at any time. This is a very straightforward thing to do.
There has not been a new nuclear plant authorized to be built in the United States in 40 years.
Holy shit.
We have something called the Nuclear Regulatory Commission. Their job is to prevent new nuclear
plants from being built.
Jesus Christ. And this is because of this small number of disasters that have caused no loss of life.
Either people have a dispute about the facts or there's a religious component
here where we have the same people who are very worried about climate change are also
for some reason very worried about nuclear for reasons... As an engineer, I don't understand
how they kind of do... It's something about nuclear, so-called ick factor.
Well, it's energy, right?
I mean, it's the idea of the fact that you can't get rid of it.
Once you do have a disaster like Fukushima, that area is fucked for a long time.
Yeah, but again, this is the thing is you can do like total amount of nuclear waste in the world is like very small.
There's a way to build these things where they're like completely contained.
Like that you could work around.
Like that's not a big issue relative to the scale of the other issues that we're talking about.
Like compared to carbon emissions like that's just not a big issue. Right, but what I was going to get to is that that energy also,
there are strategies in place to take nuclear waste
and convert it into batteries and convert it into energy.
You could do that.
So there's a lack of education?
You could just bury it.
Well, look, I think primarily these topics are religious.
Oh, okay.
This is always my litmus test, for anybody who ever asks. There's a whole wave of climate tech investing that's happening. And remember, there was a whole green climate tech wave of investing in tech companies in the two thousands that basically didn't work. There's another wave of that now, because a lot of people are worried about the environment. And to me, the litmus test always is: are we funding new nuclear power plants? Because we have the answer. We don't need to invent the new thing. We actually have the answer for basically unlimited clean energy. We just don't want it. I don't know why, religious reasons. The Europeans don't want it. I mean, the Europeans of all people should really want it. They should be doing this right now.
Is it that we don't want it or that we don't understand it? So if it was laid out to people the way you're laying it out to me right now. And if there was a grand press conference that was held worldwide
where people understood the benefits of nuclear power far outweigh the dangers
and that the dangers can be mitigated with modern strategies,
with modern engineering, and that the power plants that we're worried about,
the ones that failed, were very old.
And it's essentially like worried about the kind of pollution
that came from a 1950s car as opposed to a Tesla.
We're looking at something that's very, very different.
Also, Stuart Brand, who's the founder of the Whole Earth Catalog and one of the leading environmentalists of the 1960s, has been on this message for 50 years.
He's written books.
He's given talks.
He's done the whole thing.
There's a debate in the environmental community about this.
He's in the small minority of environmentalists who are on this page.
What's the opposition?
They've completely rejected him.
The opposition fundamentally, the environmental movement – I mean an interpretation of it would be it's primarily a religious movement.
It's a movement about defining good people and bad people, right?
The good people are environmentalists.
The bad people are capitalists and people building new technologies and people building businesses and companies and factories and having babies, right?
So it's a way to demarcate friend and enemy, good person, bad person.
And look, these are very large enterprises, lots of scientists, activists,
lots of people making money.
It's like a whole thing.
Right.
That is the problem.
And so, yeah.
So once things get into this zone, the facts and logic don't seem to necessarily carry the
day.
Look, I was going to say, it's reassuring to me that we have the answer.
It's disconcerting to me that we won't use it.
Maybe the Russia thing is an opening to do... Maybe the Europeans are going to figure
this out because they're now actually staring down the barrel of a gun, which is dependence
on Russia.
Well, we have to change the way the public views nuclear because they view nuclear as disaster.
They view nuclear as bombs.
Yeah, ick factor.
They just have to hear you.
Yeah, I don't know.
Or someone like you.
My experience, the logical arguments don't work in these circumstances, right?
It's got to be some larger message.
Well, I don't think there's a lot of people hearing this message.
This message, first of all, the message, the pro-nuclear message, at least nationwide,
as an argument amongst intelligent people, is very recent.
It's been within the last couple of decades where I've heard people give convincing arguments
that nuclear power is the best way to move forward.
Oftentimes, for environmentally inclined people and people concerned about our future who aren't educated about nuclear power, that word automatically gets associated with
right-wing, hardcore, anti-environmental people who don't give a fuck about human beings.
They just want to make profits, and they want to develop energy and ruin the environment,
but do that to power cities.
Right.
So I know how we build 1,000 nuclear plants in the U.S.
and make everybody happy.
Want to hear my proposal?
Yes.
We have the Koch brothers do it.
Oh.
Okay.
Which is Charles Koch.
Yes.
He runs Koch Industries.
Yes.
And so if you are on the right, you're like, this is great.
He's a hero on the right, and he runs this huge industrial company that's a fantastic asset to America,
and this is a big opportunity for him and the company, and it's great, and we'll build the nukes, and it's going to be great.
And we'll export them.
It'll be awesome.
If you're on the left, you're cursing him.
You're putting him to work for you to fix the climate, right?
You're doing a complete turnaround, and you're basically saying, look, we're going to enlist you to fix.
We view you as a right winger.
This is a left-wing cause.
We're going to use you to fix the left-wing cause.
So I think we should give him the order.
But why would that be good if the people on the left freak out?
Because they're immediately going to reject it.
Well, of course they're going to reject it.
I'm saying in an alternate hypothetical world, they would find it entertaining.
Let me start by saying this is what we should actually do.
We should actually give him the order and have him do it.
And I'm just saying, like, if the left could view it as, oh, we get to take advantage of this guy who we don't like to solve a problem that we take very seriously, that we think he doesn't take seriously, which is climate.
Well, I don't know about your logic there, because they would think that he's profiting off of that.
And the last thing they would want is the Koch brothers to profit.
As I say, this is not actually happening.
Right.
But what about someone else who's not so polarized?
Yeah, no, look, pick any, GE could do it, there's any number of companies that
could do it.
Do you think it would just take one success story, like implementation of a new,
much more safe, much more modern version of nuclear power?
That would certainly help.
We need something, right?
Yeah, yeah.
Because-
I mean, the first thing is the government,
and again, the government would have to be willing
to authorize one.
I've had conversations with people that don't,
you know, they don't have the amount of access
to human beings and different ideas,
and they immediately clam up when you say nuclear power.
Well, this has been a big whammy.
Look, there's something very natural here.
Look, nuclear... Again, we live in a much diluted version of what we used to live... In
the 50s and 60s, this was a hot topic because there was a huge rush of enthusiasm for nuclear
everything.
Yeah.
And then there was... Yeah, there were these accidents.
And then look, I remember when I was a kid, the fear of nuclear war was very, very real.
Oh yeah, well, we're basically close to the same age.
Yeah, you have to remember in the 80s, like this is a-
It was real.
This is a, you know, people talk about politics are bad now.
It's like, well, I remember worrying that we were all
gonna die in the nuclear holocaust.
Yeah.
You remember, probably, the TV series The Day After.
Oh yeah.
That freaked everybody out.
Yeah.
Like the whole country went into a massive depressive funk
after that show came out.
And so, yeah, there's been a big kind of psychic whammy that's been put on
people about this. And then, like I said, there's a lot of environmental movement that
I think doesn't actually want to fix any of this. And I think their opposition to nuclear
is sort of proof of that. And they have a very anti-nuclear set of messages.
Well, what does the environmental movement propose?
So they propose degrowth.
They propose a much lower population level.
They propose much lower industrial activity.
They propose a much lower human standard of living.
They propose a return to an earlier mode of living that our ancestors thought was something
that they should improve on and they want to go back to that.
And it's a religious impulse of its own.
Nature worship is a fundamental religious impulse.
Do you think it's also there's a financial aspect to that as well?
Because it's an industry.
Yeah.
Anything.
Look, any of these things become – yeah, these become self-perpetuating industries.
It's always a problem with any activist group, which is do they actually want to solve the
problem because that's very – actually solving problems is bad for fundraising.
It is kind of ironic in a sense.
I'd even say like most of this is not – I would not even say most of this is bad intent.
I think most of it is just people have an existing way that they think about these things.
It's primarily emotional.
It's not primarily logical.
Do you know someone that I would be able to talk to that is like the best proponent of
nuclear energy that can lay it out?
Let me take off.
So Stuart Brand would be the sort of godfather
of the environmental movement,
who I'm sure would talk about it.
And then there's a young founder who I know,
an MIT engineer, who I'll give you his information.
And he's- I'm gonna write this down.
Yeah. So Stuart Brand
is one of them.
Stuart Brand is, yeah.
So Stuart Brand is on sort of the one side,
environmentalism and then older generation,
a lot of experience with this issue.
And Stuart Brand is the guy
who is an environmental activist,
or at least advocate, and is pro-nuclear.
He was one of the original environmentalists that we would sort of consider.
He ran this thing called the Whole Earth Catalog
that sort of brought a lot of modern environmentalism into being in the 60s.
Is there any reasonable person that opposes that, who has convincing arguments?
I mean, they're a dime a dozen.
Like, that's the rest of the movement, basically.
But reasonable.
I don't know.
Right.
Do they have, like, some sort of an answer?
It would defer to...
My experience is they jump to a different topic.
You get to what the actual underlying goal is, which, again, is to shrink human population.
And then I'll give you...
There's an MIT guy I'll tell you about who's an expert on nuclear who has this new design.
Okay.
Who's that guy?
It's Brett, B-R-E-T. Kugelmas, K-U-G-E-L.
B-R-E-T, K-U-G-E-L?
Yes, M-A-S.
M-A-S, Kugelmas.
Yeah.
I like that name.
Yeah, and he has a podcast.
He has a podcast.
He's from MIT?
Yeah, he has a podcast called Titans of Nuclear.
And he has gone around the country over the last five years, and he's interviewed basically every living nuclear expert.
Well, he sounds like a good guy to have on.
He's a really, really sharp guy.
He sounds like the perfect guy, right?
Because he already has a podcast.
So he started this podcast.
He's in the nuclear industry.
He's working on this kind of thing.
And so he said, well, I want to really come up to speed.
He's an MIT engineer, but he didn't take nuclear.
He's not a nuclear expert.
And so he said, I want to spin up on all these nuclear topics.
And so he said, let me start a podcast and I'll go interview all the nuclear experts,
all the people who actually know how to build nuclear plants and how this stuff works.
And he's like, boy, I don't know if they'll talk to me because I'm just a kid and I don't know whether they'll.
And he said uniformly they've just been totally shocked that anybody wants to talk to them at all.
They're just like, oh, my God.
We've never been invited on a podcast before.
Nobody ever wants to hear from us.
And so he said he's at like 100% hit rate of all the real experts.
Oh, interesting.
So if you listen to his podcast, it takes you through all this stuff in detail.
Okay, Titans of Nuclear.
I'm going to get on that.
So it seems like the problem is there's a bottleneck between information
and this idea that people have of what nuclear power is.
That needs to be bridged.
We need to figure out how to get into people's heads
that what we're talking about when you talk about nuclear power is a very small number of disasters
across a large number of nuclear reactors, and you're dealing with very old technology as opposed to what is possible.
And virtually no deaths.
That's wild.
And an overwhelmingly better tradeoff versus any other form of energy.
Yeah.
Right. Yeah, I mean, look, that's the argument. I think it's quite straightforward.
My experience with human beings is that they only react to crises.
And so that's why I say, like, I don't think logical arguments sell.
So I think it's probably some sort of crisis.
And, you know, the Russia crisis is one opening.
And, you know, it would be great to see leadership from somebody in power to be able to take
advantage of that.
Maybe that'll happen in Europe. And then, yeah, the other would be if people actually get worried enough about
global warming. And people say they're worried about global warming, but not enough to do this.
And so, I don't know, maybe we just need higher temperatures and then people will take this
seriously. So it may just need to get bad.
Do you have any concerns about this movement towards electric cars and electric
vehicles that we are going to run out of batteries?
We're going to run out of raw material to make batteries.
And that could be responsible for a lot of strip mining, a lot of very environmentally
damaging practices that we use right now to
acquire.
And also that this could be done by other countries, of course, that are not nearly
as environmentally conscious or concerned.
Right.
So technically, fun fact, we never actually run out of any natural resource.
We've never run out of a natural resource in human history, right?
Because what happens is the price rises.
The price rises way in advance of running out of the resource. And then basically whatever that is, using that resource
becomes non-economical. And then either we have to find an alternative way to do that thing or
at some point we just stop doing it. And so I don't think the risk is running out of lithium.
I think the risk is not being able to get enough lithium to be able to do it at prices that people
can pay for the cars. And then there's other issues, which is where does lithium come from? I'll just give you an example. People talk about a lot of companies that are
doing a lot of posturing right now on their morality. One of the things that all electronic
devices have in common, your phone, your Tesla, your iPhone, they all have in common, they all
contain not just lithium, they also contain cobalt. If you look into where cobalt is mined,
it's not a pretty picture.
It's child slaves in the Congo.
And we kind of all gloss it over
because we need the cobalt.
And so maybe there should be more,
maybe we should be much more actively investigating,
for example, mining in the US.
There's a big anti-mining,
anti-national resource development culture in the US
and the political system right now.
As a consequence, we kind of outsource all these conundrums to other countries.
Maybe we should be doing it here.
Well, that was my question about it.
It is fascinating to me that there's not a single U.S. developed and implemented cell
phone, that we don't have a cell phone that's put together by people that get paid a fair
wage with health insurance and benefits and everything we make.
I mean, when we buy an iPhone, you're buying it from Foxconn, right?
Foxconn's constructing it in these Apple-contracted factories where they have nets around the
buildings to keep people from jumping off the roof.
And people are working inhumane hours for a pittance.
I mean, it's like a tiny amount of money in comparison to what we
get paid here in America. Why is that? Like, is that because we want Apple to make the highest
amount of profit and we don't give a shit about human life? We only pay it lip service? Like,
why is it, why haven't they done this in America? Well, here's where I would, I think I would
actually agree. Here's an environmentalist argument I think I might agree with, which
basically is it's very easy for so-called first world or developed countries to sort of outsource problems to developing countries.
Right.
And so just as an example, take carbon emissions for a second.
We'll come back to iPhones.
Carbon emissions in the U.S. are actually declining.
Like we actually – there's all this like animation over the Paris Accords or whatever.
But like if you look, carbon emissions in the U.S. have been falling now for like quite a while.
Why is that?
Well, there's a bunch of theories as to why that is.
Some people point to regulations.
Some people point to technological advances. For example, modern internal combustion cars emit a lot less. They have catalytic converters. Now they emit a lot less CO2.
But maybe one of the big reasons is we've outsourced heavy industry to other countries.
And so all of the factories with the smokestacks and all the mining operations and all the things
that generate, and by the way, a lot of mass agriculture
that generates emissions and so forth,
like we've, in a globalized world,
we've outsourced that, right?
And if you look at emissions in China,
they've gone through the roof, right?
And so maybe what we've done is we've just taken
the dirty economic activity and we moved it over there
and then we've kind of gone.
You know?
Look how good we're doing.
Yeah, we're great, we're great.
Now, they're awful, they have all kinds of problems,
but we're great.
Meanwhile, we are the consumer that fuels their awful problems.
Yeah, we created... It's a little bit like the debate about sort of the drug trade in countries
like Mexico and Colombia, right? Which is how much of that is induced by American demand for
things like cocaine. So yeah, so it's this... This is where the morality questions get trickier,
I think, than they look, which is like, what have we actually done? Now, I'll defend Foxconn.
There's an argument on the other side of this that actually, no, it's good that we've done this from an overall human welfare standpoint because if you don't like the Foxconn jobs, you would really hate the jobs that they would have been doing instead.
The only thing worse than working in a sweatshop is scavenging in a dump or doing subsistence farming or being a prostitute, right?
And so maybe even what we would consider to be low-end
and unacceptably difficult and dangerous manufacturing jobs
may still be better than the jobs that existed prior to that.
And so, again, there's a different morality argument you can have there.
Again, it's a little bit trickier than it looks at first blush.
I go through this because I find we're in an era where a lot of people,
including a lot of people in my business,
are making these very clear-cut moral judgments on what's good and what's bad.
Right.
And I find when I peel these things back, it's like, well, it's not quite that simple.
Interesting.
With the implementation of modern nuclear power, is it possible to manufacture cell phones in the United States?
Well, anything that drops the cost of energy all of a sudden is really good for domestic manufacturing, for sure.
And do so without the environmental impact. Yeah. Well, number one, so dropping the price
of energy. Energy is a huge part of any manufacturing process, huge cost thing. And
so if you had basically unlimited free energy from nukes, you all of a sudden would have a lot more
options for manufacturing in the U.S. And then the other is, look, we have robotics, the AI
conversation. Like, you know, if you built new manufacturing plants
from scratch in the U.S.,
they would be a lot more automated.
And so you'd have, you know,
assembly lines of robots doing things.
And then you wouldn't have, you know,
you wouldn't have the jobs that people don't want to have.
Yeah.
And so, yeah, you know, you could do those things.
There's actually a big point.
This isn't happening with phones.
This is happening with chips.
So this is one of the actual positive things happening right now, which is there's a big push underway from both the U.S. tech industry and actually the government, to give them the credit, to bring chip manufacturing back to the U.S.
And Intel is the company leading the charge on this in the U.S.
And there's a buildout of a whole bunch of new, you know, these huge $50 billion chip manufacturing plants that will happen in the U.S.
Was a lot of that motivated by the supply chain crisis?
Yeah.
One of the big issues was cars couldn't get chips.
That's right.
Yeah.
Well, when the Chinese shut down for COVID, all of a sudden the cars can't get chips.
And then, look, also just greater geopolitical conflict.
You know, like people in D.C. don't agree on much, but one of them is we don't really
want to be as dependent on China as we are today.
Right.
And so we want to bring – and then, you know, there's Taiwan exposure.
A lot of chips are actually made in Taiwan, and there's a lot of stress and tension around Taiwan. So if we get chips manufactured
back in the US, we not only solve these practical issues, we might also have more strategic leverage.
We might not be as dependent on China. So the good news is that's happening. And let me just say,
if that happens successfully, maybe that sets a model to your point. Maybe that's a great example
to then start doing that in all these other sectors.
What else could be done to improve upon whatever problems have been uncovered during this COVID crisis and during the supply chain shutdown?
It seems like a lot of our problems is that we need to bring stuff into this country.
We're not making enough to be self-sustainable.
So that's one.
I would give you another big one, though.
COVID has surfaced a problem that we always had and we now have a new answer to, which is the problem of
basically for thousands of years, young people have had to move into a small number of major
cities to have access to the best opportunities. And Silicon Valley is a great example of this.
If you've been a young person from anywhere in the world and you want to work in the tech industry
and you want to be on the leading edge, you had to figure out a way to get to
California, get to Silicon Valley. And if you couldn't, it was hard for you to be
part of it. And then, you know, the cities that have this kind of, they call these superstar
cities, the cities that have these sort of superstar economics, everybody wants to live there, they end
up with these politics where they don't want you to ever build new housing.
Yeah.
They never build new roads. The quality of life goes straight downhill.
And everything becomes super expensive.
And they don't fix it.
And they don't fix it because they don't have to fix it because everybody wants to move there and everything is great.
And taxes are through the roof and everything is fantastic.
And so one of the huge positive changes happening right now is the fact that remote work worked
as well as it did when the COVID lockdowns kicked in and all these companies sent all
their employees home and everything just kept working, which is kind of a miracle, has caused a lot of
companies, including a lot of our startups, to think about how should companies actually be
all based in a place like Northern California or should they actually be spread out all over the
country or all over the world? And if you think about the gains from that, one is all of the
economic benefits of being like Silicon Valley in tech or Hollywood in entertainment.
Like maybe those gains should be able to be spread out across more of the country and more of the country should be able to participate.
Right.
And then, by the way, the people involved, like maybe they shouldn't have to move.
Maybe they should be able to live where they grew up if they want to continue to be part of their community.
Or maybe they should be able to live where their extended family is.
Yeah.
Or maybe they should want to live someplace with a lot of natural beauty or someplace where they want to contribute philanthropically to the local community.
Whatever other decision they have for why they might want to live someplace, they can now live in a different place and they can have still access to the best jobs.
And it seems like with these technologies like Zoom and FaceTime and all these different things that people are using to try to simulate being there. The actual physical need to be there, if you don't have a job where you actually have to pick things up and move them around,
it doesn't really seem like it's necessary.
Yeah.
So some exist.
Big companies are having some trouble with this right now because they're so used to running with everybody in the same place.
And so there's a lot of CEOs grappling with, like, how do we have collaboration happen, creativity happen?
Right.
If I'm creating a movie or something, how do I actually do it if people aren't in the same room?
But a lot of the new startups,
they're getting built from scratch to be remote
and they just have this new way of operating
and it might be a better way of operating.
But there is some benefit for people being in the room
and spitballing together and coming up with ideas
and developing community.
Yeah.
There's some benefit to that.
I think it gets lost with remote work.
But again, this is coming from a guy who doesn't have a job.
Yeah. And by the way, has a very nice office facility. So our firm runs, we now run,
we were a single office firm. Everybody was in our firm basically all the time. We now run
primarily remote virtual mode of operation, but we have offsites frequently, right? So we're
basically, what we're doing is we're basically taking money we would have spent on real estate and we're
spending it instead on travel and then on offsites, right? By offsites. Like basically fly everybody,
yeah, we'll fly everybody into a hotel or resort, you know, for three days, maybe some of them with
families, maybe some of them just with people.
And you have a vacation together.
Exactly, right.
Nice.
And like real bonding, right?
Right. Have a good time together, have lots of free time to get to know each other, go on hikes, have long dinners.
Right.
Right.
Parties, fire on the beach, like whatever it is, have people really be able to spend time together.
How much of a benefit do you think there is in that?
A lot.
Yeah?
A lot.
Well, and then what you do is you kind of charge people up with the social bonding, right?
And then they can then go home and they can be remote for six weeks or eight weeks and they still feel connected and they're talking to everybody online.
And then you bring them right when they start to fray, right when it starts to feel like
they're getting isolated again, you bring them all back together again.
Interesting.
Yeah.
And the benefit of that bonding is like as a person who runs a company, like how do you
think of that?
Do you think, oh, it makes people feel good about working there and so they are more enthusiastic
about work.
And how do you weigh that out?
It's to form and reinforce the cult.
Right?
So it's the company religion, right?
Yeah.
Which we don't call it that, but that's what it is.
And so it's to get that sense of community.
It's that sense of group cohesion, that we're all in this together.
I'm not just an individual.
I'm not a mercenary.
I'm a member of a group. We have a mission. The mission is bigger than each of us individually.
And do you have like little struggle sessions where you let people air their gripes?
Some companies have those. We're not so hot on those. We have other ways to deal with that kind of thing. More what we're trying to do is brainstorming. So like creativity, like there's definitely a role for in-person.
And then it's for all of the like, you know, it's like employee onboarding.
It's for training.
It's for planning, right?
It's for all the things where you really want people like thinking hard in a group, do all those things.
But a lot of it is just the bonding. Like Ben and I run our firm.
Like we're constantly trying to take agenda items off the sheet every time because we're trying to have people just have more time to get to know each other.
How do you weed out young people that have been indoctrinated into a certain ideology and they think that these struggle sessions should be mandatory and they think that there's a certain language that they need to use and there's a way they need to communicate and there's certain expectations they have of the company to the point where they start putting demands upon their own employers.
Yeah.
So the big thing you do, I think, and this is what we try to do, is you basically declare what your values are, right?
So you want to be – like your company, you want to be very upfront and you want to basically say here's what we stand for.
And so we do this, you do this in a couple different ways.
For example, one of our core values is that we think that technology is positive for the world.
And if you're the kind of person who wants to be a technology critic,
that's just inconsistent with our values.
Technology critics have many other places that they can work.
How so in terms of technology critic?
What do you mean by that?
Just like the kinds of people who want to go online or want to write articles or whatever
about how evil all the technologists
are and how evil Elon is and how evil capitalism is and like all this stuff. You know, there's lots
of other places. There's lots of, you know, there's lots of other things. Counterproductive.
Counterproductive. It's just, it's inconsistent with our values. Like, we're optimistic about
the impact of technology on the future. Another is, you know, we have an understanding of diversity
that says that people actually are going to feel included. Like, they're actually going to feel
like they're part of a mission in a group that's larger than themselves.
Everyone regardless.
Yeah, regardless.
And that they're not going to feel like they're different or better or worse and that they have to prove themselves.
It's a meritocracy.
Yeah, it's a meritocracy.
And that they don't have to take, you know, they don't have to, we're not going to have politics in the workplace in the sense of they're not going to have to take,
they're not going to be under any pressure to either express their political views or deny that they have the political views or pretend to agree with political views they don't agree with. That's just not part of what we do. We're mission-driven
against our mission, not all of the other missions. You can pursue all the other missions in your free
time. Do you think the pursuing of a lot of those other missions is a distraction?
Yeah, enormously. I mean, it can really run away. And that is a big problem in a lot of these
companies now. But you can define your company. You can define your culture and basically say, that's not what we're about. We're about our
mission. And then you basically broadcast that right up front. And you basically say, look,
you are not going to be happy working here. And by the way, you're not going to last very long
working here, right? If you have a view contrary to that. So you've kind of recognized the problem
in advance and established sort of an ethic for the company that weeds that out early?
So everything like – there's this concept of economics called adverse selection.
So there's sort of adverse selection and then there's the other side, positive selection.
So adverse selection is when you attract the worst, right?
And positive selection is when you attract the best, right?
And every formation of any group, it's always positive selection or adverse selection.
I would even say it's a little bit of like if you put on a show, it's like depending on how you market the show and how you
price it, where you locate it, you're going to attract in a certain kind of crowd, you're going
to dissuade another kind of crowd. Like there's always some process of sort of attraction and
selection. You know, the enemy is always adverse selection, the enemy is sort of having a set of
preconditions that cause the wrong people to opt into something. You know, what you're always
shooting for
is positive selection.
You're trying to actually attract the right people.
You know, you're trying to basically
put out the messages in such a way
that by the time they show up,
they've self-selected into what you're trying to do.
Do you have...
Most of this is that.
Do you have other CEOs that contact you
and go, hey, we've got a fucking problem here.
How did you guys do this?
Yeah.
Yeah, yeah.
So I'll just give an example.
A public example is Coinbase.
It's a company that's now been all the way through this and it's a company we've been
involved with for a long time.
And that's a very public case of a CEO who basically declared that he had hit a point
where he wasn't willing to tolerate politics in the workplace.
Yes.
He did this.
He was the first of these that kind of did this.
We're going to be mission driven.
Our mission is open.
It's a cryptocurrency company.
He said our mission is an open global financial system that everybody can participate in.
And he said, look, there are many other good missions in the world.
You can pursue those in your own time or go to other companies to do that.
So was it a system where there were activists that infiltrated the company?
Yeah.
Well, you'd say in some cases it's full-on activists.
In a lot of cases, it's just like a level of activation on non – let's just say non-core issues.
It's a level of sort of internal activation on issues. You have a certain number of people who get fired up.
You have other people who feel like they have to go along. You have other people who feel like
they now can't express themselves. You have other people who feel like they have to lie to fit in.
And the conclusion he reached was it was destructive to trust. It was causing people
in the company to not trust each other, not like each other, not be able to work on the
core problems that the company exists to do. And so anyway, he did
like a best case scenario on this. He just said, look, he actually did it in two parts. He said,
first of all, this is not how we're going to operate going forward. And then he said,
I realize that there are people in my company that I did not set this rule for before who will feel
like I'm changing. I'm pulling the rug out from under them and saying they can't do things they
thought they could do. And I'm going to give them a very generous severance package
and help them find their next job. Kick rocks.
Fuck out of here.
But with like, he did a very, he did like
a six-month severance package, something
on that order, to make it really easy for people
to be able to get, you know, healthcare and like deal with
all those issues. And almost incentivize them.
Yeah, basically say, look, you're not going to like it here.
We're going to be telling you to stop doing all these things.
You're not going to get promoted.
And so you're definitely going to be better off somewhere else.
Do you think going forward that's going to be what more companies utilize or that they implement a strategy like that?
Bottom line, it's got to be detrimental to have people so energized about so-called activism that it's taking away the energy that they would have towards getting whatever the mission
of the company is done.
Yeah.
So the way we look at it is basically, look, it is so hard to make any business work, period.
Right?
Like to get a group of, especially from scratch, a startup, to get a group of people together
from scratch to build something new against what is basically a wall of, you know, starting out with indifference and skepticism and then ultimately pitched battles with big existing companies and other startups.
It's so hard to get one of these things to work.
It's so hard to get everybody to just even agree to what to do to do that.
What is the mission of this company?
How are we going to go do this?
To do that, you need to have all hands on deck.
You need to have everybody with a common view. A lot of what you do as a manager in those
companies is try to get everybody to a common view of mission. You're trying to build a cult.
You're trying to build a sense of camaraderie, a sense of cohesion, just like you would be trying
to do in a military unit or in anything else where you need people to be able to execute against a
common goal. And so, yeah, anything that chews away at that, anything that undermines trust and
causes people to feel like they're under pressure, under various forms of unhappiness, other missions that the company has somehow taken on along the way that aren't related to the business.
Yeah, that just all kind of chews away at the ability for the company.
And then the twist is that in our society, the companies that are the most politicized are also generally like have the strongest monopolies.
Like Google. For example. Right? that are the most politicized are also generally like have the strongest monopolies, right?
And so- Like Google.
For example, right?
And so this is what we always tell people.
It's like, look, the problem with using a company like Google or any other like large
established company like that, because people look at that and they say, well, whatever
Google does is what we should do.
It's like, well, start with a search monopoly, right?
Start life number one with a search monopoly, the best business model of all time, $100
billion in free cash flow.
Then you can have whatever culture you want. Right. Right. But all that
stuff didn't cause the search monopoly. The cause of the search monopoly was like building a great
product and taking it to market. And that's what we need to do. And so this is where more CEOs are
getting to. Now, having said that, the CEOs who are willing to do this are still few and far
between. Leadership is rare in our time. And I would give the CEOs who are willing to take this on a lot of credit.
And I would say a lot of them aren't there yet.
A lot of them must be terrified, too, because these ideologies are so prevalent.
And these religions, as you would say, are so strong.
So, to give you an example, Brian, CEO of Coinbase, got deluged with emails from other CEOs in the weeks that followed.
And they were basically all like, wow, that's great.
I wish I could do that at my company.
A wish.
Right.
Do you think that would be more prevalent in the future?
Yeah.
Yeah.
So they're going to realize that.
They're going to have to.
Well, things like Netflix.
Netflix realized that when their stock dropped radically.
They realized that a little bit.
A little bit?
A little bit.
Yeah.
I have a friend who's an executive at Netflix.
And she was telling me the struggles that they go through.
And it's pretty fascinating.
Yeah.
It's like they essentially hired activists.
She pulled this person into her office to have a discussion with them and the person said, how do I know you're not the enemy?
Yeah, right.
That's right.
And she's like, I'm your fucking boss.
Right.
Like, what are you talking about?
That person wound up getting fired ultimately, eventually.
Yeah.
But I mean, what the fuck?
Yeah.
Imagine that kind of an attitude 20 years ago.
You could never imagine it.
It would not take place.
There's been a collapse in, I would say, trust and authority in managers.
There's been a collapse in leadership exhibited by managers.
It has not gone well.
It's been a bad experiment.
And there's a lot of fear.
And do you think this is accentuated by social media?
Oh yeah, for sure. Well, it's all social media, but it's also the mainstream media, the classic media.
Like, look, so what's the fear? A big part of the fear is that you're going to have the next employee who hates you go public.
Right.
But is that also-
It's cover-of-Time-Magazine stuff, right? What goes on the cover of Time Magazine these days is apparently driven a lot by social media, but still, all of a sudden 60 Minutes is doing a hit piece on you.
Right. But is the problem that these companies don't have the ability to defend themselves and express themselves at broad scale?
Well, they could choose to.
Right. But how would they do that?
They need to choose to. They need to have a crisis. They need to decide that the status quo is so bad that they're going to deal with the flack involved in getting to the other side of the bridge.
But they would also have to have a platform that's really large where it can be distributed so that it could mitigate any sort of incorrect or biased hit piece on them.
And look, they have to be willing to tell their story.
And they have to be willing to come out in public and say, look, here's what we believe.
Here's why we do things.
And that's what the CEO of Coinbase has done.
Yeah, he's done that.
Yes.
He's a very brave guy.
What's his name again?
Brian Armstrong.
Fuck yeah, Brian Armstrong.
Yeah, he's a great guy.
All right.
We're very proud.
So that brings me to crypto.
Yes.
Do you have a general feeling about crypto?
I'm sure you have very strong opinions.
Yeah, very strong opinions, yeah.
So let me start by saying we don't do price forecasting.
So we don't do price forecasting when it's on the way up.
We don't do price forecasting when it's on the way down.
I have no idea what the prices are going to be.
We never recommend people buy anything.
We're not trying to get people to buy anything.
I'm not marketing anything.
So nothing I say should be attributed in any way to, like, oh, Mark said buy this or don't buy that. None of that. And in fact,
we basically, the way our business works is we basically ignore all the short-term stuff.
We sort of invest over a 10-year horizon. It's kind of our kind of base thing that we do.
And so we're, yeah, we have a big program in this and we're charging ahead with the program.
What are your feelings about the prevalence of, I mean, even these sort of novel coins or novelty coins
and the idea that you could sort of establish a currency for your business?
That's like, you know, there was talk about Meta doing some sort of a Meta coin, you know,
and that a company could do that.
Google could do a Google coin.
And they could essentially not just be an enormous company with a wide influence, but
also literally have their own economy.
What do you think about that?
Well, so this has happened before.
There's a tradition of this.
And so the frequent flyer miles are like a great example of this, right?
In fact, to the point where you have credit cards that give you frequent flyer miles as sort of cash back.
So companies have that.
You may remember from the 70s, more common in the old days, but there used to be these things called A&P stamps.
There used to be these saving stamps you'd get, and you'd go to the supermarket, and you'd buy a certain amount, and they'd give you these stamps.
You could spend the stamps on different things or send them in.
So there was sort of private, so-called script kind of currency issued by companies in that form.
Then there's all these games that have in-game currency, right?
And so you play one of these games like World of Warcraft or whatever, and you have the
in-game currency.
And sometimes it can be converted back into dollars, and sometimes it can't and so forth.
And so, yeah, so there's been a long tradition of companies basically developing internal
economies like this and then having their customers kind of cut in in some way.
And yeah, that's for sure something that they can do with this technology.
When you compare fiat currency with these emerging digital currencies, do you think
that these digital currencies have solutions to some of the problems of traditional money?
And do you think that this is where we're going to move forward towards, that digital
currency is the future?
So I'm not an absolutist on this. So I don't think this is a world in which we cut over
from national currencies to cryptocurrencies. I think national currencies continue to be very
important. The big thing about a national currency to me, the thing that I think gives it real,
because national currencies are no longer backed by gold or silver or anything,
they're fiat, they're paper. The thing that really gives them value, in my view, is basically that it's the form of taxation, right? And so if the government
basically is going to legally require you to turn over a third of your income every year,
they're going to require you to do that, not only in the abstract, they're going to require you to
do that in that specific currency, right? I can only pay the IRS in dollars. I can't do it in
Japanese yen or euros or Bitcoin. If you function completely in Bitcoin.
Yeah. Well, if you as an individual function completely in Bitcoin, then you would just
convert at the end of the year to be able to pay your taxes. You convert into dollars for
the purpose of paying your taxes. Could you pay your taxes right now
when it's worth almost nothing? No comment. Depends.
I mean, how does that work?
The good news is, if your income is crypto, then you have a lot less income this year, too.
So isn't there a fear that the government would choose to tax you at the highest point?
Yeah, well, so actually, with Bitcoin, this is actually an issue in policy right now. It's a big dispute, which is: is it money or is it a commodity?
And right now, actually, I believe this is still the case.
I think trading in cryptocurrency, profits trading in cryptocurrency, I think are all
short-term gains.
I think they always get you in short-term gains because of how they classify... I have to go read back up on this.
But this is a hot issue in kind of how this stuff should be taxed and there are big
policy debates about that today.
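The stakes of that short-term versus long-term classification debate can be sketched with a toy calculation. The rates below are placeholder assumptions for illustration, not actual tax brackets, and nothing here is tax advice:

```python
def tax_owed(gain, holding_days, short_rate=0.37, long_rate=0.20):
    """Hypothetical illustration: a gain held a year or less is taxed at the
    (higher) short-term rate; held longer, at the long-term rate.
    Both rates are made-up placeholders."""
    rate = short_rate if holding_days <= 365 else long_rate
    return gain * rate

# Same $10,000 gain, very different bill depending on classification.
short_bill = tax_owed(10_000, holding_days=90)   # treated as short-term
long_bill = tax_owed(10_000, holding_days=400)   # treated as long-term
```

If all crypto trading profits were classified as short-term regardless of holding period, every gain would land in the higher bucket, which is exactly why the classification question is a live policy dispute.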
But there's so many of them.
Isn't that part of the issue?
There's so many currencies and they're all sort of vying for legitimacy.
Yeah, but that's also... I mean, it's good news, bad news.
It's also a big plus.
It's also a big plus in the following way.
We have a technology starting in 2009, right, sort of out of nowhere.
There is a prehistory to it, but really the big breakthrough was Bitcoin in 2009, the Bitcoin white paper.
We have this new technology to do cryptocurrencies, to do blockchains.
And it's this new technology that we didn't have that all of a sudden we have.
And we're basically in – we're now 13 years into the process of a lot of really smart engineers and entrepreneurs trying to figure out what that means and what they can build with it.
And that technology is blockchain?
Blockchain, yeah.
At its core is the idea of a blockchain, which is basically like an internet-wide database that's able to record ownership and all these attributes of different kinds of objects, physical objects and digital objects.
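The "internet-wide database that records ownership" idea can be sketched as a toy hash-linked ledger in Python. This is illustrative only; real blockchains add consensus, digital signatures, and peer-to-peer networking on top of this basic structure:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """Append-only chain of ownership records; each block commits to the
    previous block's hash, so history can't be quietly rewritten."""
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "record": "genesis"}]

    def record_ownership(self, asset, owner):
        block = {"prev": block_hash(self.chain[-1]),
                 "record": {"asset": asset, "owner": owner}}
        self.chain.append(block)

    def verify(self):
        """Tampering with any earlier block breaks every later hash link."""
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.record_ownership("domain:example.com", "alice")
ledger.record_ownership("collectible:1234", "bob")
assert ledger.verify()
ledger.chain[1]["record"]["owner"] = "mallory"  # tamper with history
assert not ledger.verify()                      # the chain detects it
```

The asset names here are hypothetical; the point is only that ownership records, once appended, can't be altered without invalidating everything after them.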
And how much of an issue is fraud and theft and infiltration of these networks?
So, think of the 1930s and Bonnie and Clyde.
When the car was invented,
all of a sudden it created a new kind of bank robbery.
Right. Because there were banks
and then they had money in the bank
and then all of a sudden people had the car
and then they had the Tommy gun,
which was the other new technology
they brought back from World War I.
And then there were this run of,
oh my God, banks aren't safe anymore
because John Dillinger and his gang
are going to come to town
and they're going to rob your bank
and take all your money.
And that led to the creation of the FBI.
That was the original reason for the creation of the FBI.
At the time, it was like this huge panic.
It was like, oh my God, banks aren't going to work anymore because of all these criminals
with cars and guns.
It's basically... It's like anything.
It's like when there's economic opportunity, somebody's going to try to take advantage
of it.
People are going to try criminal acts.
People are going to try to steal stuff.
Then you basically... You're always in any system like that.
You're in a cat and mouse game against the bad guys, which is basically what this
industry is doing right now.
What is causing this massive dip in cryptocurrency currently?
Oh, I have no idea.
You have no idea?
No clue.
It's just happening?
So the theory of financial markets. So this goes back to the logic emotion stuff we were talking
about earlier. So one view of financial markets, like the way that they're supposed to work is it's supposed to be lots of smart people sitting around doing math and calculating and figuring out this is fair value and that's fair value and whatever.
Like it's all a very like mechanical, like smart, logical process.
Okay.
And then there's reality.
And reality is people are like super emotional.
And then emotionality cascades.
And so some people start to get upset and then a lot more people get upset or some people start to get euphoric.
A lot more people get euphoric.
Is now a good time to like jump in when people are in full panic?
I have no idea.
I like how you're like avoiding that.
I'm going to avoid that.
I'm very good at avoiding this question.
So Ben Graham is sort of the godfather of stock market investing.
Ben Graham was Warren Buffett's mentor and kind of the guy who defined modern stock investing.
And Ben Graham used this metaphor in his book 100 years ago when he said, look, you need to think about financial markets.
And he was talking about the stock market, but the same thing is true for crypto.
He said, you think about it, basically think about it as if it's a person
and call it Mr. Market.
And he said, the most important thing to realize about Mr. Market
is he's manic depressive.
Like, he's really screwed up, right?
And he has, like, all kinds of crazy impulses,
and he has, like, good days and bad days,
and some days, like, his family hates him,
and some days he's, like, you know, whatever. His life is chaos.
And basically every day, Mr. Market shows up in the market and basically offers to sell you things at a certain price or buy things from you at a certain price.
But he's manic depressive, and so on different days he might be willing to buy or sell the same thing at different prices.
And you can spend a lot of time, if you want, trying to understand what's happening in his head. But it's like trying to understand what's happening inside the head of
a crazy person. It's probably not a good use of time. Instead, you should just assume that he's
nuts. And then what you do is you make your decisions about what you think things are worth
and when you're willing to trade. And you do that according to your principles, not his principles.
And so that would be the metaphor that I encourage people to think about.
Like these markets are just nuts.
There's a thousand different reasons why the prices go up and down.
I don't have any idea.
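Graham's Mr. Market metaphor can be sketched as a small simulation. The uniform mood model and the 25% margin-of-safety threshold are illustrative assumptions, not anything Graham or Andreessen specifies:

```python
import random

def mr_market_quote(fair_value, mood_swing=0.5, rng=random):
    """Mr. Market's daily quote: fair value distorted by a random mood
    ranging from despair (deep discount) to euphoria (big premium)."""
    mood = rng.uniform(-mood_swing, mood_swing)
    return fair_value * (1 + mood)

def graham_decision(quote, fair_value, margin=0.25):
    """Trade on your principles, not his: only act when his quote is far
    from your own estimate of what the thing is worth."""
    if quote <= fair_value * (1 - margin):
        return "buy"    # he's despairing; his offer is a bargain
    if quote >= fair_value * (1 + margin):
        return "sell"   # he's euphoric; take his money
    return "ignore"     # most days, do nothing

rng = random.Random(42)
quotes = [mr_market_quote(100.0, rng=rng) for _ in range(5)]
decisions = [graham_decision(q, 100.0) for q in quotes]
```

The key design point, matching the metaphor: the decision function never tries to model why Mr. Market quoted what he did; it only compares his quote against an independently formed estimate of value.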
The core question is what's the substance, right?
What's real?
What's actually legitimately useful and valuable, right?
And that's what we spend all of our time focusing on.
So when you focus on that, what do you find when you say what is valuable?
Like what are you looking towards? Are you looking towards long-term stability? Are you looking towards public interest in a thing? Like how do you decide what's valuable? Yeah. So we,
our lens is venture capital. We look at everything through the lens of technology. And so we look at
the lens of these things. We only invest in things that we think are significant technological
breakthroughs. So if somebody comes out with just an alternative Bitcoin or whatever, and even if
it's a good idea, bad idea, that's not what we do. What we do is we're looking for technological
change. And basically what that means is the world's smartest engineers developing some new
capability that wasn't possible before, and then building some kind of project or effort or company right around that. And then we invest. And then we only think
long-term. We only think in terms of 10 years, 15 years, longer. And the reason for that is
big technological changes take time, right? It takes time to get these things right, right?
And so that's our framework. We spend all day long talking to the smartest engineers we can find, talking to the smartest founders we can find who are organizing those engineers into projects or companies.
And then we basically try to back every single one of those that we can find.
And how do you establish this network?
And then we basically lock the money up.
We raise money from our investors.
We lock that money up for like a decade.
And then we try to help these projects succeed.
And then hopefully at the end of whatever the period of time is,
it's worth more than we invested. But we're not trading. We're not in and out of these things.
We're not gaming the prices. And how do you develop these networks where you are in touch
with all these engineers and do find these technologies that are valuable?
Yeah. So that's the core mission. So the venture firm I'm a part of now,
we're up to about 400 people. This is kind of what this organization does.
We've got about 25 investing partners.
This is what they do.
They spend all day basically.
We spend all day basically talking to founders, talking to engineers.
A lot of us grew up in the industry.
So a lot of us have like actual hands-on experience having done that.
And then a lot of our partners have been very involved in these projects over time.
It's a positive selection.
I mentioned adverse selection, positive selection.
We're trying to attract in.
We want the smartest people to come talk to us.
We want the other people hopefully to not come talk to us.
We do a lot of what we call outbound.
We do a lot of marketing.
We communicate a lot in public.
One of the reasons I'm here today is just like we want to have a voice that's in the outside world basically saying here's who we are.
Here's what we stand for.
Here are the kinds of projects we work on, here are our values, right? A good example,
the reason I told the Coinbase story of what Brian did is because like that, that's part of our,
like we think that's good that he did that. Other venture firms might think that's bad,
right? But like if you're the kind of founder who thinks that's good, then we're going to be a very
good partner for you, right? And then we spend a lot of time in the details. We have a lot of engineers working for us. A lot of us have engineering degrees. And so we spend a lot of time working through the details.
Mark, you're a fascinating guy. I really,
really enjoyed this conversation. I'm really glad we did it. Can we do it again?
Sure, of course. Let's do it again. Thank you very much. Thank you very much for being here.
I really, really enjoyed this. Good. All right. Anything else? Want to give people your social
media or anything? Do you want to do that? You want to get inundated by dick pics?
I'm all good.
That's such an inviting proposition.
Tell you what, maybe they could use the AI art.
Yeah, use some AI art. Send Mark some AI art.
To do some dick pics.
Thank you very much.
Bye, everybody.