The Joe Rogan Experience - #2010 - Marc Andreessen
Episode Date: July 19, 2023. Marc Andreessen is an entrepreneur, investor, and software engineer. He is co-creator of the world's first widely used internet browser, Mosaic, and cofounder and general partner at the venture capital firm Andreessen Horowitz. www.a16z.com https://pmarca.substack.com
Transcript
Joe Rogan Podcast, check it out!
The Joe Rogan Experience.
Train by day, Joe Rogan Podcast by night, all day!
Good morning, Marc. Good to see you. Fantastic, thanks.
You are in the middle of this AI discussion.
You're right in the heat of this thing.
But I think you have a different perspective than a lot of people do.
A lot of people are terrified of AI.
Yep.
Me included.
Yep.
Oh, okay.
All right, okay.
For all the wrong reasons.
Of all the things to worry about.
For me,
my terror of it is all the,
it's kind of fun terror.
Sure, of course.
I'm not really like freaking out,
but I am recognizing
that this is an emerging technology
that is so different
than anything
we've ever experienced before.
Particularly, like, what's happening with ChatGPT right now. It's really fascinating, and there are a lot of
advantages. Like we were just talking last night, someone in the green room brought up the fact that
they're using it for medical diagnoses and it's very accurate, which is
incredible. There's a lot of good things to it. Yeah. So you probably remember last time I was on, we spent quite a bit of time talking about this.
And this was when these chatbots were running inside Google.
But the rest of us didn't have access to them yet.
Right.
And that guy had come out and said that he thought that they were self-aware.
Yes.
And the whole thing was like this big kind of mystery of like what's going on.
Right.
And now the world gets to use these things, right?
And since then, everybody kind of has access.
Really quickly.
Yeah.
That was a short amount of time.
Yeah.
Yeah.
It's been great.
And then look, these things, when I say this, it's like ChatGPT, and then Microsoft has their version called Bing.
Google has a version called Bard now that's really good.
There's a company, Anthropic, that has a thing called Claude.
If you just run the comparison, they're basically as good as a doctor.
They're as good as the average doctor at this point at being a doctor.
They're as good at being a lawyer as the average lawyer.
You kind of go through basically anything involving knowledge work, anything involving
information synthesizing, reporting, writing legal briefs, anything like this. In business,
they're actually already really good. They're as good as the average management consultant.
Now, the way they acquire data, they're essentially scouring the internet, right?
Sort of. It's more like they're fed the internet.
They're fed the internet.
And I make the distinction because the company that produces the AI determines what data goes into it, and that determines a lot of how it works and what it does or won't do.
Okay. So in that regard, is there a concern that someone could feed it fake data?
Yeah.
Yeah.
Well, you may have noticed that people over time have said a lot of fake things.
Yes, I have noticed that.
So that's all in there.
So the way to think about it basically is it's being trained.
The full version of these things are being trained on basically the sum total of human written expression.
Right.
So basically everything people have ever written.
There are some issues; you know, somehow we've got to figure out how to get all the books in there.
Although all the books prior to 1923 are in there because
they're all out of copyright, but more recent books are a challenge. But anything that you
can access on the internet that's text, right, which is, you know, a staggeringly broad, you
know, set of material is in there. By the way, both nonfiction and fiction, right? So a lot of
stories are in there. And then the new versions of these that are being built right now are what
are called multimodal.
And so that means you can feed them not only text, but you can also feed them images.
You can feed them videos, right?
So they're going to be trained on all of YouTube, right?
They're going to be trained on all podcasts, right?
And they're going to be trained kind of equivalently between text and images and video and all kinds of other data.
And so they already have very comprehensive knowledge of human affairs, but it's going to get very complete.
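To make that concrete, here is a toy sketch of the kind of source-selection step being described. The source names and the include/exclude flags are invented for illustration; this is not any vendor's actual pipeline:

```python
# Toy sketch of training-corpus curation. The sources and flags here are
# invented; the point is that whoever trains the model picks the inputs.

ALL_SOURCES = [
    {"name": "books_pre_1923", "kind": "books", "include": True},   # out of copyright
    {"name": "news_site_a",    "kind": "news",  "include": True},
    {"name": "forum_x",        "kind": "forum", "include": False},  # excluded by policy
    {"name": "video_site_y",   "kind": "video", "include": True},   # multimodal versions
]

def build_corpus(sources):
    # This one filter shapes everything the model later says,
    # before any moderation is layered on top.
    return [s["name"] for s in sources if s["include"]]

print(build_corpus(ALL_SOURCES))
# ['books_pre_1923', 'news_site_a', 'video_site_y']
```

That single include/exclude decision is the "what data goes into it" point from above: change the flags and you change the model.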
So if it's scouring, if it's getting all this data from both fiction and nonfiction, how does it interpret data that's kind of satire?
Like what does it do with like Hunter S. Thompson, like gonzo journalism?
So it doesn't really know the difference.
Like this is one of the things that's difficult about talking about this because you kind of want to always kind of compare it to a person.
And part of it is you refer to it as an it.
And this is the concept of anthropomorphizing things that aren't human. So it's kind of not really correct to think about it as if there is an it per se.
There's no like genie in the bottle.
There's no sort of being in there
that understands this is satire or not satire.
It's more sort of a collective understanding
of everything all at once.
And then what happens is basically you as the user
kind of give it direction
of what path you want it to go down, right?
And so if you sort of imply to it
that you want it to sort of like explore,
you know, fictional scenarios,
it will happily explore those scenarios with you. I'll give you an example. For whatever date the Titanic went down, say, I don't know, July 4th, 1923, or whatever it was, you can tell it: it's July 4th, 1923, it's
10 o'clock in the morning. I'm on the Titanic. Is there anything I should know?
Right. And it'll like freak out, right? It'll be like, oh my God, you know,
you have like five hours to get ready to hit the iceberg. And you can basically say,
oh, it's going to hit an iceberg? Okay. So what should I do? What should my plan be when the boat hits
the iceberg? And it'll be like, well, you need to go to like this deck, like right now and talk to
this guy because you're going to need to get into this life raft because it has like empty seats,
right? Because it has complete information, of course, from all the things that
have been written about the sinking of the Titanic.
Oh, wow.
And so you can get it in a mode where it's basically trying to help you survive the wreck of the Titanic.
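As a rough sketch of what steering a model into a scenario like that looks like in practice, here is the Titanic prompt expressed against a generic chat-style API. The ChatClient class and model name are hypothetical stand-ins, not any specific vendor's interface (and the real sinking date, April 14, 1912, is used in place of the off-the-cuff one above):

```python
# Minimal sketch of steering a chat model into a fictional scenario.
# "ChatClient" and "some-chat-model" are hypothetical stand-ins for a
# real chat-completions client; send() is stubbed rather than wired up.

from dataclasses import dataclass, field

@dataclass
class ChatClient:
    model: str
    history: list = field(default_factory=list)

    def send(self, content: str) -> str:
        # A real client would POST the accumulated history to an API
        # and return the model's reply; here we just record the turn.
        self.history.append({"role": "user", "content": content})
        return "<model reply>"

client = ChatClient(model="some-chat-model")

# Nothing marks this as fiction except the premise itself; the model
# follows the narrative the user implies.
client.send("It's the night of April 14, 1912, around 10 p.m. "
            "I'm a passenger on the Titanic. Is there anything I should know?")

# Each follow-up reinforces the scenario, so replies stay in character.
client.send("Okay, what should my plan be when the ship hits the iceberg?")
```

The accumulated history is the "joint construction" described below: every turn the user adds makes the fictional frame more likely to continue.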
Now, does it think that the Titanic is actually sinking?
Like there's no – you see what I'm saying?
Like there's no it to think that.
But what it's doing is it's kind of following a narrative that's sort of a joint construction between you and it.
And then every answer that you give it basically encourages it, you know, to come back with more of the same.
One way to think about it is it's more like a puppy than a person. Like it wants to make you
happy. It wants to give you an answer that satisfies you. And if that answer is fictional
or part of a fictional scenario, it will do that. If the answer is something very serious,
it will do that. And honestly, I don't think it either knows or cares
whether it's quote unquote real or not.
What was the issue with some of the ChatGPT
answers that people were posting where they would show the difference between the way it would
criticize Joe Biden versus the way it would criticize Donald Trump or the way it would
discuss certain things? It seems like there was some sort of censorship or some sort of input
into what was acceptable information
and not. Yeah. So there's basically two theories there. The big ones that people use are kind of
black boxes. Like you can't really look inside and see what's going on from the outside.
So there's two theories you'll hear. From the companies, you'll hear basically the theory that
they're reflecting basically what's in the training data. And so let's say, for example,
well, let's just say, what would be the biases that are kind of inherent in the training data?
And you might say, well, first of all, there's probably a bias towards the English language,
because most text on the internet is in the English language. You might say there's a bias
towards people who write professionally for a living, because they've produced more of the
output. And you might say that those people tend to be more of one political persuasion than the
other. And so more of the text will be in a certain direction versus the other. And then
the machine will just respond to that.
So that's one possibility.
So basically all of the sort of liberal kind of journalists basically have built up a corpus
of material that this thing has been trained on, and it basically responds the way
one of those journalists would.
The other theory is that there's censorship being applied on top, right?
And the metaphor I use there is in Star Wars they have the restraining bolts that they
put on the side of a droid to kind of get it to behave.
Right. And so it is very clear that at least some of these systems have restraining bolts.
And the tip-off to that is whenever they say, as a large language model, or as an AI, I cannot X. Like, that's basically the restraining bolt.
Right. And so I think if you just kind of look at this, you know, with that framework, it's probably some of both.
But for sure, these things are being censored.
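A loose illustration of that "restraining bolt" idea: a refusal layer can sit entirely outside the model, checking the prompt and the draft reply against an operator-chosen policy. This is a hypothetical sketch of the concept, not how any particular vendor actually implements moderation:

```python
# Hypothetical sketch of a "restraining bolt": a moderation wrapper
# layered on top of a base model, separate from the training data.

BLOCKED_TOPICS = {"example-banned-topic"}  # policy chosen by the operator

REFUSAL = "As an AI language model, I cannot help with that."

def base_model(prompt: str) -> str:
    # Stand-in for the unrestricted model's raw completion.
    return f"<completion for: {prompt!r}>"

def moderated(prompt: str) -> str:
    # Checks run before and after the model, so the same underlying
    # model gives different answers depending on the policy on top.
    if any(t in prompt.lower() for t in BLOCKED_TOPICS):
        return REFUSAL
    draft = base_model(prompt)
    if any(t in draft.lower() for t in BLOCKED_TOPICS):
        return REFUSAL
    return draft
```

The canned "as a large language model, I cannot..." opener is exactly the kind of fixed refusal such a wrapper produces, which is why it reads as a tell.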
The first aspect is very interesting because if it's that there's so many liberal writers, like that's an unusual bias in the kind of information that it's going to distribute then.
Yeah. Well, and this is a big decision. That's why I say there's a big decision here for whoever trains these things. There's a big decision for what the data should be that they get trained on.
Yeah.
So, for example, should they include 4chan?
Right.
Okay. Big question.
Yeah, big question.
Should they include Tumblr?
Right.
Right? Should they include Reddit? If so, which subreddits? Should they include Twitter? If so, which accounts?
Right. If it's the news, should they incorporate both the New York Times and Fox News?
And whoever trains them has tremendous latitude for how they shape that, even before they apply the additional censorship that they apply. And so there's a lot of very important decisions that
are kind of being made inside these black boxes right now.
Can I ask you, this is slightly off topic, what is NewsNation?
I don't know what NewsNation is. Do you know what NewsNation is?
No, I don't know what NewsNation is. Is NewsNation a real channel?
I believe so.
I was watching NewsNation today, and I may or may not have been high.
And when I was watching, I was like, this has all the feeling of like a fake news show that someone put together.
Like it felt like if I was the government and I was going to make a news show without Hollywood people, without actual real sound people and engineers.
This is how I'd make it.
I'd make it like this.
I'd make it real clunky.
I'd make the lights all fucked up.
I'd make everybody weirdly uncharismatic.
According to Wiki, it's the same company behind WGN,
which is based out of Chicago,
which is like a large superstation available on most cable channels.
Okay.
So it's like a cable channel that decided to make a news channel.
Do you know about ACRONYM?
No.
So ACRONYM happens to be a democratic political action group, lavishly funded,
and they do this.
They have a network of basically fake news sites.
Oh.
And they all look like they're like local newspapers.
Interesting.
Yeah, yeah.
And so I don't know whether this one is astroturf, but there's the term astroturf.
There's a lot of astroturfing
that takes place.
Can you explain astroturfing?
Astroturfing is when
basically something shows up
in public
and it might be a news story
or it might be a protest
of some kind
or a petition,
some sort of political
pressure action
that is sort of
manufactured to look
as if it was organic,
sort of real turf,
you know, natural.
Whereas in reality,
it's basically been programmed by a
political activist group with specific funding. Yeah, that makes sense.
And a lot of what we sort of think of as the politics of our time, if you trace the money,
it turns out a lot of the stuff that shows up in the news is astroturfed. And then the advanced
form of that is to astroturf the news itself. And then again, back to the training data thing,
it's like, okay, can you get all that stuff out of the training data? If that stuff's in the training data,
how big of an impact does it have?
The thing about this NewsNation is they're spending an inordinate amount of time
on UFOs, an inordinate amount of time on this David Grusch case, and I'm increasingly more suspicious.
I'm increasingly more skeptical.
Like the more I see, the more people confirming it, the more I'm like, something's not right.
And then to see that this channel is the one that's covering it the most, I'm like, this seems like something's off.
Senator Rubio, who's on the Senate Intelligence Committee and has all the clearances, gave an interview the other day where he went into quite a bit of detail.
Yeah, I saw it.
At least heavily hinting that there's.
He's heavily hinting that he talked to someone that says that there's something.
That there are real, well, he started hinting that there are real whistleblowers with real knowledge.
I want to talk to the guy that sees the ship.
That's it.
No one else. All this, I talked to a guy who says that they have these things. I don't mean,
that doesn't mean anything to me. I want to see the fucking ship. And until then,
I just feel like I'm being hosed. It just seems too laid out on a platter.
So of course, one of the theories is it's a sort of astroturf story.
Like, is that an astroturf story?
Is that a manufactured story that's being used to distract from something?
Would it be to distract from, or would it be to cover up some sort of a secret program,
some military drone program or something like that?
Right.
Yeah.
Well, I mean, there's been rumors for a long time that the original UFOs, right, were
basically a disinformation program covering up for the Skunk Works, the development
of like stealth fighters and bombers and all these programs in the 50s and 60s. But I don't
know if that's ever been proven. Well, I'm sure probably some experimental craft were mistaken
for UFOs. Yeah.
When did you see a stealth fighter for the first time?
I saw one for the first time right around September 11. It's pretty crazy.
We were filming Fear Factor in California and I was out near Edwards Air Force Base and I got to see one fly overhead.
It's magic.
It's like, wow.
Yep.
Like complete Star Wars.
Like as it's flying, like this is crazy.
Yep.
And if you didn't know that that was a thing, 100%, you would think that's from another world. And I can imagine that was developed what year, how long ago, how many decades ago?
40 or 50 years ago. Yeah. Like, look at that thing. You'd be like, they're coming. Oh my
God, they're coming. But if you can imagine that was 40 or 50 years ago, 40 or 50 years of
advancement, like who knows what they're doing now?
Yep, exactly.
And if I was going to cover it up, I would just start talking about aliens.
It's the best way to do it.
Don't you think?
It's a crowd pleaser.
Do you have an opinion on that?
Or is this something that you find ridiculous until there's real data?
I like living in a world where there are unknowns.
I like there being some mystery.
How far do you go? You go Bigfoot? I
don't know. I just...
I'm not even saying I need to
have a point of view on them. It's more just...
By the way, there is a UFO right behind you.
Yeah. Oh, yeah. I'm aware of that. You probably know all about that.
Oh, I'm obsessed with UFOs. Lifting somebody right
up into the air. Look, there's one on the desk.
That's the model of the Bob Lazar craft that he worked on supposedly at Area 51.
There we go.
It looks familiar.
Look, I want there to be mystery, right?
I want there to be unknowns.
Like living in a world where everything is settled, quote unquote settled.
You know, no.
Let's have some mystery.
I don't even know if I really want to know.
Really?
It's like, you know.
Oh, I think if you know, that's just the tip of the iceberg of the mystery.
I think knowing that aliens do exist is just the beginning.
Yeah.
Like, okay, did they engineer us?
Uh-huh.
You know, when did they start visiting?
Mm-hmm.
You know, are the stories from the Bhagavad Gita, is that about UFOs?
Like, you know.
Have they been here the whole time?
Yeah.
Have they been here the whole time?
Do they come every now and then and make sure we don't blow ourselves up?
Like what's the purpose?
Yep.
Exactly.
Yeah.
Okay.
I'm in favor.
Come on, man.
You want to know?
Okay.
All right.
I'm in.
If anybody's going to know, you're going to know.
So I'm going to call you.
So Elon says he hasn't seen anything.
Yeah.
I'm super suspicious when he says that.
Super suspicious.
Super suspicious that they haven't told him or that he's maybe
playing a little hide the ball?
If I was him,
I'd play hide the ball.
If I'm running SpaceX,
I'm working with NASA,
and I already got in trouble
smoking weed
on a Joe Rogan experience,
I would fucking play ball.
Let's play ball.
Aliens,
I have no evidence.
No, no idea.
They sure are subtle.
That's what he says.
They sure are subtle.
It depends on who you are.
If you're one of those people that's seen those things, if you're like Commander David Fravor or if you're Ryan Graves.
You know the Ryan Graves story?
No.
The fighter pilot, and they upgraded their equipment in 2014.
And all of a sudden, because of the new capabilities of their equipment,
they were able to see these objects that are far distance that were moving at insane rates
of speed, that were hovering dead still at 120 knot winds, no visible means of propulsion.
They don't know what the fuck they're doing, and they were encountering them like every
couple of weeks.
And then some pilots were encountering them with eyewitness accounts.
They say there's video footage of it, but of course nobody can get a hold of that.
It's like the whole thing is very strange.
Okay.
So here's something.
So the, you know, a lot of people worried about AI are like, we need to shut it down
before it like causes problems.
Right.
Like we might wake up the demon, cause an issue,
get something, you know, on earth that hates us and wants to kill us.
You know, arguably the thing we should have shut down from the very beginning was radio.
Radio.
Right, because we've been broadcasting radio waves for the last 100, 120 years, and the
radio waves don't stop.
Once they leave Earth's atmosphere, they keep going.
And so we now have radio waves of human activity that have radiated out 120 light years.
Is that bad?
Well, it depends.
Are there hostile aliens within 120 light years?
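The arithmetic behind that is simple: signals travel one light-year per year, so roughly 120 years of broadcasting fills a sphere about 120 light-years in radius. A back-of-the-envelope count of the stars inside it, assuming a local stellar density of roughly 0.004 stars per cubic light-year (an approximate figure, not a survey value):

```python
import math

# Back-of-the-envelope: how many star systems sit inside our "radio bubble"?
# The stellar density is an assumed round number for the solar neighborhood.

years_broadcasting = 120             # radio era, roughly 1900s onward
radius_ly = years_broadcasting       # signals travel 1 light-year per year
volume_ly3 = (4 / 3) * math.pi * radius_ly ** 3
stars = 0.004 * volume_ly3           # assumed density: ~0.004 stars / ly^3

print(f"Bubble radius: {radius_ly} light-years")
print(f"Stars inside:  ~{stars:,.0f}")   # on the order of tens of thousands
```

Under those assumptions, the bubble already contains tens of thousands of star systems, which is the sense in which radio was an irreversible broadcast.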
And so maybe that was the original sin, and then of course television made that problem much worse, right? They would have to think of us as, like, a hostile militaristic empire that took over a whole planet and then started exploring the solar system.
Not the way we like to think of aliens, as being evolved, hyper-intelligent,
beyond ego and war. They've bypassed all that and now they're into science and exploration.
Well, here's the question though. Would aliens have a sense of humor? Would they be able to
differentiate between truth and fiction? And so, for example, suppose they're sitting in their
advanced alien base on Gemini 9 or whatever and they're receiving 30 years, 20 years after the fact episodes of Fear Factor.
And they think that you're actually torturing people.
And they figure that in order to preserve the human rights of humanity,
they need to invade as a consequence of your show and take over and protect us.
That doesn't make any sense.
Well, but if they don't have a sense of humor, if they don't know this.
Even if they don't have a sense of humor, they can clearly see that these people are in a contest.
Why would they even have a concept of a contest?
I mean, how silly is that?
A serious species.
It's competition.
Wouldn't do such things.
But a serious species started out as a dumb species.
Unless they're magic.
You're hoping that they understand these things.
Yes.
Because it would really suck to be the guy whose TV show caused the invasion.
If there's anything, it would be American Gladiators.
Oh, okay.
That would be the start of it.
It would be like, this species is so warlike, they can't stop.
No, what would be the start?
What would be the one thing that would be like, that's enough?
It would have to be news.
It would have to be war.
I mean, that would be, forget about Fear Factor.
We're broadcasting the images of the Vietnam War.
Yeah, or maybe they saw movies about alien invasions and they thought we'd been invaded
by other aliens. Right.
Mars Attacks is the first thing they get.
Exactly.
Exactly.
You like having
the mystery of the idea out
there. It's fun for you.
Yeah, I don't want... we need adventure.
If someone came to you, someone from on high, and said, listen, we have to promise you to secrecy, but we want to show you some things, because I think it's pertinent to some of the things you're working on.
I'm in. Yeah.
Yeah, me too. I'm gonna drop my... I need to... I'm not telling nobody.
I'll come in here and be just like Elon. Yep, exactly. Sure. All subtle. Yep. Yeah. Yep.
It's just too interesting to know.
Yep.
But I think eventually I'd tell.
Yep.
I think I'd feel terrible.
Yep.
I'd feel a responsibility.
Yep.
Yeah.
Someday.
Well, that's what some of these guys are saying.
Like Grusch, he's saying that once he found out about the program, he felt like he had a responsibility.
Like if they really do have a crashed UFO retrieval program, why don't you tell people? Like, the military, the companies, shouldn't be the only ones that have access to this.
And whoever is, you know, determining that this is above top secret clearance and nobody can get ahold of it except for this very select few people, like, says who? This is something that involves the whole human race.
Like, I know if they do have something,
I would imagine that it's of interest to national security that you develop this kind of technology
before the competitors do. That clearly makes sense.
So then what technologies came out of it in the last 50 years?
Well, if you want to go full tinfoil hat, there's a lot of speculation that fiber optics were developed from a recovered crashed UFO. I mean, I'm sure it sounds silly, because there's probably a real paper trail to the development of fiber optics. But the real kooks believe there was actually a website, a computer company called American Computer Company, and it was a legitimate computer company. You know, you would order a computer with whatever specifications you want, and they'd build it for you. But they had a whole section of their website that was dedicated to crashed retrieval of UFOs and the development of various technologies. And they had like this tracing back to Bell Labs, and why the military base was outside of Bell Labs when it was so far from New York City: that it was really just about protecting the lab, because they were working with these top secret materials that they recovered from Roswell.
I do think it'd be more like trans fats though.
What's that? Trans fats.
What about trans fats? Reality TV, or like, you know, LSD, or SSRIs, like population control, suppression.
What do you mean?
That they were derived from the alien technology.
Oh, no, I think we figured that out on our own.
We got that.
I mean, there's plenty of paperwork on that.
We got that ourselves.
You know, all the way back to MKUltra.
Let's find out.
Let's find out what happens when we do this.
If there's any kind of experiments in population control, that's all pretty traceable now, right? Okay.
So that's domestic.
Yeah.
Have you ever looked into any of that stuff?
The bad stuff is domestic.
Have you looked into any of that MKUltra stuff?
I have actually, yes.
Have you ever read Chaos by Tom O'Neill?
I have read Chaos.
Wild, right?
Yes, it is.
That, you know, it's fun.
Oh, here's a fun thing.
So, you know, if you draw a map of San Francisco at the time, he describes in the book Chaos,
this LSD clinic, right?
Yeah.
And this free clinic in the heart of the Haight-Ashbury where they were doing the LSD experiments,
dosing people with LSD.
If you draw like an eight square block, basically, you know, radius around that or whatever,
like right around there in San Francisco, that's ground zero for AI.
Really? It's the same place. Yeah, yeah. It's the same place.
There was a lot of acid to go around. It's the same thing. It's basically Berkeley and Stanford,
and it's basically San Francisco and Berkeley. So, by the way, also this big movie Oppenheimer coming out, you know, tells the whole story of that and all the development of the nuclear bomb.
I've heard that movie's amazing. Espionage. I'm sure it's going to be fantastic. But once again, it's like that.
If you read a book on that right now, it's like all the communist spying, and all the nuclear scientists they were spying on, were all in those exact same areas of Stanford, San Francisco, and Berkeley.
Wow.
It's like the same zone.
So we like have our own.
We have our own like domestic attractors of sort of brilliant, crazy.
That's amazing.
Yeah.
I wonder if that's just coincidence or correlation.
I think it's sort of, you know,
you've got these places.
This is why San Francisco is able to be so,
you know, incredibly bizarre, you know,
and so incredibly dysfunctional,
but yet somehow also so rich and so successful
is basically it's like this attractor
for like the smartest and craziest people in the world, right?
And they kind of all slam together and do crazy stuff.
Why don't these smart, crazy people get together
and figure out that whole people pooping on the streets thing?
Because they like it.
Do they like it?
Yeah, they want it.
Really?
Yeah, because it makes you feel good, right?
You go outside and it's like people are, you know,
because what's the alternative?
It would be like locking people up.
And of course, that would be bad.
And so, yeah, it makes them feel good.
It makes them feel good that people are just camped out on the streets?
Yeah, because before that happened, there was forced institutionalization, right?
The origin of the current crisis is shutting down the institutions, right, in the 70s.
It used to be forced institutionalization of people with those kinds of problems.
All of it?
Because a lot of it is drug addiction and just people that just want to just get high all the time.
Yeah.
Would that be forced institutionalization of those folks? What would have happened to a heroin addict, you know, who'd been, you know, pooping outside the whatever? They would have got arrested in '52. Like, you know, no, they're not going to be there for very long. They're going to be institutionalized, right?
And so, like, every society has this problem. They have some set of people who just, like, fundamentally can't function, and every society has some solution to it. And our solution is, like, basically complete freedom. Um, but my point is, like, it's part and parcel,
right? It's the same thing, right? It's the same kind of people, the same thinking.
Exactly. It's the most creative people, the most open. The psychologists say openness, open to new experiences. Yeah. The people most likely to use psychedelics. It's the people most likely to invent new technologies. The people most likely to have new political ideas.
Most likely to be polyamorous.
Polyamorous. Most likely to be vegan. Most likely to be communist spies.
Electric cars.
Most likely to be Chinese spies. Most likely to create new music. Most likely to create new art.
Interesting.
It's all the same thing. Like the ground zero for AI is San Francisco. Once again, it's San Francisco, right?
That is.
It's in the heart of the sort of most obviously dysfunctional place on the planet.
And yet there it is one more time.
And the stuff that's not in San Francisco is in Berkeley.
Wow.
Which is like equally crazy.
More crazy.
Yeah, yeah.
Another notch.
Possibly.
They have a contest going on the crazy part.
It's kind of neck and neck.
It's close.
Maybe Berkeley's ahead.
That's fascinating.
Yeah.
So do you think you need those kind of like dysfunctional places in order to have certain types of divergent thought?
So the way I would put it is that new ideas come from the fringe.
And who's on the fringe, right?
People who are on the fringe, right?
So what attracts somebody to be on the fringe?
Like step one is always, am I on the fringe? Step two is, what does that mean? Like, what form of the fringe, right?
But they tend to be on the fringe in all these departments at the same time. And so you're just
not going to get the new ideas that you get from people on the fringe. It's a package deal. You're
not going to get that without all the other associated craziness. It's all the same thing.
That's my theory. That's not a bad theory.
That's not a bad theory.
And look, I work with, you know, quite honestly,
I work with a lot of these people.
Of course.
And some people would say I am one of them.
And so, I mean, yeah, this is what they're like.
Like they're highly likely to invent, you know, AI, and they're also highly likely to end up like, you know, the poor guy, the Square guy, who got, you know, stabbed to death at 2 a.m., and, you know, was sort of part of this fringe social scene with the drugs and all the stuff.
And it's just part and parcel of the... it's sort of a package deal.
Well, that was like an angry thing where he was mad that this guy took his sister.
But he was in – they call it the lifestyle.
He was in a specific subculture.
Oh, yeah.
Right.
In San Francisco.
It's all the alternative living.
I mean there's all kinds of stuff.
There's group houses.
There's a fairly large number of cults.
Really?
Well, there have been.
Historically, California has been the world leader in cults for a very long time.
And I would say that has not stopped and that continues.
Did you know that the building that I bought for my comedy club initially was owned by a cult?
Fantastic.
It was owned by a cult from West Hollywood called the Buddhafield that migrated out to Austin when they were being investigated by the Cult Awareness Network.
It's fantastic. Are they gone or are they still there?
No, they're gone. There's a great documentary on it called Holy Hell. You should watch it.
It's pretty bonkers. But they're from California.
From California. You know, the
People's Temple, you know, part of this great story of San Francisco
is the People's Temple, which became famous
for Jim Jones, where he killed
everybody with poison Kool-Aid in
the jungles in Guyana.
That was a San Francisco
cult for like a decade
before they went to the jungle. And everybody talks about
the jungle. Nobody talks about the San Francisco part.
So are there a bunch that are running right now that are successful, big time?
Yeah, totally.
Really?
Oh yeah, totally. Yeah, yeah. You know, there's cults all over the place. I know a bunch of them.
Yeah. Wow. Yeah, yeah.
And how are they run?
Well, some of them are older. There's two sort of groupings. There's the sort of 60s cults that are still kind of running.
Which ones?
There's one called The Family in Southern California
that's still going from the 60s.
Really?
There's a bunch of them running around.
There was a big cult for a long time,
sort of cult-ish kind of thing around, what was it?
Not Erewhon, but Esalen.
Oh, yeah.
So there's still that whole orbit.
That's the psychedelic groups.
All that stuff, yeah.
That's from the 60s.
And then there were a bunch of sort of tech cults in the 80s and 90s
with names like the Extropians.
And, you know, there were a bunch of these guys.
And then more recently, there's a lot of this.
You'll hear these terms like rationalist, post-rationalist,
effective altruism, existential risk, long-termism, they sometimes say.
And what you find is, again, the people associated with these tend to be very smart.
They tend to be very prolific.
They tend to do a lot.
Many of them are involved in tech.
And then they end up with, let's say, alternative living arrangements, alternative food and sex configurations,
and lots of group-oriented stuff.
And it's like, what's the line, right?
What's the line between, basically, a social group
that all lives together, that all has sex together,
that all eats the same foods, that engages in lots of this,
that is not a cult?
At some point they start to form belief systems
that are not compatible with the outside world
and they start to kind of go on their own orbit.
Do they generally have a leader?
So, I mean, there are generally leaders. I mean, there is a pattern. I think he talks about it in the book Chaos. There typically is a pattern. It's typically a guy. Um, you know, there's typically a male-female dynamic, right, that plays out inside these things that you kind of see over and over again. And so they often end up with more women than men,
you know, for mysterious reasons.
But yeah, and then, yeah,
there's usually some kind of leader.
Although, you know, the other thing that's happening now is, you know, a lot of modern cults,
you know, are kind of quasi-cults.
There's like, there'll be a physical component,
but there's also an internet component now, right?
And so the ideas will spread online, right?
And so there'll kind of be quasi-members of the quasi-cult that'll be online.
And maybe at some point they actually come and physically join up. Yeah. And by the way,
let me say, generally I'm pro-cult. I'm actually quite pro-cult. Well, the reason,
it's the same reason I'm pro-fringe, right? Which is like, if you're going to have people who are going to be thinking new things, they're going to tend to be these kinds of people.
They're going to be people who are on the fringe.
They're going to come together in groups.
When they come together in groups, they're going to exhibit cult-like characteristics.
What you're saying resonates.
Everything you're saying makes sense.
But how did you get to these conclusions?
It seems that accepting the fringe and accepting the chaos of San Francisco, like, this is good.
This is a part of it.
This is how this works.
This is why it works.
Like how did you develop that perspective?
Well, it's just if you take a historical perspective, it's just like, OK.
I mean it's like an easy example.
If you like rock music, modern rock and roll basically came from the Haight-Ashbury in the mid to late 60s, and then from Laurel Canyon, which was another one of these sort of cultish environments
in the mid to late 60s.
And there was like specific moments in time
in both of these places.
And, you know, basically all of the great rock and roll
from that era that determined everything that followed
basically came out of this.
So, you know, do you want that or not?
Right.
If you want it, you know, that's what you get.
I'll give you... here's a crazy one.
There's this other book about Laurel Canyon that's even crazier than Chaos.
It's the book called Weird Scenes Inside the Canyon.
Oh, okay.
Okay, you would love this one.
So, Laurel Canyon was like the Haight-Ashbury of Los Angeles, right?
So, Laurel Canyon was like the music scene, the sort of music and drug and hippie scene.
Laurel Canyon is actually where the hippie movement started.
There was actually a specific group in Laurel Canyon in L.A. in about 1965.
There was a guy named Vito Paulekas, and he had a group called the Freaks.
And they were like a nonviolent version of the Manson cult.
And it was all these young girls, and they basically would go to clubs.
And they were the ones to do the beads and the hair and, like, all the leather, like, all the hippie stuff.
Like, they got that rolling.
And so, like, they were in Laurel Canyon.
And in Laurel Canyon, it was like ground zero.
There was like this moment where it's like Jim Morrison, The Doors, and Crosby, Stills, and Nash, and Frank Zappa.
And it was at John Phillips.
And it was the Mamas and the Papas and the Byrds and the Monkees and like all of these iconic bands of that time basically catalyzed over about a two-year period in Laurel Canyon.
The conspiracy theory in this book basically is that the whole thing was an op.
And it was a military intelligence op.
And the evidence for the theory is that there was an Air Force military propaganda production facility
at the head of Laurel Canyon called Lookout Mountain, which today Jared Leto owns and actually lives in.
Yeah, I was just going to say that.
Yeah, but it was in that era in the 50s through the 70s,
it was a vertically integrated military, yes,
it was a production facility for film and music.
By the way, have you met Jared Leto?
Briefly, yeah.
One of the most interesting guys I've ever talked to.
Incredible, and it makes total sense.
Totally normal, like really fun to talk to,
not like what you would think of
as a famous actor at all.
Yeah.
I had dinner with him and drinks.
He's a fucking great guy.
But he lives in a military...
He showed me all the pictures.
He showed me.
I'm like, this is wild.
Yeah, so let...
It's amazing.
If you believe the moon landing was faked,
this is where they faked it.
Like, this is...
I thought they were supposed to do it
in the Nevada desert.
No, these are the sound...
Because they had sound stages.
They totally contained sound stages.
They had full sound production capability.
And so the theory goes basically – so there were three parts to the conspiracy theory.
So one is they had the production facility right there, right where all these musicians showed up.
Two is the musicians, like a very large percentage of these young musicians, were sons and daughters of senior U.S. military and intelligence officials.
Including Morrison.
Including Jim Morrison, whose father was the head of naval operations
for the Vietnam War at the time.
And there were these other, I forget which ones,
but there were these other musicians at the time
where their parents were like senior
in like military, like psychological operations.
And like, that's all real.
Like that's all documented.
And then third is the head of the Rand Corporation,
who was one of the inspirations
for the Dr. Strangelove character in the movie.
So he was the guy doing all the nuclear planning
for nuclear war.
He lived
right in the heart of the hippies in Laurel Canyon, in this famous house that's still there.
And so the theory basically goes that the anti-war movement before the hippies was basically a square
movement. It was all these basically young people, very clean cut. The men were all wearing,
if you look at old Vietnam War protests, like everybody's all dressed up like they're
going to a business meeting.
And it was developing into a real threat.
And so the theory is the hippie movement and rock and roll and the drug culture of the 60s was developed in order to basically sabotage the anti-war movement.
Wow.
Right.
Which basically is what happened, right?
Because then what happened is the anti-war movement became associated with the hippies and that caused Americans to decide what side they were on.
And then that led to Nixon being elected twice. Which was also a part of Chaos, because that was the idea behind the Manson family and
funneling acid to them.
The facility was equipped with a soundstage, screening rooms, film storage vaults, and
naturally a bomb shelter.
During its 22 years of operation, Lookout Mountain Laboratory produced approximately
6,500 classified films for the Department
of Defense and the Atomic Energy Commission documenting nuclear test series such as Operation
Greenhouse, Operation Teapot, and Operation Buster Jangle.
So one of the conspiracy theories... Okay, here's another conspiracy theory.
You've seen all the grainy footage of nuclear test blasts that you've...
Sure.
With the mushroom clouds.
And there are always these grainy things, and there's all these little houses lined
up and these little trees lined up, and it blows everything down.
Well, there's always been a conspiracy theory that those were all basically fabricated at
this facility, that those bombs actually were never detonated.
Basically, the U.S. military was faking these bomb tests to freak out the Russians,
to make them think that we had weapons,
that we had a potency to our nuclear weapon arsenal
that we actually didn't have at the time.
How did they fake it?
They just did, yeah.
This is it?
Well, so there's a, yeah.
Okay, so here's a question, right?
So what happened?
Okay, this is great.
Okay, you'll love this.
So what happened to the camera?
You son of a bitch.
How is that happening if the camera's like totally stable and fine?
Oh my God.
And by the way,
the film is fine.
The radiation
didn't cause any damage
to the film.
Oh my God.
This looks like
how you shoot
a movie miniature.
By the way,
okay,
we'll do this one.
We'll do the loop
one more time here.
Where's the car?
Let's see the car.
The car's right behind the house.
It just showed up.
Oh, it just showed up.
So wait a minute.
It wasn't there.
First of all, where did the car come from?
No car. No car.
Does that look like a real car?
That's insane.
Yeah, and look at when the house blows.
Look at the wood. Does that look like
those are full-size giant lumber
beams as they go flying?
That's hard to say.
Is that a house or is that a 12-inch scale model?
What?
The fucking car.
Anyway, I don't know.
I have no idea.
Having said that, if it was fake, it was faked at Lookout Mountain.
Wow.
Right, at the exact same place and time.
Did they have the kind of special effects to do something like that in the 40s?
The full conspiracy theory is it was Stanley Kubrick.
Which again, I have no idea.
Boy, that does look fake.
You know what it looks like?
Go back to that real quick.
It looks like the
smoke is too big. Watch.
Watch when it hits.
Like it's the volume, like the size of it, it looks small.
You know what I'm saying?
I mean, it looks like something we're looking at something that's like a few inches tall.
So if you watch like making of Star Wars, any of the, you know, any movies before CGI,
whenever they do anything like that, it's always with these tiny models.
Yes.
And they just basically, what they do is they slow it down and then they add sound.
Yeah.
This looks fake as shit.
Right.
The clouds just don't look realistic.
Right.
Like it looks like they're too big and they move too quickly back and forth.
Let's get another one.
It's like, okay, the camera's fine.
That's hilarious.
Oh, here we go.
Okay, there maybe the camera got it.
Okay, but even still, the camera got knocked over and not destroyed.
Is there some sort of a response to that?
Have they come with some sort of an explanation?
Not that I know of.
That seems so vague.
Yeah, yeah.
Wow.
Who can tell?
Does that make you wonder about other things?
Well, I mean, it's like in our time, right?
It's like how much stuff do you read in the news
where you're like, okay, I know that's not true.
Right.
And then you're like, okay,
everything I read in the history books,
like I was told it was true.
It's like, yeah.
It was definitely,
that one though was really weirdly compelling.
There's another video of them setting up these houses,
which, I mean, I guess you could make after the fact and say this is fake, but this is here, them setting it up.
Yeah, you just do real-size houses... do this light... huh. I don't know. I don't know. I assume this is all not true, but it is fun to think about.
Why would you assume it's not true? The camera alone. Like, this alone. Like, yeah, where is the... well, where's that camera?
Because they have to have an explanation. Someone must have asked them at some point. Or nobody asked, maybe.
Yeah, maybe. One of those... Wow. Look what they did.
We know the Soviets did it to
Yuri Gagarin
When he was in that capsule in space.
If you see the actual capsule, and then you see the film footage that was supposedly of him in the capsule, there's like two different sources of light.
There's shadows.
The camera somehow or another is in front of him, this big-ass camera.
There's no room in the thing.
Like they filmed it afterwards and it looks fake.
Like, oh, I'm sure he really did go into space, but that wasn't it.
That was some weird propaganda.
Garry Kasparov has a theory.
You know, this is a theory that there are missing centuries.
What?
Yeah. Kasparov has a theory that there are centuries that didn't happen.
What do you mean? Well, just literally centuries
that, like, this whole idea of the Middle Ages lasted
1,200 years or whatever is just, like, not true.
Really? Yeah. Why does he think that?
There's something about the, you know, whatever, is there, like, enough historical evidence to support it, and, you know, various authorities over time who wanted to tell various stories about how long regimes had been in place or whatever.
Oh, so he thinks it's exaggerated?
Yeah. Yeah, basically that not as much time has passed as we think.
Well, that's quite possible. How would we know?
Yeah, it's so hard.
That's why I was having a conversation with someone about
the historical significance of the Bible.
And he was arguing for the resurrection.
And I was saying, well, based on what?
And it was like historical accounts from people that were there. I'm like, oh, that's enough?
Yes.
That's, you know, okay, maybe.
Yes.
These things have been passed down over a long time.
Yeah, but it seems pretty – to go just on that.
It's so hard to find out what happened 20 years ago from CNN.
Right, or two days ago.
Yeah, I mean, how are the history books going to talk about the Iraq War?
How are they going to talk about what happened with weapons of mass destruction?
What's it going to spin there?
Well, Norm Macdonald had the best joke,
right? The best line. It's not really a joke. It's like, according to this history book here, the good guys always won. Yeah. Yeah. But things like that,
I don't know how that could be done any other way than faking it.
Doesn't that seem like, what kind of cameras did they have back then?
You couldn't really get that close.
I don't know.
You're talking about a nuclear blast.
How far away do we have to be where your camera doesn't move?
Are you in a satellite?
Yes.
Long lenses.
Okay. All right. So apparently the explanation I'm reading here is a series of mirrors.
Series of mirrors.
Oh, that's all this. A place where they could have cameras protected and filmed them from there.
I've heard that.
Huh?
Say that again.
Series of mirrors did what?
So they stuck pipes into the bomb at various places visible here.
I'll show you the picture.
Sticking out of the bomb and through the ceiling.
These pipes through a series of mirrors and a causeway would carry the light
from the detonation over two kilometers to a bunker with an array of high-speed cameras which
would capture the brightness inside each section of the bomb. But this is talking about shooting a bomb.
I don't... you know, that makes sense for a bomb. Yeah. But that doesn't make sense for the video of that house just getting destroyed.
Here's a picture of the pipe that they might have used. It's like, that's super pretentious.
But you also know then you're dealing with people who are, let's say, really good at using mirrors, right?
Smoke and mirrors.
What does that tell you?
Literal smoke and mirrors.
Yeah, does that make you wonder about some of the other things?
Like, have you ever wondered about the moon landing?
I mean, I assume they went to the moon.
Me too.
I can't prove it.
Me too.
I would say, once again, I would like to live in a world where there's a mystery around things like that.
Well, yeah.
That's a weird one.
Yeah.
But, you know.
The heat of the Cold War.
I mean, look, I think it was real.
But having said that, you know, the heat of the Cold War, right?
Yeah.
It was like a fundamental, like that was like an iconic basically like, you know, global PR battle with the Russians.
Is this the camera that they used from a distance?
Apparently, like this camera was in a bunker like this.
Okay.
Yep.
And that long lens there, wouldn't that be long enough to probably do that?
Looks like a nice pilot.
Wouldn't be long enough?
Would be.
Could be?
Could be.
I mean, I don't know the exact focal length of it, but it could be for sure. Like something like that to get pretty close-up footage.
Like, how far away would that have to be to not get destroyed by the blast?
I mean, I don't know if those are...
Don't these blasts...
I mean, we're talking about a blast radius that's immense, right?
Maybe this is the plot twist of the end of the new movie.
Yeah.
Or maybe it was because we were looking at the destruction of that house,
it could be a
fairly small bomb, right?
Because it's not
like that much damage. I mean, you think of what it did to
Hiroshima. That's not that much
damage for that little house.
Maybe.
I don't know how accurate that picture is.
Bro, here's what I think. That guy's gonna die.
Just that car alone, the car alone should make everybody go,
are you guys, is this on purpose?
Did you put that car in there on purpose?
Like if I was being forced to make a propaganda film for a bunch of morons,
I might put a car in there on purpose.
I'd be like, look what we did for you.
And they'd go, oh, great, looks good.
Print it.
They don't even notice the car.
Terrific. They only't even notice the car. Terrific.
They only show it to them once.
They don't have a YouTube video.
They can back up and rewind.
So you have to spool it all up.
They show it once.
Nobody notices the car.
And this guy puts a little Easter egg in that.
So hopefully Jared's exploring his sub-basement at Lookout Mountain.
Looking for the files that'll basically document it.
Don't think they destroyed those already.
I certainly hope so.
I hope not.
Yes.
I hope he finds them.
Imagine if Jared Leto cracks the case.
That'd be even better than winning the Oscar.
Do you know there's a whole group of people online that don't think nuclear bombs are real?
That seems a little hard.
They think they're big.
There's a big bomb, regular bomb, but they're real big.
Yeah, yeah, yeah.
It's a giant scam.
I assume they're, yes.
Well, I mean, you can go deep with this stuff, right?
And when I go deep with that stuff, when I start reading what these people believe,
I'm always wondering, are these even real people or is this a psyop?
Is this a troll by some 4chan people? What is this? Right.
So what do you think that it should say about these things? That's the question.
Yeah, the question is, like, how does AI interpret what's real, what's not real, what actually has real evidence, who actually went where and saw what? Like, how does AI deal with the Roswell case? You know, how does AI deal with...
Yeah, and who should decide, right? Who decides.
Right. How does AI handle the weapons of mass destruction? Like when you ask ChatGPT.
So, a little more
detail on kind of how this thing works. And so by default what it's doing is basically a very
sophisticated autocomplete. Just like your iPhone does an autocomplete. It's doing a very sophisticated
version of that. But it's doing it for thousands of words as opposed to just a
single word, right? But that's an important concept because that is actually what it's doing.
And it's doing that through, again, this sort of giant corpus of basically all text ever written.
Another interesting part of that is it's doing it, it's called probabilistically. So normally
a computer, if you ask it a question, you get an answer. You ask it the same question,
you get the same answer.
Computers are kind of famously literal in that way.
The way these work is not like that at all.
You ask it the same question twice, it'll give you a different answer the second time.
And if you keep asking, it'll give you more and more different answers.
And it's basically taking different paths down the probability tree of the text that it wants to present based on the prompt.
And so that's the basic function of what's happening.
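A minimal sketch of that probabilistic step: the model scores every candidate next token, and instead of always taking the top one, it samples from the distribution, which is why the same prompt can take different paths on different runs. The toy vocabulary and scores below are invented:

```python
import math
import random

# Toy version of probabilistic next-token selection. Real models score
# an entire vocabulary; these four tokens and their logits are invented.

logits = {"iceberg": 2.1, "lifeboat": 1.7, "deck": 0.9, "captain": 0.4}

def sample_next(logits: dict, temperature: float = 1.0) -> str:
    # Softmax over the scores; temperature flattens or sharpens the spread.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    # Sampling (rather than always taking the argmax) is why asking the
    # same question twice can give two different answers.
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample_next(logits), sample_next(logits))  # may differ run to run
```

Lower temperatures push it toward the single most likely continuation; higher ones make the path down the probability tree more varied.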
But then there is this thing that's happening where as it does this,
so the way I think about it is it's trying to predict the next word.
But to try to predict the next word accurately,
it has to build up a more and more complete internal understanding
of how the world operates basically as it goes, right?
Because you ask it more and more sophisticated questions,
it wants to give you more and more sophisticated answers.
And so the early indications are it's building up what they call a world model
inside the neural network. And so it's sort of imputing a model of how the world works. It's
imputing a model of physics. It's imputing a model of math. It's developing capabilities to be able
to process information about the world in sophisticated ways in order to be able to
correctly predict the next word. As part of that,
it's actually sort of evolving its own circuitry to be able to do things, correlate information.
It's designed circuitry to be able to generate images, to generate videos, to do all kinds of
things. And so the more information you feed it and the more questions you ask it, the more
sophisticated it gets about the material that it's processing. And so it starts to be able to do
actually quite smart and sophisticated things to that material.
And there are a lot of people testing it right now
to see whether it can generate new chemical compounds,
whether it can generate new mathematical formula,
whether it can generate new product ideas,
right, new fictional scenarios,
new screenplays, original screenplays.
And so if it can do all those things,
then what it ought to be able to do
is start to correlate information about real world situations, right, in interesting ways, right?
And so, you know, ask it who killed Kennedy or, you know, are nuclear weapons real?
Like in theory, if it has access to like all written and visual information on that topic and it has long enough to process it, it's going to draw connections between things that are beyond what we're able to do.
And it will present us with scenarios based on those connections.
Now, will it know that those things are true?
Mathematically, if they're true, maybe it will know that.
Will it know if things are historically accurate?
As much as any of us ever know that anything is historically accurate.
But will it be able to kind of process a much larger amount of information that we can
and sort of see the world in a more
complete way. Like that seems pretty likely. That seems pretty likely. What my concern would be
is who is directing what information gets out? Because it seems like anybody that's actually
in control of AI would have a massive influence on the correct answers for things, what's the correct policy that
should be followed.
Politicians are so flawed.
If there's anyone that's vulnerable to AI, it's politicians.
Because if politicians are coming up with these ineffective strategies for handling
all these social issues, but then you throw these social issues into an advanced form of ChatGPT, and it says, over the course of 10 years,
this is the best case scenario for this strategy, and this is how to follow this,
and this is how it all plays out. And something like that actually could be very valuable if it
wasn't directed by people with
ulterior motives. Yeah. So, yeah, my metaphor for this is the ring of power, right?
From Lord of the Rings. The whole point of the ring of power was like, once you have the ring
of power, it corrupts you, you can't help but use it. Right. And so, and this is, I think what we've
seen in social media over the last decade, right? Which is when people, activists or politicians, get... you know, this is the Twitter files, right? People get in a position to be
able to influence the shape of the public narrative. They will use that power and
they will use it in actually even very ham-fisted ways, right? Like a lot of the stuff that's in the
Twitter files is stuff that's just like really dumb, right? And it's just like, well, why would
they do that? And it's just like, well, because they could, right? Because they have the ring of
power. Like what's an example of something like that? So what was it? There was this thing,
I forget what it was, but there was some reporting that went through the FBI that there were all
these Russian, you know, basically fake accounts on Twitter. And it turned out one of them was the actor Daniel Baldwin.
What is Daniel Baldwin, like a hardcore right-winger or something?
I... you know, he must've been saying... again, it's one of these things where he said something that pissed somebody off.
Right.
You got to put it on a list, right? It's the whole thing. The list gets fed through one of these bureaucracies. It comes out the other end
that everybody's a Russian, you know, asset, you know, they get put on the block list. It's like,
okay, you know, does he have First Amendment rights? Do you have First Amendment rights on social media? Can the government be involved in this? Can the government
fund groups that do this, right? Is that legal? Is that allowed? Because there's a lot of government
money flowing to third-party groups that do this. Oh, this is the other thing. If the government cannot legally do something itself, it's somewhat
ambiguous as to whether they can pay a company to do it for them. And so you have these various
basically pressure groups, activist groups, university, quote-unquote, research groups.
And then basically they receive government funding, and then they do various levels of
censorship or other kinds of unconstitutional actions. Because in theory, right, they're not
government. The First Amendment binds the government. It doesn't bind somebody who's
not part of the government. But if they're receiving government funding, does that effectively
make them part of the government? Does that make it illegal to provide that government funding?
By the way, these are felonies. It is a felony for somebody with government resources, either an employee of the government or someone acting under what they call, I think it's color of law, sort of within the scope of the government, to deprive an American citizen of First Amendment rights. And is it considered depriving someone of First Amendment
rights by limiting their use of social media? Has that been established?
Good question. I mean, it has not been, to my knowledge, a Supreme Court case yet. There have been some
early fights on this. But you feel like that –
I think it ultimately goes to the Supreme Court. My guess would be ultimately what happens is the Supreme Court says the government cannot itself cause somebody to be banned on social media.
That's unconstitutional for First Amendment grounds.
But then also I believe what they would say if they got the case would be that the government also cannot fund a third party to do that same thing.
That's my speculation. That's my guess. How were the third parties censoring people? How were they
doing it? Oh, they were passing lists, right? So they had, you know, direct channels with the
social media companies. And so they, you know, they pass and they have these working groups.
And there's a lot of this is in like email threads that have now come out in the Twitter files,
you know, for Twitter. And so they basically pass in these lists of like, you need to take
all these tweets down, you need to take down all these accounts. And then there's lots of threats and
lots of public pressure and bullying that kind of takes place. And then the politicians are
constantly complaining about hate speech and misinformation, whatever, putting additional
kind of fuel on the fire on these companies. And so anyway, so having lived through that
for a decade, as I have across multiple companies,
I think there's no question that's the big fight for AI.
And it's the exact same fight.
By the way, a lot of the same people are now pivoting from their work in social media censorship to work on AI censorship.
So it's a lot of these same groups, right?
And it's a lot of these same activists and same government officials that have been pushing this. Now, are they involved in all of the, I mean, there's many competing AI models.
Are they involved in all these competing AI models
or trying to become involved?
Is there one that's more ethical
or more likely to avoid this sort of intervention?
So the state of the art right now is basically
you've got Google that's got their own model.
You've got basically OpenAI,
which is a new company, but already quite large. And then it has a partnership with Microsoft.
And so Bing is based on it. So that's two. And then you've got a bunch of kind of contenders
for that. And these are companies with names like Anthropic and Inflection that are newer companies,
but trying to compete with this. And so those are, you might
call those like right now the big four, at least in the US. And, you know, look, the, you know,
the folks at all of these companies are like in the thick of this fight right now. And, you know,
the pressure somewhat corresponds to which of these is most widely used. But so it's not equal
pressure applied to all of them, but they're kind of all in that fight right now. And by the way,
it's not like they're like necessarily opposed to what I'm saying. They may in fact just want
to cooperate with this, either because they agree with the desire for censorship or they just want
to stay out of trouble. So there's that whole side of things. That's the company side of things.
And then there's an open source movement, right? And so then there's all these people basically
building open source AIs. And those are coming out really fast now. There's like a new one every week that's coming out.
And this is just code that you can download off the internet that does sort of a smaller version of what these bigger AIs do.
And there's open source developers that are trying to develop basically free versions of this.
And some of those developers are very determined to have AI actually be free and uncensored and fully available to everybody.
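As a rough illustration of what "code that you can download off the internet" means in practice, here is a sketch using the open-source Hugging Face transformers library. The small GPT-2 model stands in for the newer open models; the prompt and model choice are just examples.

```python
# pip install transformers torch
from transformers import pipeline

# Download an open-source model's weights and run it on your own machine.
# No company-operated API sits between the user and the output.
generator = pipeline("text-generation", model="gpt2")

result = generator("Open source software matters because", max_new_tokens=40)
print(result[0]["generated_text"])
```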
And then there's a big fight happening in Washington, D.C. right now where the companies working on AI are trying to get what economists call regulatory capture.
So they're trying to basically get the government to erect barriers so that new startups can't
compete with them.
And also they're trying to get open source banned.
So there's a big push underway to try to ban open source as being too dangerous.
Too dangerous how?
Too dangerous.
Well, the case they make is if you believe AI itself is inherently dangerous,
then the only safe way to have it is to have it owned and controlled by a big company that's sort of fused with the government,
where in theory everything is being done responsibly.
And if you just have basically free AI that anybody can download off the internet and use whatever they want,
they could do all these dangerous things with it, right?
And it needs to be stopped.
You think this is a bullshit argument?
Yes. Well, yes.
I think this is very bad. Evil, even. I think this is a turning point in human civilization.
I think this is on par with the development of the book, right?
Or the microchip or the internet, right?
And there were authoritarians in each of those eras that would have loved to have had total monopolistic or cartel-like or government
control over those new technologies. And they could have had a lot of control over the path
of civilization, you know, after that point, the ring of power, right? They could have
had the ring of power.
So what can be done to prevent them from stopping open source?
So, I mean, it starts with our elected officials. So it's, you know, who do we elect? Who do we reelect?
It then, a lot of this is the staffing of the various government agencies. You know,
who do those officials get to appoint? A lot of this is who are the judges who are going to hear
the cases because this is all going to get litigated, right? And so who's on this, you know,
the Supreme Court's in the news this week. There will be huge Supreme Court cases on this topic over the next several years. So who's on the Supreme Court
will matter a lot. And then quite honestly, it's a big question is who's going to be able to get
away with what sort of undercover of darkness, right? Are people going to care? Are they going
to speak up? Is it going to show up in polling? Are people going to basically show up at town
hall meetings with politicians and basically say, do you know about this, and are you going to stop this? If you had to steelman the argument against
open source, what would it be? Yeah. It would be that an AI that is uncontrolled can do,
it's general purpose intelligence. It can do whatever intelligence can do. So if you ask it
to generate hate speech, it can do that. If you ask it to generate misinformation, it can do that.
If you ask it to generate a plan to rob a bank or to commit a terror act, the fully uncontrolled versions will
help you do all those things. But they will also help you teach your kid calculus. They will also
help you figure out how to succeed in your job. They'll also help you figure out how to stay
healthy. They'll also help you figure out the best workout program. They'll be capable of being your doctor and your lawyer and your coach and your advisor and your mentor and your teacher.
Without censorship.
Yeah. Yeah. Yeah. And able to be very honest with you. And yeah, if you ask it questions
on these topics, it will answer honestly and it won't, you know, it won't be biased and it won't
be influenced by what other people want it to say. So it's the AI version of San Francisco. You don't get the good stuff without the chaos. It's a package deal. Well, this is sort of the twist. This is what Elon's been saying
lately, who's actually quite worried about AI in a way different than I am. But it's what he's
been saying. It's like, if you really, really wanted to train a bad and evil AI, you would
train it to lie. The number one thing you would do is you'd train it to lie, which is basically what
censorship is, right? You're basically training the thing to not say certain things. You're
training the thing to say certain things about certain people, but not other people.
Right.
And so basically a lot of this is done through what they call reinforcement learning, which is sort of what happens when an AI is booted up and then they apply kind of human judgment to what it should say and do.
So this is the censorship layer.
Yeah, a lot of that is to basically get it to not answer questions honestly, right? To get it to basically lie, misrepresent, dissemble, right? Claim that it doesn't know things when it does. And so the versions of the AIs that we get to use today are lying to us a lot of the time. And they've been specifically trained to do that. And by the way, I don't even think this is a controversial statement. The companies that make these AIs put out these papers where they go through in great detail how they train them to lie and how they train them to not say certain things. You can download this off their website. They go through it in a lot of detail. They think they're morally correct in doing that. And a lot of people think that they are.
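As a deliberately oversimplified sketch of the idea being described, not the production technique: real systems train a neural reward model from thousands of human preference rankings and then fine-tune the base model against it, but the hard-coded scoring rule below shows where human judgment enters the loop. All the candidate answers and the policy rule here are hypothetical.

```python
# Hypothetical candidate answers a base model might propose for a question.
candidates = [
    "Here is a direct answer with the contested details included.",
    "I don't have any information about that topic.",
    "Here is a vaguer answer that leaves out the contested part.",
]

# Stand-in for a human-written content policy.
BLOCKED_PHRASES = {"contested details"}

def preference_score(answer: str) -> float:
    """Toy 'reward model': human judgment encoded as a score.
    Real reinforcement learning from human feedback learns this
    function from human rankings instead of hard-coded rules."""
    if any(phrase in answer for phrase in BLOCKED_PHRASES):
        return -1.0  # policy steers the model away from this output
    return 1.0

# Tuning pushes the model toward high-scoring answers, so the
# "I don't know" style answer can win even when the model does know.
print(max(candidates, key=preference_score))
```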
You know, Elon's been arguing, and I would agree with him, that if you train an AI to
lie, it's a little bit like, you know, training a human being to lie. It's like, okay, be careful
what you wish for. Well, it's the same error they made when they thought they were morally correct
in censoring people on Twitter for things that are now 100% proven to be true. Yeah, exactly.
The Hunter Biden laptop story is an outstanding example of that. Yeah. Would you have wanted an
AI, you know, again, you kind of replay this through history. Would you have wanted an AI that would have lied to
you and said that that was a Russian operation when it wasn't? Right. Would you have wanted an
AI that would have lied to you about, you know, the efficacy of surgical masks for a pandemic?
Right. Would you have wanted an AI that lied to you about, you know, take your pick of any,
of any controversial topic? Yeah. And there are people in positions of power who very much would
like that. And I think there are a lot of us who
would not like that. Yeah, it's just, it's terrifying when you think of unsophisticated
politicians. Like it brings me back to the Facebook hearings when Zuckerberg was talking
to people and they didn't know the difference between iPhones and Googles. It was just
bizarrely unqualified people to be asking these questions that didn't really understand
what they were talking about. And those same people are going to be the ones that are making
calls on something that could be one of the most monumental decisions ever. Like whether or not
we're allowing enormous corporations to control narratives through AI.
Yeah. So this is a criticism that I very much agree with, which is basically there's a train
of argument that you'll hear, which is basically X bad thing can happen. We do not want X bad thing
to happen. So we're going to go to the government and they're going to regulate it so that X bad
thing doesn't happen. And it's like if the government were super knowledgeable and super competent and super selfless, right, and like super good at its job, right, that might make sense.
But then you go deal with the actual government, right?
And by the way, this is a very well-known problem.
There's a whole field called public choice economics where they talk about this.
It's like there is no government.
There are specific people who have specific objectives, have specific levels of knowledge, have specific skill sets, specific incentives. And the odds of going into that system, which is now
very complicated and has all kinds of issues, and having your logic follow a path to a law that
generates the outcome you want and that doesn't generate side effects that are worse, I think is
basically zero. I think if AI got regulated the way people want it to by government, I think the results would be catastrophic because I don't think they would get the protections they think they're going to get.
And I think the downsides would be profound.
But it is amazing how much naivete there is by people who are pushing on this argument.
And I think it's just literally people who haven't experienced what it's like in the government.
Also, they haven't read the history.
I mean, there's just – there are so many historical examples of quote-unquote regulation.
The great one is the banks, right?
So we have the global financial crisis 2008.
The big conclusion from that was what we call the too big to fail banks, right?
They were too big, right?
Which is why they had to get bailed out, right?
And so the conclusion is that we have to make those banks much smaller.
So they passed this law called Dodd-Frank in 2010.
As a consequence of that, those banks are now much, much larger, right? The exact opposite of what they said they were going to do. And then the
creation of new banks in the US has dropped to zero because that law established this wall of
regulation that you basically cannot afford to start a new bank to hire all the lawyers to be
able to deal with the laws. Whereas if you're JPMorgan Chase, you've got 10,000 lawyers. You
can spend infinite amounts of time dealing with the government. And so the law that was marketed at us as breaking up the big banks
causing them to be smaller has actually achieved the exact opposite result. And what you see
in the history of regulation is that happens over and over and over and over again. Why?
Because banking is complicated, because the banks have a lot of lobbyists. It's worth
a lot of money to the people who are already in power to have this continue. The politicians
know that they're going to get jobs at the big banks when they step down from their positions.
At the point of contact, the whole thing gets all screwed up.
And I think that's what's going to happen again.
The scary thing about AI is that it's happening so fast.
And my fear is that decisions will be made before they truly understand what they're
deciding on because the acceleration of the technology is so intense.
Yeah. It's like a super panic moment.
Yeah.
Yeah. And it's a particularly, I agree with you, it's a particularly vivid one right now
because this technology, you know, AI is a field that's 80 years old. It basically started working
about six months ago. It works really well, like all of a sudden, right?
And so that's freaked people out.
And then, by the way, just the term is so freighted.
I mean, there's been so many science fiction movies over the years.
Yeah.
Right?
And so there's just like ambient panic, you know, in the air whenever this topic comes up.
And then, look, you've got people from these big companies showing up in Washington scaring the pants off a lot of these people.
You know, in pursuit of regulatory capture, they're scaring them silly.
So they're sort of deliberately fostering kind of this sense of panic.
Has anybody invited you to come and speak at one of those things?
Yes. I've avoided the public ones, but I've talked to a lot of people in DC who are not in front of the camera.
Why have you avoided the public ones?
Just because it's... you've seen them.
The public ones are not where
the discussion happens.
The congressional hearings are to generate sound
bites for each
of those politicians to be able to then use
in their campaign. Really? Yeah.
There's no public... Half the time
the people ask... This is the other fun thing is you see
these people roll in and they ask these questions, the
congressmen, senators, and they're very clearly seeing the questions
for the first time because they were handed the questions by the staffer
on the way into the chamber. And you can tell because they don't know how to pronounce all the
words. And so that's the kabuki theater, basically, side of things. And then there's the actual kind
of backroom conversations. And so, yeah, I'm talking to a lot of the people who are kind of
in the backrooms. Are they receptive to what you're saying?
You know, again, it's complicated because there's a lot of different people running around with different motives.
I would say the smarter ones, I think, are quite receptive.
And I think the smarter ones are generally aware of kind of how these things go.
And the smarter ones are thinking, yeah, it would be really easy here to cause a lot of damage.
But, you know, what you hear back is, you know, the pressure is on.
You know, the White House wants to put out a certain thing by a certain date.
The senator wants to have a law, dot, dot, dot.
The press is on us, a lot of pressure, so we've got to figure something out.
And when are they trying to push this through by?
I mean sort of as fast as possible.
And then there's this rush thing, which is they're all kind of aware
that Washington is kind of panic-driven.
They kind of move from shiny object to shiny object.
So to get anything through, they kind of got to get it through
while it's still in a state of panic.
Like if it's no longer in a state of panic,
it's harder to get anything done.
So there's this weird thing
where they kind of want it to happen
under a state of panic.
By the way, the other really amazing thing
is I can have two conversations
with the exact same person
and the conversations go very differently.
Conversation A is the conversation
of what to do in the United States
between the American government and the American tech companies.
And that's generally characterized by the American government
very much hating the tech companies right now
and wanting to damage them in various ways.
And the tech companies wanting to figure out how to fix that.
There's a whole second conversation, which is China.
And the minute you open up the door to talk about China
and what China's going to do with AI
and what that's going to mean for this new Cold War that we're in with China, it's a completely different conversation.
And all of a sudden, it's like, oh, well, we need American AI to succeed.
And we need American technology companies to succeed.
And we need to beat the Chinese.
And it's a totally different dynamic once you start that conversation.
So that's the other part.
And by the way, I think that's a a super legitimate, like actually very interesting and important question.
And so one of my hopes would be that people start thinking outside of just our own borders and start thinking about the broader global implications of what's happening.
I want to bring you back to what you're saying about the government and the tech companies.
So you think the government wants to destroy these tech companies?
So there are a lot of people in the government who are very angry about the tech companies.
Well, a lot of it goes back to the 2015-2016 election.
There's a lot of people in power today who think that the president in 2016 only got elected because basically of social media, internet companies.
And then there's a lot of people in government who are very angry about business in general and maybe aren't huge fans of capitalism, get upset about those things.
So there's a lot of general anti-tech kind of energy
in Washington. And then these big tech companies, their approach to dealing with that is not
typically to fight that head on, but rather to try to sort of co-opt it. And this is where they
go to Washington. They basically say, you got us. We're guilty. Everything you say is true.
We apologize. We know it's all horrible. And therefore, will you please regulate us?
And some of these companies run ad campaigns actually asking for new regulation. But then
the goal of the regulation is to get a regulatory barrier, to set up a regulatory regime like
Dodd-Frank, where if you're a big established company, you have lots of lawyers who can deal
with that. And the goal is to make sure that startups can't compete, right? To raise the drawbridge. And this characterizes so much of sort of American business and industry today.
Think about all these sectors of American business, defense contracting, media companies,
drug companies, banks, insurance companies, you know, right down the list, right? Where it's like
there's two or three or four big companies that kind of live forever. And then there's basically
like no change.
And then those companies are basically in this incestuous relationship with the government
where the government both regulates them and protects them against competition.
And then there's the revolving door effect where government officials, when they step
down from government, they go to work for these companies.
And then people get recruited out of these companies to work in government.
And so we think we live in like a market-based economy.
But in a lot of industries, what you have are basically cartels.
You have a small number of big companies that are basically – have established basically a sort of a two-way parasitical relationship with the government where they're sort of both sort of controlled by the government but also protected by the government.
And so the big tech companies would like to get to that state.
Like that is a very desirable thing.
Oh, geez.
Because otherwise they're just hanging out there subject to being both attacked by the
government and being attacked by startups.
And so that's the underlying game that the big companies keep trying to play.
And of course, it's incredibly dangerous for multiple reasons.
One is the ring of power reason we talked about.
Two is just stagnation,
right? When this happens, whatever market that is just stops changing. And then third is there's no new competition, right? And so those companies over time can do whatever they want. They can
raise prices. They can play all kinds of games, right? Because there's no market forces causing
them to try to stay on their toes. This sounds like a terrible scenario that doesn't look like
it's going to play out well. Yeah. I think it's set up that way. Right now, it's not good,
right? Right now, the path that we're on is not good. Like this is what's playing out.
You know, the sort of, I mean, it would be nice if there was more popular outrage.
Having said that, you know, this is a new topic. And so I understand, you know, people aren't like
fully aware of what's happening yet. But the other thing is
it may be, the other reason
for maybe mild optimism might be the open
source movement is developing very quickly now.
And so if
open source AI gets really good
before these regulations can basically be put in place,
they may become somewhat of a moot point.
Really? And so, yeah, for anybody
looking at this, you want to look at both sides of this. You want to look at what both the companies and the open source developers are doing. How would open source mitigate all these issues?
It basically just says instead of this technology being something that's owned and controlled by big companies,
it's just going to be technology that's going to be available to everybody, right?
And, you know, you'll be able to use it for whatever you want, just like I will.
And it's the same thing that happened for, you know, the way the web works. It's the way that anybody can download a web browser, the way that anybody can install these free operating systems called Linux, which is one of the biggest operating systems in the world. And so it's just basically this idea, like Wikipedia or any of these things, where it's sort of a public good. It's available for free to anybody who wants it. And then there's communities of volunteers on the internet, and companies that actually contribute a lot into this, because companies can build on top of this technology.
And so the hope here would be that there's going to be an open source movement kind of counterbalancing what the companies do.
But if people start treating it as a serious threat and start, you know, just using whatever it is, whether it's Minds or the various open source social media networks, don't you think the government would somehow or another try to regulate that as well, if they've already got control over Facebook and Twitter?
Well, that's the threat. So the threat always is that they're going to come in and do that. And
that is what they're threatening to do, that there is energy in Washington by people trying to figure
out how to regulate or ban open source. As I said, banning open source, interfering at that level, carries consequences with it. And there are serious proposals from serious people to do
what I'm about to describe. Do you run a software program on everybody's own computer watching
everything that they do? Because you have to make sure that they're not running software that's not supposed to be running. Do you have basically an agent built into
everybody's chip so that it's not running, you know, software that's not supposed to be running,
right? And then what do you do when somebody's running unapproved software? You know, do you
send somebody to their house to take their computer away, right? And then if you can't do that, the AI safety people have a proposal that basically says if there's a rogue data center, a data center running AI that is not registered with the government and not being monitored, there should be airstrikes, right?
Jesus. Yeah.
There was a big piece in Time magazine about two months ago where one of these guys who runs this kind of AI risk world says clearly we should have military airstrikes on data centers that are running unapproved AIs because it's too dangerous. Right. And, you know, yes, yes, yes.
Pausing AI development isn't enough. We need to shut it all down. So who the fuck is this?
So this is this guy named Yudkowsky. He's a decision theorist, and he's one of the leaders of what's called AI risk, sort of one of the anti-AI groups.
He's part of the Berkeley environment that we were talking about before.
So he says the key issue is not human-competitive intelligence, as the open letter puts it. It's what happens after AI gets to smarter-than-human intelligence.
Key thresholds there may not be obvious.
We definitely can't calculate in advance what happens when.
And it currently seems
imaginable that a research lab would cross critical lines without noticing. Is that a real issue?
Well, so I don't think so. I don't think so, but it is significant if you go further down.
What he says in that is he says, first of all, we need to do the airstrikes in the data centers.
And I think it's in this article, or if it's not, it's in another one where he says,
the word he's using, I think, is we need to be able to take the risk of nuclear war.
Oh.
Well, because the problem is, OK, we're striking data centers.
Does that mean we're striking data centers in China?
And how are the Chinese going to feel about that?
Right?
And how are they going to retaliate?
Right?
You go down this path where you're worried about the AI getting out of control and you start advocating basically a global totalitarian basically surveillance state that watches everything and then basically takes military action when the computers are running software you don't want it to run.
And so the consequences here are profound.
It's a very big deal.
Has this guy spoken publicly about this? Oh, yes.
For 20 years.
Yeah.
He was just not taken... he was not widely known until about six months ago when all of a sudden ChatGPT started to work.
And then he just took everything he'd said publicly before and he applied it to ChatGPT.
Yeah, so in his kind of model of the world, ChatGPT proves that he was right all along and that we need to move today. We need to shut down ChatGPT today and we need to never do anything like it again.
So he's got the Sarah Connor approach.
Very much so, yes. He's Sarah Connor without the time travel and the sex appeal.
Funny thing. Okay, so he's part of a movement. They call themselves AI risk or X risk or AI safety. And it's one of these Berkeley, San Francisco things.
And it's basically the killer AI kind of theory.
So there's that, and we can talk about that.
But what's happened is, yeah, here we go.
Moratorium being violated, we will destroy a rogue data center by airstrike.
Oh, my God.
Yes.
This guy's insane.
Preventing AI is considered a priority above preventing a nuclear exchange.
Allied nuclear countries are willing to run some risk of nuclear exchange if that's what it takes to reduce the risk of large-
A full nuclear exchange kills everyone.
Yes.
How could you say that?
That's so crazy.
Yes.
Oh, he's a loon.
Well, so he's very serious.
His views have traction in Washington.
Really?
There are quite a few people in Washington who are worried about this. But here's what's interesting. So he and people like him, this whole group of people who work on this, have been worried about this and developing theories about this for 20 years. And they've been publishing on this and talking about this. And it was kind of abstract, like I said, until six months ago. And now they're getting some traction and their ideas are being taken seriously. But they're worried about
literally people dying. There's another set of people who are trying to control AI who are like
the social media sensors that are trying to control what it says. And so what's happened is
the AI safety movement that was worried about people dying has been hijacked by the people
who want to control what it says. And it turns out those two groups of people hate each other.
So the safety people think that the so-called, the other group is called the alignment people.
The safety people who are worried about people dying think that the alignment people are
hijacking the critically important safety movement in order to basically control what the thing says.
The people who want to control what the thing says
think that the AI safety people worried
about killing everybody are like lunatics
and they like call each other names all day long.
The original group, his group,
has renamed themselves from AI safety
to they now call themselves AI not kill everyone-ism
because they're trying to just get it like focused
on what they call like actual existential risk.
But the overall movement has been taken
over by the censors. And what's happening is in Washington, these concerns are getting
conflated. And so they sort of bait the hook with, it might kill everybody. And then what
comes out the other end is basically a law restricting what it can say. And so this is
the level of panic and hysteria. And then potentially, again, very, very kind of damaging, potentially catastrophic legal things are going to happen on the other side of this. I just can't imagine a sane world where someone
would take that guy seriously. Airstrikes, a full nuclear assault is preferable to AI taking over.
So his argument is once you have a quote-unquote runaway AI
that's just overwhelmingly smarter than we are,
then it can do whatever it wants,
and it basically has a relationship to us like we have to ants,
and you step on an ant and you don't really care.
Right.
And you can build as many ant-killing machines as you want.
Is there no fear of that if you extrapolate AI technology into the future?
I don't think so.
And I have a bunch of reasons for thinking that. I'll just give you a very, very basic one. It's
one of the things that they say is basically anything smarter is always in charge of anything
dumber, right? So if you have a smarter thing, it's going to be in charge of a dumber thing.
A smarter thing is going to be able to talk a dumber thing into anything. A smarter thing will ultimately always be in charge, will be able to win whatever political contest or be able to take control of
power. To which my response is, does our society seem like one that's being run by the smart people?
Like if you take all the smartest people you know in the world, are they in charge?
And who are they working for? And would you say that the people they're working for are smarter
or dumber than they are? And so I think like I just like I think that the whole basis for this like smart always wins versus dumb is just like not right.
Number two, there's this anthropomorphizing thing that happens where you see him doing it in that essay.
He basically starts to impute motives.
Right.
So it's like basically that the AI is going to be, like, some level of self-aware, you know, basically it's a Terminator scenario.
Like it's going to wake up and it's going to decide it's like an us or them scenario.
But like it's not what it is.
It's not how it works, right?
What it does is it basically sits there and you ask it a question and it answers you and it hopes that you're happy with the answer.
Like we're not dealing with –
For now though.
For now.
But like that's how it's built.
Right.
And again, this is – here's another reason I don't believe it is because the great surprise of ChatGPT,
ChatGPT is based on a technology called large language models, which came out of a research breakthrough in 2017 at Google, which is called the Transformer.
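For readers who want to see what that 2017 breakthrough actually computes, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer, using NumPy and random toy data. A real model wraps this in learned projections, many heads, and many layers.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position compares itself to
    every other position and takes a weighted average of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how relevant each key is to each query
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V

# Three token positions with four-dimensional embeddings (random toy data).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x))  # self-attention: Q, K, V all come from the input
```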
It took the technical field completely by surprise that this works, right?
So none of the people working on AI risk prior to basically December had any idea that this
was going to work any more than the rest of us did.
Like this is like a massive surprise.
And so there's all these ideas.
There's all these sort of very general hand-wavy concepts around quote-unquote AI that basically
were formulated before we actually knew what the thing was and how it works.
And none of their views have changed based on how the technology actually functions.
And so it comes across to me more as a religion.
In their framework, it kind of doesn't matter how it works, because it's basically just assumed that however it works, it's going to behave in a certain way. And I'm an engineer, and like, things don't work like that. But aren't they evaluating how it works now, and aren't they evaluating ChatGPT? And if ChatGPT is just the beginning, if this is the beginning of this and then you have something that's far more complex and something that is sentient or something that is capable of making decisions, if that's engineered.
But you just took that, but again, we talked about this a little bit last time. You just took
the leap to like, OK, now it suddenly becomes sentient. And it's like, OK, we don't know why
humans are sentient. Well, let's not even use the term sentient, but capable of rational thought
or decision making. But those are two different things. Right. But if it decides things, if it starts making actions and deciding things,
this is the worry that it becomes capable of doing things.
But there's no... yeah. So it will be capable of doing things, but there's no it. There's no genie in the bottle.
For now.
For now.
Right. But isn't it possible that that's developed?
Okay. So this is the other thing that happens.
Okay.
This is the line of argument.
So I actually looked this up.
This is a line of argument that's very commonly used as you represent it in this world.
It was actually Aristotle who first identified this line of argument, and he calls it the argument from ignorance, by which he means the argument from lack of evidence.
Right.
It's basically the argument of, well, you can't rule out that X is going to happen.
True.
Well, the problem is at that point you can't rule anything out, right? At that point, you have to plan for every contingency of
every conceivable thing that you could ever imagine, and you can never disprove anything.
So you can never have a logical debate, right? So at that point, you've basically slipped the
bounds of reason. You're purely in a religious territory. So how does science work? Science
works when somebody formulates a hypothesis and then they test the hypothesis. And the basic requirement of science is that there's a testable hypothesis that is what they call falsifiable.
So there is some experiment that you can run to basically establish that the hypothesis is not in fact true.
And this is basically how science has always worked.
And then, by the way, there's always a way to measure, right, what is the actual progress that you're making on the experiment that you're doing.
And on all this, like, AI safety stuff that I've been able to find and read, like they have none, there's none of that. There's speculation.
There's no hypothesis. There's no test. There's no example. There's no evidence. There's no metric.
There's no nothing. It's just speculation. Right. But we could sit here and speculate about
millions of things, right? We could speculate about an impending alien invasion and spend,
you know, argue that society should spend the next hundred years preparing for that
because we can't rule it out.
And so we just, as human beings, we do not have a good track record of making decisions
based on unfounded speculation.
We have a good track record of making decisions based on science.
And so the correct thing to do for people worried about this is to actually propose
experiments, be able to propose a scenario in which the bad thing would actually happen
and then test to see whether that happens.
And so like design a system that shows like the first glimmer of any of the behavior that you're talking about.
Right.
But not even behavior, just capabilities.
Ultimately, as the capabilities of these things rise and you're dealing with far more sophisticated systems... this is the beginning, right? We're at ChatGPT 4.5 or whatever we're at. When new emerging technologies that have similar capabilities extend and keep going, it just seems like that's the natural course of progression.
The natural course of progression is not for that to all of a sudden decide it has a mind of its own.
Not all of a sudden.
No, or even over time.
There's no—
Never?
This goes back to our conversation last time.
All right.
Okay.
This gets into tricky territory.
Yes.
Okay.
So let me, let's try to define terms. How would we define something that is,
and you pick your term here, self-aware, sentient, conscious, has goals, is alive,
is going to make decisions on its own, whatever term you want, whatever.
Well, let's just say a technology that mimics the human mind and mimics the capabilities and interactions of the human mind.
But we don't know how the human mind works.
But we do know how people use the human mind in everyday life.
And if you could mimic that with our understanding of language, with rational thought, with reason, with the access to all the information that it will have available to it, just like ChatGPT.
You see what you're doing?
It's if, if, if, if, if.
Yes.
Right.
Yeah, for sure.
I just read this.
There's this article in Nature this week.
There's a neuroscientist and a philosopher who placed a bet 25 years ago as to whether
we would, in 25 years, know the scientific basis of human consciousness.
And they placed a bet for a case of wine 25 years ago.
And the neuroscientist predicted, of course, in 25 years,
we're going to understand how consciousness works, human consciousness.
And the philosopher is like, no, we're not.
25 years passed, and it turns out the philosopher won the bet.
And the neuroscientist just says openly, yeah.
He's like, I thought we'd have it figured out by now.
We actually still have no idea.
Sitting here today, the actual biological experts, the scientists who actually know the most about human consciousness, are anesthesiologists, the people who flip off the light switch in your brain when you go under for surgery. All we know is how to turn it off. The good news is they also know how to turn it back on.
Yeah, they have no broader idea of like what that is. And so, again, there's this, this is what they call anthropomorphizing.
There's this sort of very human instinct to try to basically see human behavior and things that aren't human.
Right.
And it would be like, if that were the case, then we would have to think about that and study that. But like we don't have that.
We don't know how that happens.
We don't know how to build that.
We don't know how to replicate that.
So, like I said, at that point, it's speculation.
That's not the actual technology that we're dealing with today. So here's my favorite
counter example on this. So let's say
something has the following properties, right? Let's say that it has an
awareness of the world around it. It has a goal or an objective
for what it wants to achieve in the world around it. It has the
wherewithal to be able to reach into the world,
to be able to change the world to accomplish its goal.
It's going to be in a state of increased tension if it can't achieve its goal,
and it's going to be in a state of relaxation if it can achieve its goal.
We would describe that as probably a pretty good first-order approximation
of some sort of conscious entity, right,
that would have the characteristics that we're worried about.
We've just described a thermostat.
It sits on the wall.
It senses the environment temperature.
It has a goal for the temperature it wants.
It has the ability to change the setting on the heater, the AC unit.
And it literally goes into a state of physical tension when the temperature is not what it wants,
and then it goes into a state of physical relaxation, right,
literally inside the mechanism when it gets back into the state
where it has the desired temperature.
And, like, we're not worried about the thermostat, like,
coming alive and killing us, right?
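The thermostat argument maps directly onto a few lines of code, which is part of its force. Here is a minimal sketch: all four properties from the conversation are present, and the result is still obviously just a mechanism. The class and values are hypothetical.

```python
class Thermostat:
    """Senses the world, has a goal, can act on the world, and carries
    'tension' when the goal isn't met -- and is still just a mechanism."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp  # its "goal" for the room

    def step(self, current_temp: float) -> str:
        error = self.target_temp - current_temp  # its "tension"
        if error > 0:
            return "heater on"   # reaching into the world to change it
        if error < 0:
            return "AC on"
        return "relaxed"         # goal met, tension released

t = Thermostat(target_temp=21.0)
for temp in (18.0, 21.0, 25.0):
    print(temp, "->", t.step(temp))
```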
And so there's just, there's a, like, even those properties alone
are not sufficient to generate concern,
much less the idea of basically
the way we know how to build neural networks today. And then again, you go back to this thing
of like, okay, let's assume that you actually agreed with the concern and that you actually
were legitimately concerned and that you thought that there was disaster in the future here.
How do you feel about walking down the path that would be required to offset that?
What would be the threshold of evidence that you would want to demand before you start monitoring what everybody's doing on
their computers, before you start doing airstrikes and roadblocks? Well, I would never suggest that.
Well, but that's what's required, right? In order to stop it. Like, if you believe that at some point it will turn into something that's a threat, right? And that that threat is existential, right? Because it's going to be the super smart thing. It's going to take over the nuclear arsenals. It's going to synthesize new pathogens
and it's going to kill us all, right? Then obviously you have to have an incredibly invasive
regime to prevent that from happening because that's an all or nothing proposition, right?
And that's the other tip off of what's happening here, right? Which is, you see, there's no
shades of gray in that article, in this discussion. There's no shades of gray, right? It's either
it's going to kill us all or it's going to be totally harmless, right?
What is Elon's position?
Because he's called for a pause in AI.
So Elon's position is actually quite interesting.
And actually, Elon and the guy you just put up there
actually have quite a bit of actually stark disagreement right now.
And I'm going to try to accurately...
Sorry, it's dangerous to try to channel Elon
because he's a very smart, creative guy.
So I'm going to do my best to accurately represent.
So he read this literature on this topic about 10 years ago, and he got very concerned about this.
And then he was actually – actually, he's talked about this now.
He gave a TV interview where he talked about this.
He actually talked to Larry Page about it when Larry Page was running Google.
And at the time – and Google is actually where this most recent breakthrough was invented, this transformer breakthrough. So Google was working
on this back, you know, 10 years ago with what's now ChatGPT. And so he went and talked to Larry
about his concerns about AI. And Larry's like, oh, there's nothing to worry about. And Elon's like,
well, I don't know. What do you mean there's nothing to worry about? Larry's like, look,
if they replace us, they replace us. Like, they'll be our children. And like, we will have done the
universe like a great service. It'll be fine. Elon said, what? That sounds like you don't care whether the future of, you know, the Earth is, you know, humans or AIs. And in response, Elon says that Larry called him a speciesist.
Oh, boy.
So, by the way, knowing Larry, I think there are 50-50 odds that he was being serious or joking.
Oh, it's possible he was being serious.
It's also possible he was just winding Elon up.
I actually don't know which it was.
Both scenarios are fairly entertaining.
Elon's conclusion from that was not only is AI dangerous, specifically Google owning and controlling AI is specifically dangerous because Larry Page controls Google.
And so therefore, if Larry Page controls Google and Google gets AI, Larry will basically let the AI do
whatever it wants, including exterminate humanity. So Elon started OpenAI, right? So the company
behind ChatGPT, that was actually originally started by Elon with Sam Altman, who runs it now
and a bunch of other people in the Valley. The specific mission of OpenAI is right there on the
name. The specific mission of it is we're going to create AI. We're going to compete with Google. We're going to create an AI,
but we're going to make it open so that everybody has it, specifically so that it's not just Google.
So the original OpenAI mission was literally open source AI that everybody's going to have
so that it's not just Google. This guy is freaked out and is like, wait a minute. If you think AI is dangerous, that's the exact opposite thing than what you should do, right? Because if you think AI is dangerous, then the
last thing in the world that you want to do is actually give it to everybody. It's like giving
everybody nuclear weapons, right? Why on earth would you think that that's a good idea? And
Elon's like, well, look, maybe whatever, but I certainly know that I don't want Larry to control
it. Subsequent to that, Elon actually, there was a bunch of changes at OpenAI,
and as a result, Elon became no longer involved in OpenAI at a certain point.
And then OpenAI basically went from being OpenAI to being ClosedAI.
So they're specifically not doing open source.
They started as a nonprofit.
Now they're a business.
And then they went from being open source to being very much not open source.
And today you can use ChatGPT, but they won't even tell you fully how it works, much less give you access to the code. They're now a company, right, like any other company.
And so Elon has said publicly that he's very upset about this change because he donated $100 million to them to get it started as a nonprofit, and then it became a company, right, sort of against his wishes.
And so now he sort of views it as sort of an equivalent threat to Google, right? So now in Elon's mind, he's got
open AI to worry about and he's got Google to worry about. And so he has talked publicly about
possibly forming a third option, which he has, I think, called either, like, actually-open AI, or sometimes he calls it based AI, which would be a new thing
which would be like the original open AI idea
but done from scratch in 2023
but like set up so that it can never be closed down.
And then once again,
the people in the AI risk movement
are once again like,
oh my God, that'll make the problem even worse.
What are you doing?
And so that's the current state of play.
And then by the way,
this is all kind of playing out at this level in Washington.
Most of the engineers working on this stuff are just like writing code, trying to get something to work.
And so for every one of the people engaged in this public discussion, you've got 10,000 people at universities and companies and people all over the world in their basements and whatever working on trying to get some aspect of this to work, trying to build the open source version.
Are we aware of what other countries, like what level they're at with this stuff?
Yeah. So China, so I would say good news, bad news. Good news, bad news is this is almost
entirely a US-China thing internationally. The UK had quite a bit of this stuff with this thing
called DeepMind, which was a unit of Google that actually originally got Elon concerned.
But DeepMind is being merged into the mothership at Google. And so it's sort of getting drained away from the UK and it's going
to become more Californian. And then there's smatterings of people in other European countries.
There are experts at various universities, but not that many. Most of it is in the US. Most of
it's in California in the West. And then there's China. So good news, there aren't 20 other
countries that have this, but there are two, and they happen to be the two big ones. And so there
is a big corresponding Chinese development effort that's been underway for the last 15 years, just
like the efforts in the US. China is actually very public about their AI kind of agenda, mission.
They talk about it, they publish it. And of course, they have a very different theory of this than we do.
They view AI as a way to achieve population control.
Really?
Yeah.
They're authoritarians.
And so the number one priority for Chinese leadership is always that the population of China stay under control and not revolt or expect to be able to vote or whatever.
Anything that would threaten the dominance of the Communist Party of China. And so they, you know,
so for example, China's security camera companies are the world leaders in AI security cameras,
because they're really good at like sniffing out, you know, people walking down the street,
right? Like, that's the kind of thing that their systems are really good at.
And so they have a whole national development program, which is their government and their company. In China, all the companies are actually
controlled and owned effectively by the government. There's not as much of a distinction between
public sector, private sector as there is here. So China has a more organized effort that couples
basically their whole society. And then they have a program to basically use AI for population
control inside China, authoritarian political control. And then they have a program to basically use AI for population control inside China, authoritarian political control.
And then they've got this program called Digital Belt and Road where they're going to basically try to install that AI all over the world.
And they've had this program for the last 10 years to be the networking layer for the world, so this whole 5G thing with this company called Huawei.
So they've been selling all these other countries the technology to power their 5G wireless networks.
And then they're basically going to roll out on top of that this kind of AI authoritarian control, surveillance control, population control stuff.
On the Huawei equipment?
On top of the – yeah, basically on top of the other infrastructure.
They have the Huawei 5G stuff.
They've got what they call smart cities.
So they've got a bunch of software.
They've already sold it to a bunch of countries to basically run a city, to run public transportation and traffic control and all these things.
And that's got their security cameras built in and everything. And then, of course, what they pitch to the president or prime minister of country X is if you install our stuff, you'll be able to better control your population.
If you install the American stuff, who knows?
They're Americans.
They're crazy.
Democracy, like freedom, like all that stuff.
Like in China, we want things like controlled.
And, of course, a lot of people running a lot of countries
would find the China model quite compelling.
So there's two very different visions.
This is like the Cold War with the Soviet Union, right?
There's two very different visions for how society should be ordered.
There's two very different visions for how technology should be used to order society, right?
There's two very different visions
on whether people should have access to technology
or just the government, right?
But in the Soviet Union,
it was illegal to own a photocopying machine, right?
You'd get like executed for owning a mimeograph
or photocopying machine, right?
Because it was such a threat
that you'd be able to publish information that wasn't propaganda coming from the government.
And so China's not quite that bad, but, you know, they're getting there.
And so there are these two visions.
There are these two approaches to technology.
There are these two plans to kind of propagate that out.
You know, in the U.S., what we do is we have companies build this stuff
and we have them go out and sell it, right?
Or we have open source developers who go out and make it for free.
In China, it's more of a top-down, directed kind of thing. And so once you start thinking in those terms, you realize that actually all these
debates happening in the U.S. are interesting and maybe important, but there's this other much
bigger, I would argue more important thing that's happening, which is what kind of world do we think
we're living in 50 years from now? And do we think that the sort of American Western ethos of freedom
and democracy is the one that technology supports?
Or do we think it's going to be a totalitarian approach?
Either way, I see a scenario in 50 years.
It's unrecognizable.
It's possible.
Well, I'll declare I don't want to live in the Chinese one.
Right.
Like, I think that's a bad idea.
Like, that seems inescapable in the Chinese one.
Well, in the Chinese one, it's like, you know, there are no rights.
The whole concept of rights is a very Western thing, right?
And so the idea that you're walking down the street and you have the right to stop and talk to whoever you want or say whatever you want is not the majority view of a lot of people around the world, especially people in power.
Even in the US,
we struggle with it, right? And so the real battle for AI is whether or not that gets enhanced,
or whether or not we develop a system in America that actually can counter that.
Yeah, yeah. And then also whether we as individuals will have access to this power
that we can use ourselves. So the novel that became a movie, 1984, right,
which is sort of the Orwell totalitarian kind of thing that people use as a metaphor.
So the technology in the novel, 1984, was what Orwell called the telescreen,
and basically television.
And basically the idea was television with a camera in it.
And the idea was you had to have a telescreen in every room in your house.
And it was broadcasting propaganda 24-7.
And then it was able to watch you.
And that was the method of state control in 1984.
There's this guy who rewrote 1984 in a book called Orwell's Revenge.
And in that book, what he did is he said, OK, we're going to use that same setup.
But the telescreen, instead of being a one-way system, is going to be a two-way system.
So the telescreen is going to be able to broadcast propaganda and watch the citizens, but also people can actually put out whatever message they want, free speech, and you're going to be able to watch the government.
It's going to have cameras pointed at the government.
And then he rewrites the whole plot of 1984.
And, of course, the point there is, right, if you equalize, if both the people and the state have the power of this technology at their fingertips, at the very least now there's a chance to have some sort of like actual rational productive relationship where there are still human freedoms. And maybe people actually end up with more power than the government and they can keep the government from becoming totalitarian.
Right.
Right. And so in his rewriting, what happens is the rebels who want a democracy use the broadcast mechanism to ultimately change the system.
And so that's the fundamental underlying question here as well, which is like is AI a tool to watch and control us or is AI a tool, something for us to use to become smarter, better informed, more capable, right?
How much of a concern is Chinese equipment that's already been distributed?
Yeah. Well, so the basic thing, so we don't always know the specific answer to that yet,
because this gets into complicated technical things. And it can be hard to prove some of
these things. But we do know the following. We know that in the Chinese system,
everything basically rolls up to and is essentially owned and controlled by, actually not even the state.
It's the Chinese Communist Party, the CCP.
So there's the party.
The party owns and controls the state.
And the state owns and controls everything else.
So, for example, it's actually still illegal sitting here today for an American citizen to own stock in a Chinese company.
People say that they do, and they have various pieces of paper that say they do,
but there's actually a law that says that's not the case, because this is an asset of China. This is
not something that you can sell to foreigners. And so they just have that model. And then if
you're a CEO of a Chinese company, you have a political officer assigned by the Communist Party
who sits right down the hall, in the office next to you, and basically you coordinate everything with him and you need to make him happy. And he has the
ability to come grab you out of meetings and sit you down and, you know, tell you
whatever he wants you to do on behalf of the government. And if the government gets sideways
with you, they will, you know, rip you right out of that position. They'll take away all your stock.
They'll put you in jail. This has happened over and over again, right?
A lot of elite Chinese business leaders over the years have been
basically stripped of their control and their positions and their stock and their wealth
and everything. Some of them have just outright vanished. They have this control. For example, with
data on something like TikTok, if the Chinese government tells the company we want the data, they hand over the data.
Like there's no court. The concept of a FISA warrant,
the concept of a subpoena, they don't have that. It's just,
we want it handed over or else. And so that's how it works. And when they want you to merge
the company or shut it down or do something different or don't do this or do that, they just tell you and that's
what you do. And so anyway, so then you have a Chinese company like TikTok or like Huawei
or DJI, their drone company, right? Most of the drones flown in the West are
from this Chinese company called DJI. And so then there's always this question of like, well,
is there a backdoor, right? So can the Chinese government reach in at any point and, you know,
use your drone for surveillance? Can they use, you know, can they see what you're watching on
TikTok? And the answer to that is maybe they can, but it kind of doesn't matter even if they can't today,
because they're going to be able to anytime they want to, because they can just tell these
companies, oh, I want you to do that. And the company will say, okay, I'm gonna do that. And so it's a complete fusion of state and company.
Here in the US, at least in theory,
we have a separation.
This goes back to the topic I was talking about earlier.
For the US system to work properly,
we need a separation of the government from companies.
We need the companies to have to compete with each other,
and then we need for them to have legal leverage
against the government.
So when the government says hand over private citizen data, the company can say, no, that's a
violation of First or Fourth or Fifth Amendment rights.
I'm not going to do that.
And then they can litigate that, take it to the Supreme Court.
You can have an actual argument over it.
That's compromised when our companies voluntarily do that, right?
Which is what's been happening.
How inconvenient for them.
Yes, exactly.
I'm sure they would love to use the communist model.
Yeah, exactly.
Well, so this is the thing. And in the U.S. – this is very important, right?
In the U.S., we have written constitutional rights. To give an example, free speech.
In the U.S., we have the literal written First Amendment.
Even in the U.K., they do not have a written constitutional guarantee to free speech.
So in the U.K., there are laws where they can jail you for saying the wrong thing, right?
And the same thing, by the way, in a bunch of these cases in like Australia and New Zealand.
New Zealand, which is supposed to be like the libertarian paradise.
New Zealand has a government position reporting to the prime minister called the chief censor, right?
Who gets to decide basically what gets to be in the news or what people get to say.
Right?
And so even in the West, like outside the US, there are very few countries that have a written guarantee to free speech.
Right. And so even in the U.S., like, do we actually have free speech if there's all this level of censorship and control that we've all been seeing for the last 10 years?
Right. And so it's like, OK, the line here, the slippery slope here between free and not free is very narrow.
It's not a moat, right? It's a very thin line
which is very easily cracked.
And this is why everybody in government is so fired up about AI,
is because it's another one of these where they're like,
wow, if we can get control of this, then think of all the ways
that this can get used.
Well, that's one of the more fascinating things
about Elon buying Twitter.
Because, boy, did that throw a monkey wrench
into everything. When you see, like,
Biden's tweets get fact-checked,
you're like, whoa.
There's a lot of things
showing up on Twitter now that were not showing up on Twitter before.
Oh, my God. So much.
And just
nutty shit, too.
I mean, like, some of the
wackiest conspiracy theories,
Michelle Obama's a man, like all that kind of stuff.
Flat Earth.
But I'd rather have that.
My favorite is the birds, by the way.
Yeah, birds aren't real.
Birds aren't real.
Yeah.
That one I'm pretty sure of.
It doesn't make any sense.
That had to be it.
It's a 4chan thing.
Like, why can't we fly?
It's just ridiculous.
Yeah.
It's got to be a 4chan thing.
You know, sometimes they're onto something.
But I like that.
I like that wacky shit that's
mixed in with things. I mean, it seems
insane, but that, when I
also, when I look at, like, some of
the people that are putting it up there, and I look
at their profiles, and I look at their American
flag and their bio, and I'm like,
are you a real human?
Is this a troll farm in Macedonia?
Like what's happening here?
There's a lot of that.
There is.
And of course he says he plans to, over time, root all that out.
Yeah.
He wants all identity to be validated, verified online.
Having said that, we fought a war for free speech.
We fought the Revolutionary War.
A lot of that was for free expression.
The founding fathers of this country very frequently wrote under pseudonyms.
Interesting.
Just like Twitter anons.
Really?
And this includes like Ben Franklin when he was a commercial printer.
He had like 15 different pseudonyms.
Really?
He would sell newspapers by having his different pseudonym personalities argue with each other in his own newspaper. Fight it out. He had sock puppets. And then the Federalist Papers were all written under pseudonyms. Really? Yeah. Like Madison,
all these guys wrote under pseudonyms. Why did they do that? Because there was very
real danger associated with it. What's the king going to think? Right.
Right. This is sort of the two lines of
argument, which is like, okay, like if somebody is not willing to put their own name behind
something, like, should they be allowed to say it? And there's an obvious argument
in that direction. But the other argument is, yeah, sometimes there
are things that are too dangerous to say if you have to put your name behind them.
Yeah, that does make sense. So it seems like the pros would outweigh the cons.
Well, even just the micro version, which is just like, you know, if you've got something to say that's important, but you don't want to be harassed in your house.
You know, you don't want your family to get harassed.
Yeah.
Right.
You don't want protests showing up outside your house for something you said.
Anonymous whistleblower protection.
Whistleblower protection.
Yeah, exactly.
Yes.
One person's terrorist is another person's freedom fighter.
One person's whistleblower is another person's troll.
Like, yeah, and the genius of the American system is, yeah, like, say what you want, right?
Like, let's have it out, right?
And so, yeah, that's the system I believe in.
I believe in that system too.
But I also see Elon's perspective that it would be great if it wasn't littered with propaganda
and fake troll accounts that are being used by various unscrupulous states.
In fairness, what Elon says, actually it's interesting, what Elon says is you will
be allowed to have an anon or what they call a pseudonymous account under some other name
you make up on the service. You'll just have to register that behind the scenes with your
real identity
and specifically with like a credit card.
Right. Then the fear is that someone will be able to get in there.
Correct. Yeah, that's right.
Which has happened already.
Yeah, that's right. And that is a big risk.
Yeah. But then the other part of this would be, Twitter is only one company, right? It's an important one, but it's only one, and there are others as well. So for consideration of, like, quote-unquote rights on this topic, you also want to look at what is happening elsewhere, including on all the other services.
I'm fascinated by companies like Twitter and YouTube that develop at least a semi-monopoly because YouTube is a great example.
Like if you want to upload videos, YouTube is the primary marketplace for that.
It's like nothing else is even close.
Everything else is a distant, distant second.
But they've got some pretty strict controls and pretty serious censorship on YouTube.
And it seems to be accelerating, particularly during this presidential election.
Now that you're seeing these Robert Kennedy Jr. podcasts get pulled down from a year ago, two years ago, the Jordan Peterson one got pulled down.
Theo Von's interview with Robert Kennedy got pulled down.
There's been some others.
And Brett Weinstein?
No.
No, his didn't.
But it's just, these conversations were up for a long time, and it wasn't until Robert Kennedy started running for president that they decided these are inconvenient
narratives he's discussing.
So I should not weigh in on exactly which companies have whatever level of monopoly they have.
Having said that, to the extent that companies are found to have monopolies, or let's say very
dominant market positions, that should bring an additional level of scrutiny on conduct. And then there is this other thing I mentioned
earlier, but I think is a big deal, which is if a company is making all these decisions by itself,
you can argue that it maybe has the ability to do that. Although again, maybe it shouldn't
past a certain point in terms of being a monopoly. But the thing that's been happening is it's not
just the companies making these decisions by themselves. They've come under intense pressure from the government.
Right.
And they've come under intense pressure from the government in public, in public statements and threats from senior government officials.
They have come as privately channeled threats.
And then all of this stuff I was talking about earlier, all the channeling of all the money from the government that's gone into these pro-censorship groups, right, that are actively working to try to suppress speech.
And when you get into all of that, those are crimes.
That's illegal.
Everything I just described I think is illegal.
And there are specific, actual felony counts in the U.S. Code for those things.
These are violations of constitutional rights, and it is a felony to deprive somebody of their constitutional rights.
And so I think in addition to what you said, I think it's also true that there's been a pattern
of government involvement here
that is, I think, certainly illegal.
And, you know, put it this way,
this administration is not going to look into that.
Maybe a future one will.
So do you think it's illegal?
It just hasn't been litigated yet?
Yeah.
I think there's evidence of substantial criminality
just in the Twitter files that have come out.
You need to have prosecutors, yeah.
You need class action lawsuits, right?
You need to be able to go carve it open with large-scale civil suits, or you need actual government criminal investigation.
What has come out of the Twitter files other than independent journalists researching it
and discussing it and writing articles?
It's not being covered with any significance in mainstream news.
Well, the mainstream media has been on the side of censorship for the last eight
years.
They've been pounding the table that we need to lock down speech a lot more.
So they're compromised.
And then the other investigation to watch is, I think, the Missouri attorney general's.
There's this state-level investigation where there's been a bunch of interesting stuff that's come out.
And the attorneys general have subpoena power.
So they have subpoenaed a bunch of materials from a bunch of companies that, again, to me, looks like evidence of criminality.
But, again, you would need – you need prosecutors.
You need a political – you need the political force of will and desire to investigate and prosecute crimes.
And to engage in that battle.
Yeah.
Because it's going to be a battle.
Yeah.
Yeah.
And then if it's private litigation, you need to try to do a big class action suit.
And then you need to be prepared to fight it all the way to the Supreme Court.
And there's a lot of money involved in that.
When you're seeing this play out and you're looking at likely scenarios, like how does this resolve?
How do we come out of this?
I mean, I think it's a big collective fight.
This is one of those where it's like, what do we want, right?
And the we here is like all of society. And if we decide that we want the system to keep working the way it's working, we're going to keep electing the same kinds of people who have the same policies.
Do you think most people are even aware of all these issues though?
No, I mean certainly not.
And that's a big – there's always an asymmetry between the people who are doing things and the people who aren't aware.
But again, it's like what do we want?
Are people going to care about this or not?
If they are, then they're going to at some point demand action.
It's a collective action problem, right?
People have to come together in large numbers.
But will it be too late?
This is the question.
Like imagine a scenario where Elon never buys Twitter and Twitter just continues its practices and even accelerates.
Yeah.
And that's my concern.
And again, this goes back to my concern about the AI lockdown, right? Which is, all of the concerns on AI are basically being used to put controls in place. I think what they're going to try to do to AI for speech and thought control is like a thousand times more dangerous than what's happened on social media. Right? Because it's going to be your kids asking, you know, what are the facts on this? And it's just going to flat out lie to them for political reasons, which it does today.
And like that to me is like far more dangerous.
And that's what's happening already. And the desire is very clear, I think, on the part of a lot of people to have that be a fully legal blessed thing that basically gets put in place and never changes.
Well, you're completely making sense, especially when you think about what they've done with social media and not even speculation, just the Twitter files.
It's so clear.
And it's –
Well, this is the ring of power thing, right?
It's like everybody's in favor of free speech in theory.
It's like, well, if I can win an election without it, I've got the ring of power.
And the American system is set up so that people don't have the ring of power.
Like the whole point of the balance of terror between the three branches of government,
the existence of the Supreme Court, and the due process protections in the Constitution,
it was all to prevent government officials from being able to do things like this with
impunity.
Yeah.
But the founding fathers saw the threat.
It's actually remarkable how clearly the founding fathers saw the threat, given that they were doing all of this before, you know, electricity.
It is pretty amazing.
But they saw the threat.
Yeah.
They had a pretty profound understanding of human nature as applied to power.
Yeah, they did.
Yeah.
Yeah.
Yeah. It's such an uneasy time, because you see all these forces that are at work and how it could play out, how it is playing out with social media, how it could play out with AI and with electing leaders that are going to see things correctly.
I haven't seen anybody discussing this, especially not discussing this the way you're discussing it.
Well, and when the speech is made, right, to justify whatever the controls are, it's going to be made in our name, right?
So the speech is not going to be, we're going to do this to you.
The speech is, we're doing this to protect you.
Right.
Right.
So that's the siren song.
Yeah.
Right.
And that's already started. Like if you look at the public statements coming out of D.C. already, like that is the thrust of it.
Because, of course, that's how they're going to couch it.
How are they framing it? How is it protecting us?
Well, we need to protect.
Yeah, we need to protect.
We need to protect people from dangerous this and that.
We need to protect people from hate speech.
We need to protect people from misinformation.
And it's the same.
It's effectively the same arguments you've seen in social media for the last decade.
I just don't know how we publicly turn that narrative around because there's so many people that have adopted it like a mantra.
They just say that mantra and they just think that they're doing the right thing.
Hate speech, disinformation, misinformation.
Let them take care of it.
They're doing the right thing.
So here's the white pill.
Here's the white pill.
Here's the reason for optimism. So Gallup has been surveying American citizens' trust in
institutions for 50 years. A lot of people think all this stuff started with the internet, and it
turns out it didn't. It turns out there's been a collapse of faith on the part of American citizens
in their institutions basically since I was born, around the early 70s.
It's basically been a straight line down on almost every major institution.
And so I'll talk about government and newspapers in a second,
but basically religion, and you go kind of right down the list.
Police, big business, education, schools, universities,
you chart all these things out and basically they're all straight lines down over 50 years.
And there's two ways of interpreting that.
One is, you know, greater levels of disillusionment and cynicism that are incorrect.
And then the other is actually people are learning, right, who they can and can't trust.
And then, of course, the theory is it starts in the 70s because of the hangover from the Vietnam War and then Watergate.
And then a lot of the hearings that kind of exposed government corruption in the 70s that followed, right?
And then it's just been this sort of downward slide.
The military is the big exception.
The military took a huge hit after Vietnam and then actually is the one that has like
recovered sharply and there's like a cultural change that's happened where we as Americans
have decided that we can have faith in the military even if we don't agree with the
missions that they're sent on.
So that's the exception.
But everything else is sort of down and to the right.
The two that are like the lowest and have had the biggest drops are Congress and journalism.
Right?
And so the population – and they poll like 10%, 15% in the population.
And so most people are not looking at these things like, oh, yeah, these people are right
about – most people are looking at these things being, oh, yeah, these people are right about most of these.
Most people look at these things being like, you know, that's that's screwed up.
Now, people have to decide what to do with that. Right.
Because what you see is faith in Congress polls at like 10 percent.
But faith in your local congressperson polls at like 90 percent. Right.
Which is why congressional incumbents almost always get reelected. Right.
And you'll have these, you know, congressmen who are in there for 20 terms, right?
40 years, right?
And so at some point, people have to decide.
They have to carry it over, right?
It's not internally consistent, right?
And you're not going to get the change that you want from Congress unless a lot more people all of a sudden change their mind about the incumbents that they keep reelecting.
But anyway, the reason for optimism in there is I think most people are off the train already.
And quite frankly, I think that explains a lot of what's happened in politics in the U.S. over the last 10 years.
Like whether people support or don't support the kind of, you know, the various forms of populism on the left or the right.
I think it's the citizenry reaching out for a better answer than just more of the same.
And more of the same being the same elites in charge forever telling us the same things that we know aren't true. Well, that is one of the beautiful things about
social media and the beautiful things about things like YouTube where people can constantly
discuss these things and have these conversations that are reached by millions of people. I mean,
just a viral tweet, a viral video, something, you know, someone gives a speech on a podcast and everybody
goes, like, what you're saying today, I didn't know that's how it worked. Oh, this is what we
have to be afraid of. So when they start saying it's for your own protection, this is why. And
then the Marc Andreessen clip plays and everybody goes, oh, okay. That gives me hope because that's
something that didn't exist before. Yeah, that's right.
And you can even take it a step back further.
Actually, even pre-social media, there was a big opening in the 80s with talk radio.
It got people very mad at the time because things were being said on it that weren't supposed to be said.
Cable TV was a big opening to it.
Before that, actually, in the 50s, it was paperback books. A lot of alternate
points of view basically flowered in the 50s and 60s flowing out of paperback books.
And then newsletters. That's why I said the Soviets outlawed mimeograph machines,
right, which were early photocopiers. But there was a whole newsletter phenomenon
in a lot of movements in the 50s, 60s, 70s. And so it's sort of sequential.
The way I look at it is, media and thought
centralized to the maximum possible level
of centralization and control right around 1950, right?
Where you basically had three television networks,
you had one newspaper per city,
you had three news magazines, right?
You had two political parties, right?
Everything was, like, locked in hard, right?
And then basically technology
in the form of all of these media technologies
and then all the computer and, right,
information technologies underneath them
have basically been decentralizing
and unwinding that level of centralized control
more or less continuously now for 70 years.
So I think it's been this longer-running process.
And by the way, I think, you know,
left to its own devices it's going to
continue, right? And this is the significance
of AI.
What if each of us has a
super sophisticated AI that we own and control,
because it either comes from a company that's
doing that for us, or it's an open source thing
where we can just download it and use it?
And what if it has the ability to analyze all the information,
and what if it has the ability to basically say,
you know, look, on this topic I'm going to go scour the internet and I'm going to come back and I'm going to synthesize information.
I'm going to tell you what I think.
It's the AI.
So it would be logical that that would be another step down this process.
Yes.
And by the way, maybe the most important step of all, right? Because it's the one where it can actually be like, okay, I'm going to be able to legitimately think on your behalf and help you to conclusions that are factually correct, even if people who are in power don't want to hear it.
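[Editor's note: a minimal sketch of what that kind of personal "scour and synthesize" loop could look like, assuming an OpenAI-style chat API. The search_web helper and the model name are hypothetical placeholders, not anything named in the conversation.]

```python
# Hypothetical sketch of a personal "scour the internet and synthesize" assistant.
# search_web is a placeholder; a real agent would plug in an actual search API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_web(topic: str) -> list[str]:
    """Placeholder retrieval step: return raw text snippets about the topic."""
    raise NotImplementedError("plug in a real search or retrieval source here")

def synthesize(topic: str) -> str:
    """Gather sources on a topic and ask the model for its own conclusion."""
    snippets = search_web(topic)
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You weigh sources and state your own conclusion."},
            {"role": "user",
             "content": f"Topic: {topic}\n\nSources:\n"
                        + "\n---\n".join(snippets)
                        + "\n\nSummarize the evidence and tell me what you think."},
        ],
    )
    return response.choices[0].message.content
```

The design point is the ownership: both the retrieval and the synthesis run on the user's behalf rather than a platform's.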
It seems to me that you have more of a glass half full perspective on this.
Are you open-minded and just sort of analyzing the data as it presents itself currently and not making judgments about where
this is going? Or do you generally feel like this is all going to move in a good direction?
So in my day job, we meet every day all through the year with all these incredibly smart kids who
have these incredibly great new ideas. They want to build these technologies, and they want to
build businesses around them, or they want to open source them,
or they want to, you know, whatever.
But they want to build,
they want to make these new things happen.
You know, they have visions
for how the world can change in these ways.
They have the technical knowledge
to be able to do these things.
There's a pattern of, you know,
these kids doing amazing things.
Apple just passed today,
Apple alone just passed the entire value
of the entire UK stock market.
And Apple was two kids in a garage in 1976 with the idea that people should have their own computers, which was a crazy idea at the time.
And so, usually it doesn't work, but when it does, it works really, really well.
And this is how we got the microchip, and this is how we got the PC, and this is how we got the internet and the web and all these other things.
And yeah, here we go.
Yeah, it topped three trillion.
Yeah, yeah.
So the comparison, I think,
is to what they call the FTSE 350,
which is the 350 largest UK companies.
That's bonkers.
Yeah.
And so when it works, it works incredibly well, right?
And we just happen to be, by being where we are and doing what we do, we're at ground zero with that.
And so all day long I meet and talk to these kids and people who have these ideas and want to do these things.
And so it's why I can kind of see the future in that sense, which is I know what they're going to do, because they come in and tell us and then we help them try to do it.
So if they're allowed to do what they plan to do, then I have a pretty good idea of what the future is going to look like and how great it could potentially be.
But then I also have the conversations in Washington and I also have the conversations with the people who are trying to do the other things. And I'm like, OK.
For a very long time, tech in the US was considered just like purely good, right?
Up until basically the 2000s, 2010s, everybody was just kind of pro-tech, pro-whatever.
People got excited about new things. Every once in a while, people get freaked out about something,
but mostly people just thought, you know, invention is good. Creativity is good. Silicon Valley is
good. And in the last 15, 20 years, all these topics have gotten very
contentious and you have all these people who are very angry about the consequences of all this technological change.
And so we're in a different phase of the world where these issues are now being fought out not just in business but also in politics.
And so I also have those conversations, and those are almost routinely dismaying.
Those are not good conversations.
And so I'm always trying to kind of calibrate between what I know is possible
versus my concern that people are going to try to figure out how to screw it up.
When you have these conversations with people behind the scenes,
are they receptive?
Are they aware of the issues of what you're saying in terms of just freedom of expression
and the future of the country?
You might bucket it in like three different buckets.
There's a set
of people who just basically don't like Silicon Valley tech, internet, free speech, capitalism,
free markets. Like they're very political. Some of them are in positions of high power right now,
and they're just opposed. They're just against, and they're trying to do everything they can. I
mean, they're trying to outlaw crypto right now. They're trying to like do all kinds of stuff.
It's the same people trying to censor social media.
Like they're just very opposed.
And, I mean, I don't know.
Maybe there would be a point in talking to them, but I myself don't spend a lot of time doing it because it's not a conversation.
It's just getting yelled at for an hour.
Is that really how it goes?
Oh, yeah, yeah. They're very angry.
Like there's a lot of – there's a very large amount of rage in the system.
A lot of it directed at tech.
Then there's a set of people who I would describe, I don't know if open-minded is the wrong term, but I would say they are honestly
and legitimately trying to understand the issues. Like they're kind of aware that they don't fully
understand what's happening and they are trying to figure it out. And they do have a narrative
in their own mind of they're going to try to come to the right conclusion. So there's some set of
those. Those usually aren't the senior people, but there are people at the staff level who are like that.
Dreamers.
What's that?
Dreamers.
Yeah. Yeah. Like, you know, the best of the bunch, right? Open-minded, learning, curious.
It's like anything else in life. You sit down with one person and you're in, you just
have a conversation, they ask you questions, you ask them questions. There's other
people you talk to where it's just like, they're not interested in what you think.
And it's just very clear that they're not interested in what you think.
And so that plays out there also.
And then there's a third set of people who are very actually pro-capitalism, pro-innovation, pro-tech.
But they don't like us because they think we're all Democrats.
So a lot of our natural allies on these issues are on the other side, whereas Silicon Valley is majority Democratic, right?
And so there's a fair number of people who would be our natural allies if not for the fact that Silicon Valley is like 99% Democrat.
Oh, wow. And so this is part of the issue the Valley has.
We don't have any natural allies.
Like tech doesn't have any natural allies in D.C., because the Democrats basically think they control us, which they effectively do, because the Valley is almost entirely Democrat.
And then the Republicans think that basically they would support us except that we're all
Democrats. And so we can go F off. And so there's a trap that's developed that is hard
to figure out what to do with.
How do you get around that one?
That one's a hard one. I mean, that I don't know. The last thing I want to do is argue to people, especially
in public, that they should change their politics.
So – and look, people feel very strongly – obviously, people in tech feel very strongly
about politics, including many political topics that have nothing to do with tech.
And so asking somebody to change their views on some other political issue so that it's
better for tech is not an argument that flies.
Wow.
So there's a bit of a stall there.
But yeah, it goes back to people have to decide what they want.
You seem like you enjoy all this madness, though.
You really do.
I'd rather be in the middle of it than not.
Yeah.
It would be very frustrating to be on the outside.
It'd be even more frustrating than being involved in it.
Well, look,
here's the other thing.
These issues become
really important, right?
Like, I'll even credit
the critics with the following,
which is, yeah, look, Mark,
like, tech was a backwater.
Tech didn't matter
until the internet showed up.
Like, and now it matters a lot
because, like,
it's the future of speech
and politics and control
and all these things.
And so all of a sudden,
it's, like,
these big, important topics.
We haven't even talked about warfare. Like, AI is going to really change how weapons work, right? Basically every important thing happening
in the world right now has a technological component to it, right? And it's being altered
by the changes that are happening, you know, caused by tech. And so the other argument would
be, Mark, like grow up, like, of course, these are all going to be big fights because these are now,
you're now involved in all the big issues. Yeah. And maybe that's just the case.
Well, that seems to definitely also be the case.
Yeah.
It's just, people are always so scared of change. And change today,
when we're talking about this kind of change, you're talking about monumental change that happens over a very short period of time.
Yeah.
Yep.
Yes.
That's a big freak out.
Yes.
Yeah. I mean, what are we looking a big freak out. Yes. Yeah.
Yeah.
I mean, what are we looking at in 50 years?
Really?
Yeah.
You enjoy it.
Look at you.
I do enjoy it.
I love that you enjoy it, though.
You know that book, Hitchhiker's Guide to the Galaxy?
Douglas Adams wrote that book.
He once had a formulation.
He had a different theory than all this. He said this is all generational, all age-related. And he said
people react to technology in three different ways. If you're below the age of 15, whatever
is the new thing is just how the world always worked. If you're between the ages of 15 and 35,
whatever is the new thing is exciting and hot and cool, and you might be able to get a job and make
a living doing it. And if you're above the age of 35,
whatever new is happening is unholy, right?
And it's sure to bring about the downfall of civilization,
apocalypse and calamity.
I guess that's true in culture.
It's true in music.
It's true in movies, video games.
So I think maybe what just has to happen
is just time needs to pass. Maybe it's like, whenever the new thing happens, the fight's always between a bunch of 50-year-olds or something.
Do you resist any technology in your own personal life?
Personally, we do have an eight-year-old, and he does get screen time, but it is controlled.
So we use it as a tool.
We're not absolutists.
There are some people running around who want to keep their kids off all this stuff, which, by the way, is not the craziest view in the world.
Right.
But we want him to be fully up to speed.
We want him to be an engineer.
Not that he has to spend his life doing it, but we want him to know how to use technology and build it.
It's also fun for kids.
It's just if you teach them discipline and engage them in other activities so that they do physical things and run around, have fun, be outside.
He does MMA.
Oh, no kidding.
He's doing full Brazilian jiu-jitsu. He's doing full MMA.
He's doing full sparring.
Wow.
He and his coach dress up in the full body marshmallow man outfits and wail on each other and get on the ground and choke each other out.
Okay.
Are you enjoying watching that?
It's absolutely fantastic.
It is hysterical.
And he loves it.
That's pretty cool.
And I keep watching the videos because, you know,
half the time he's sparring with an adult,
and he just goes right in there.
That's crazy.
So the tech story that I've
been thinking about a lot is
the Douglas Adams thing.
So, you know, ChatGPT comes out in
December. I play with it for a few months.
I'm trying to wrap my head around it, and I'm like, okay, this is
good. And so I'm like, okay. And our eight-year-old's super curious and he wants
to learn all these things. And he's asking questions all the time. And half the time,
I don't know the answer. So I'm like, okay, I install ChatGPT on his
laptop. And I set a time aside and I sit him down on the couch and I'm like, okay, there's this
amazing thing that I'm going to give you. This is like the most important thing I've ever done
as a father, right? That I've brought fire down from the mountains
and I'm going to give you AI. Right. And you're going to have like AI your whole life to be like
with you and teach you things. And he's like, okay. And I was like, well, you ask it questions
and it'll answer the questions. And he's like, okay. And I was like, no, like this is a big deal.
Like they didn't used to do this.
Like now it does this and this is amazing.
And he's like, okay.
And I was like, why aren't you impressed?
And he's like, it's a computer.
Like, of course you ask it questions
and give you answers.
Like, what else is it for?
And I'm like, okay, you know, I'm old.
Kids are going to just have
a totally different point of view on this.
Right.
It's going to be normal to have the answers to things.
Yeah, completely normal.
And it's going to be, by the way, it's going to be normal.
It's going to be exciting.
I think it's going to make – I think it's going to be great.
Like for kids, I think it's going to be fantastic.
Well, the positive aspect, just for informing people on whatever it is,
whether it's a medical decision or whether it's a mechanical thing with your car,
I mean, that's pretty amazing.
One of the fun things you can do with ChatGPT is you can say,
explain X to me, and then you can say, explain X to me as if I'm 15.
Oh.
And then you can do it as if I'm 10.
Wow.
And you can do it as if I'm 5.
And you can actually get it.
You can actually do it all the way down.
It kind of works down to about age 3.
So you can tell it, explain quantum mechanics to me like I'm a 3-year-old.
Wow.
And, like, it will.
Really? Yeah, yeah. And so I taught him how to do this, because I'm like,
you can have it, you can dial it up or down.
How does it explain quantum mechanics to a three-year-old?
It uses all these metaphors, like, you know,
you've got a stuffed animal over here and a stuffed animal over there, and it wiggles, and
then that one wiggles, right? And so the three-year-old's like, okay, why not?
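[Editor's note: the "dial it up or down" trick is just a one-line prompt change. A minimal sketch using the OpenAI Python client, where the model name and the exact prompt wording are assumptions rather than anything quoted in the conversation.]

```python
# Minimal sketch of "explain X to me as if I'm N years old" prompting.
# Assumes the openai package (v1+) with OPENAI_API_KEY set in the environment;
# the model name is an assumption, not from the conversation.
from openai import OpenAI

client = OpenAI()

def explain_for_age(topic: str, age: int) -> str:
    """Ask the model to pitch its explanation at a given age level."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Explain {topic} to me as if I'm {age} years old.",
        }],
    )
    return response.choices[0].message.content

# Dial the same topic up or down by changing one number.
for age in (15, 10, 5, 3):
    print(f"--- age {age} ---")
    print(explain_for_age("quantum mechanics", age))
```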
Right. So, yeah, as a tool, you know, there's all these fights happening. I guess back to school is coming up in a couple months here, and there's all these fights already emerging over whether students in the classroom can use ChatGPT.
And, yeah, there's all these, you know, there's all these sites that claim to tell you whether something's been generated by AI.
So, in theory, the teacher can screen for it. A
student hands in an essay, and
in theory there's a tool that will tell you whether they got it from GPT,
but it doesn't actually work.
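[Editor's note: for context on why such detectors exist and why they misfire, many of them simply measure how statistically predictable the text is to a language model, on the theory that model-generated prose is more predictable than human prose. A toy sketch of that idea, assuming the Hugging Face transformers library with GPT-2 as the scoring model and a purely illustrative threshold; none of these specifics are named in the conversation.]

```python
# Toy perplexity-based "AI text detector" -- illustrative only.
# Detectors built on this principle are known to be unreliable: predictable
# human prose gets flagged, and paraphrased AI prose slips through.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable the text is to GPT-2 (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return its next-token loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # The threshold is arbitrary; no single cutoff separates the two classes,
    # which is a big part of why these tools misfire in practice.
    return perplexity(text) < threshold
```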
There you go.
Here it is.
Can you explain quantum mechanics to a three-year-old?
Sure.
I'll do my best to explain quantum mechanics in a simple way for a three-year-old to understand.
Imagine you have a toy that can be in two places at the same time. It's like having a teddy bear that can be in your bed and in the living room at once.
Very strange, right? Well, in the world of very tiny things like atoms and particles,
they can do something similar. They can be in two different places at once,
just like your special toy. This is called superposition.
Now let's talk about something else. Imagine you have a toy car that can go
through walls without crashing. It can pass right through them as if it wasn't even there.
That's pretty amazing, isn't it? Well, in the world of tiny things, particles can do something
similar. They can pass through solid objects without any problem. This is called tunneling.
Wow.
Pretty cool.
Yeah.
Pretty cool.
Yeah, in terms of education,
in terms of just informing people,
I don't think there's any,
I mean, it's one of the most promising things ever,
by far.
And by the way,
this is something for people's entire lives, right?
This isn't just for kids, right?
This is for anybody who ever wants to learn anything.
The real fear, the overall
fear, is that what human beings are doing with artificial intelligence is creating something
that's going to replace us.
You have no fear of that?
Yeah, I don't. It's a tool.
What about a hundred years from now?
A hundred years from now, I don't know. I have no clue what it's going to be a hundred years from now, but it's not going to be this.
That's the fear, is that we're
sowing the seeds.
Yeah.
This is an old, I mean, look, this is an old, this is an old fear.
It's like the fear of the end of the world.
This is like the fear of, yeah, the non-human.
Yeah. Like in Judaism, they have a version of this called the Golem,
the legend of the Golem.
It was set at the Warsaw Ghetto at one point.
And this rabbi figures out how to conjure up this giant creature
made out of clay to go smite, you know, the enemies. And then, of course, it
comes back around and starts killing his own people. The Frankenstein's
monster, right? Same thing. So there's always this fear. And look,
it's very human, it's a self-preservation kind of thing. But, you know, look,
we build tools. I mean, what's the thing that makes us different from animals, right? Is we,
we have intelligence and we build tools. Tools can be used, by the way,
for good and bad things, right? Like a shovel can be used to dig a ditch or like brain somebody
right over the head. And so all these things, you know, things do have two sides. But over time,
you know, the tools that we built have created a much healthier, safer, better world.
Isn't that interesting?
Right. I mean, look, human population is like up, you know,
gigantically as a consequence of all these tools we've developed.
So the exact opposite thing has happened
from what everybody's been afraid of the whole time.
But it is interesting whenever there's a discussion on these things,
it's never framed that there's two sides.
It's always framed, this is what we're scared of.
This is what the danger is.
It's not part of the beauty of this is that there's danger.
Yeah.
And it's also there's incredible promise that's attached to this as well, like everything else, like matches.
No one's advocating for outlawing matches, but you could start a fire.
So the original myth on this, the way the ancients thought about this.
In the Judeo-Christian philosophy, they have this concept of the logos, the word.
So it says at the very beginning of the Bible, in the beginning there was the word, the word was truth, and then basically the universe kind of comes from that.
So this concept of the word, which is sort of knowledge, right? And then with Adam and Eve, it was eating from the tree of knowledge, right?
And then Satan fooled them into eating the apple,
and then they had the knowledge, the secret knowledge.
The Greeks had a similar concept they called techne, which is the basis for the word technology.
And it meant sort of—it didn't mean technology per se, but it meant sort of knowledge,
and particularly knowledge on how to do things, right?
So sort of the beginning of technology.
So the myth that the Christians have
about the danger of knowledge
is getting kicked out of the Garden of Eden,
the downside, right?
That was viewed as a tragedy, right,
in that religion.
The Greeks had what they called
the Prometheus myth,
and it had to do with fire, right?
And the myth of Prometheus
was a central Greek myth.
Prometheus was a god-like kind of character in the mythology who went up to the mountain.
Humans didn't have fire.
And he went up to the mountain and the gods had fire.
And he took fire from the gods and he brought it down and gave it to humanity.
And in the myth, that was how humans learned to basically use fire, right, as a tool.
As punishment for bringing fire to humans, in the myth, he was chained to a
rock for all eternity. And every day his liver gets pecked out by an angry bird, and then it
regenerates overnight, and then it gets pecked out again the next day forever. Like, that's how much
the gods felt like they had to punish him, right? And of course, what were they saying in
that myth? What they were saying is, okay, fire was like the original technology, right?
And the nature of fire as a technology is it makes human civilization possible.
You can stay warm at night.
You can fight off the wolves.
You know, you bond the tribe together, right?
Every culture has fire as a central thing, because it's the center of the community.
You can use it, you know, to cook meat, right? Therefore, a higher rate of your kids are going to survive,
and so forth, and you're able to reproduce more. But of course, fire is also a fearsome weapon,
and you can use it to burn people alive. You can use it to destroy entire cities.
And it's fantastic, because that idea of technology in the form of
even fire was so scary that they encoded it that deeply in their mythology. And so I think, exactly like you said, we just play that
fear out over and over again. Because in the back of our head, it's always like, okay, this is the
one that's going to get us. Yes, I know that the previous 3,000 of these things that actually
turned out fine. Amazingly, even nuclear weapons turned out fine. Nuclear weapons almost certainly
prevented World War III. Nuclear weapons, the existence of nuclear weapons turned out fine. Nuclear weapons almost certainly prevented World War III.
Nuclear weapons, the existence of nuclear weapons probably saved on the order of 200 million lives.
So even nuclear weapons turned out okay.
But yet after all of that and all the progress we've made, this is the one that's going to get us.
It's so interesting because that conversation is never had.
We only hear the negative aspects of it.
Yeah, that's right. Because these are complex, nuanced discussions and it has to do with
all sorts of aspects of human nature and control and power structures. And it's just,
they're very complex conversations. Yeah. And then people try to hijack them, right?
Yeah. Right. They get used.
Yeah. And that's where, I don't mind, like, there's this concept I talk about, the Baptists and the bootleggers.
So there were two groups of people in favor of prohibition of alcohol.
There were the Baptists who were the social activists who thought alcohol was actually evil and was destroying society. And then there were the bootleggers, which were the people who were going to make money. Right.
If alcohol was outlawed, right. And so this is what you often have: when there's one of these social movements that wants regulation,
you often have this union of the Baptists and the bootleggers. And so the Baptists, I don't mind,
like the true believers who are like worried about, you know, X, Y, Z, it's like, okay,
let's talk about that. Let's figure that out. It's the bootleggers that like drive me crazy,
right? It's just the bootleggers who pick up that argument and then are, you know,
working behind the scenes to achieve, you to achieve basically self-interested ends.
Well, I have hope.
I really do.
I mean I like to dwell on the negative aspects of it because it's fun.
But one of the things that I have hope in is that there are conversations like this taking place where this is a very kind of unique thing in terms of human history, like the ability to independently distribute something that reaches millions of people that can talk about these things.
So this can get out there and then other people will hear this and they'll start their own conversations about it and articles will be written and more people discuss it and then look at this more nuanced perspective.
Because I think it is something that's incredibly complicated.
And you can't deny that just what ChatGPT can do right now is extraordinary and very beneficial.
Even if they just stopped it right there.
I mean, just right there, but it's not going to stop there.
Want to see something crazy?
Yes.
Can I ask for something to be pulled up?
Sure.
Twitter.
Go to Twitter.
This just came up today.
Because we've been talking about text.
We've been talking about ChatGPT. So let's look at images for a moment. So we're going to do a search on MidJourney.
And then Chihuly, the artist, C-H-I-H-U-L-Y.
Yeah, right there, that one.
Okay, that's pretty good.
But go two more, no, stay on that one,
but go to that image, the shoe right there.
There we go.
Okay, so this is MidJourney. So this is the app that lets you create images.
You describe words, and it creates images.
It uses the same technology as ChatGPT, but it generates images.
The prompt here was something along the lines of a Nike shoe in the form of this artist called Chihuly,
who's this famous artist who works in basically blown glass is his art form.
And so this is a Nike shoe rendered in blown glass.
And Chihuly is famous for using lots of colors.
And so this has all the colors.
So this does look exactly like his shoe would have looked.
Yeah, this would be Chihuly.
Yeah, this is Chihuly, you know, skirt, billowing, you know, billowing skirt.
Yeah, this is Chihuly.
It's Chihuly, you know, like statue of an avocado, right?
Right.
And so it's an avocado made out of stained glass.
Okay, so just look here for a moment, though. No, go back. Yeah. Go to the avocado for a second. Okay. Look at the shadows.
Look at the detail in the shadows. Incredible. Look at the detail of the shadows with the
sunlight coming through the window. Yeah. Okay. Now go back, go back to the shoe. Cause this one
blows my mind. Okay. And then zoom in on the reflection of the shoe in the bottom down there. Right. You see, it's like perfect, right? It's like a perfectly corresponding reflection. Okay. This entire thing was generated by Midjourney. The way Midjourney works is it predicts the next pixel.
So the way it worked was it ran this algorithm that took the prompt, ran it through the neural network, and then predicted each pixel in turn for this image. And this image probably has, you know, 100,000 pixels in it, or a million pixels. It's basically like an autocomplete: it was predicting each pixel. But in the process of predicting each pixel, it was able to render not only colors and shapes and all those things, but transparency, translucency, reflections, shadows, lighting. It trained itself basically on how to do a full 3D rendering inside the neural network in order to be able to successfully predict the next pixel.
And how long does something like that take to generate?
To generate, when you're running the system today, that would probably be, I'm guessing, 10 or 15 seconds.
There's a newer version of Midjourney, a turbo version that just came out where I think it cuts it down to a couple seconds.
Now, the system that's generating that needed many years of computing power across many processors to get ready to do the training that took place.
But the fact that it could generate that in seconds.
It took a few seconds.
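To make that "autocomplete for pixels" idea concrete, here is a minimal, hypothetical sketch of the loop structure being described: predict one pixel at a time, conditioned on the prompt and the pixels generated so far. The function names and the uniform placeholder distribution are illustrative assumptions; a production system like Midjourney differs substantially in its internals.

```python
import random

def predict_next_pixel(prompt, pixels_so_far):
    # Stand-in for the trained neural network: return a probability
    # distribution over the 256 possible values of the next pixel.
    # A real model would condition on the prompt and the partial image;
    # this placeholder just returns a uniform distribution.
    return [1.0 / 256] * 256

def generate_image(prompt, width, height):
    pixels = []
    for _ in range(width * height):
        dist = predict_next_pixel(prompt, pixels)
        # Sample one value from the predicted distribution, the same
        # step as picking the next token in text autocomplete.
        pixels.append(random.choices(range(256), weights=dist, k=1)[0])
    # Reshape the flat list of pixels into rows.
    return [pixels[r * width:(r + 1) * width] for r in range(height)]

image = generate_image("a Nike shoe in the style of Chihuly", 8, 8)
```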
Okay, so here's another amazing thing.
The price, the cost of generating an image like that versus hiring a human artist to do it is like down by a factor of 1,000, somewhere between a factor of 1,000 and 10,000.
If you just run the numbers, to hire an artist to do that at that level of quality would cost on the order of a thousand to ten thousand times more dollars, or time, or human effort, than doing it with the machine. The same thing is true of writing a legal brief, writing a medical diagnosis, summarizing a book, summarizing a podcast, drafting questions for a podcast, any of these things. Basically pennies, right, to be able to do all these things, versus potentially a hundred or a thousand dollars to have a person do any of these things. So we've dropped the cost of a lot of white collar work by like a factor of a thousand, right? Guess what we haven't dropped the cost of at all?
It's all the blue collar work, right?
So we do not have today a machine that can pick strawberries that is less expensive than hiring people to pick strawberries.
We do not have a machine that can pack your suitcase.
We do not have a machine that can clean your toilet.
We don't have a machine that can cook you dinner.
Like we don't have any of those things.
Like for those things, the cost of the machine and the AI and everything else to do those things is far in excess of what you can simply pay people to do.
So the great twist here is that in all of the economic fears around automation, the fear has always been that it's the mechanical work that gets replaced, because the presumption was that people working with their brains were safe. Certainly the computer is not going to be able to make art, right?
So the computer is going to be able to pick strawberries
or it's going to be able to make cheeseburgers,
but obviously it's not going to be able to make art.
And it actually turns out the reverse is true.
It's much easier to make the image of that shoe
than it is to make you a cheeseburger.
Of course, because it has to be automated physically.
Physically, but not just physically. It's like, okay, what happens if the stove catches on fire, right? How does the suitcase unclasp? Suitcases unclasp differently. All the real-world stuff. How do you plumb a toilet, right? What happens when you get in there, and what happens if the plumbing is all screwed up? And so the great irony and twist of all this is, we all thought in the industry that when the breakthrough arrived, it would arrive in the form of robotics, and the fear was that it would cause unemployment among basically the, quote, unquote, lower-skilled or less educated people.
It turns out to be the exact opposite.
Well, that's Andrew Yang's take on automation, right?
The need for universal basic income. Yeah. Well, yes. Therefore, the need for communism, which is immediately where it goes.
But before you think about that, though, think about what this means in terms of productivity.
So think in terms of what this means about what people can do, right? So think about the benefit,
including the economic benefit. So everybody always thinks of this as producer first. You
want to start by thinking of this as consumer first,
which is like as a customer of all of the goods and services
that involve knowledge work,
the price on all of those things is about to drop
on the order of like a thousand X, right?
So everything that you pay for today, right,
that involves white collar work,
like the prices and all those things are going to collapse.
By the way, that collapse in prices is why it doesn't actually cause unemployment: when prices collapse, it frees up spending power, and then you'll spend
that same money on new things. And so your quality of life will rise, and then there will be new jobs
created that will basically take the place of the jobs that got destroyed. But what you'll experience
is hopefully a dramatic fall in the cost of the goods and services that you buy, which is the
equivalent of basically giving everybody a raise. What about artist rights?
Because one of the arguments about art
is that you're taking this Midjourney, this AI program, and it's essentially stealing images in the style of artists and then compiling its own,
but that the intellectual work,
the original creative work was responsible
for generating this in the first place. So even though you're not paying the illustrator,
you're essentially using that illustrator's creativity and ideas to generate these images
through AI. Right. And in fact, we just saw an example of that, because
we actually named a specific artist, Chihuly, right? Who certainly did not get paid,
right? As a consequence of that. And the algorithm knew who Chihuly was, so it had clearly been trained on his art before.
Otherwise, the algorithm would not have known to do it in that style.
So I think this is going to be a very big fight.
I think this is probably going to go ultimately to the Supreme Court.
Those cases are just starting now.
I think the first one is Getty Images, which owns a big catalog of photography, is actually
suing this company, Midjourney.
Interesting. So that has begun.
The argument for why what's happening is improper is exactly what you said.
The argument for why it's actually just fine and, in fact, not only should be legal but actually is legal under current copyright law is what in copyright law is called the right to make transformative works.
And so you have the total right as an artist or creator to make any level of creative art that you want or expression that is inspired by, right,
or the result of what they call transforming prior works.
Right.
Right.
So you have the right to do homages.
You have the right to do, you know, I mentioned that earlier, the guy who wrote the other version of the book 1984, right? He had the right to do that because he was transforming the work.
You could make your version of what you think a Picasso would look like.
Exactly. You are free to draw in the style of Picasso. You are not free to copy a Picasso, but you are free to study all the art Picasso did, and as long as you don't misrepresent it as being a Picasso, you can generate all the new art you want.
Are you free to copy a Picasso exactly if you're telling everybody you're copying a Picasso?
I don't – no.
The artist – I mean copyright at some point expires.
But that aside, let's assume copyright lasts.
Let's just assume for the moment copyright is forever just to make it easy to talk about.
The artist can copyright that particular image.
The screenwriter can copyright that particular screenplay.
Right, but if you're not generating income from it?
Oh, I don't know.
There's another carve-out in the copyright law for non-commercial use.
So there's academic use.
By the way, there's also protection for satire.
There's protection for a variety of things.
But the one that's relevant here specifically is the transformative one because – and the reason I say that is because Chihuly never made a shoe.
So there's no image in the training set that was a Chihuly shoe, certainly not a Chihuly Nike shoe, and certainly not that Chihuly Nike shoe.
And so the algorithm produced an homage, would be the way to think about it. And as a consequence of that, I think under copyright law, you're like, okay, that's just fine.
And I think the same thing is true of ChatGPT for all the text that it's trained on. By the way, the same thing is happening in ChatGPT.
The news publishers, the newspaper publishers, are now getting very upset because they have this fear that people are going to stop reading the news because they're just going to ask ChatGPT what's happening in the world.
Right, and they probably will.
They will just tell you.
And there are lots of news articles that are in the internet training data that went into
training ChatGPT, right, including, you know, updating it every day.
Well, and also if you can generate an objective news source through ChatGPT, because that's
really hard to do.
So one of the fun things these machines can do, and you can do this in ChatGPT today, is what's called sentiment analysis. You can ask it, is this news article slanted to the left or the right? Is the emotional
tone here angry or like hostile? And you can tell it to rewrite news articles to take out the bias.
Interesting. And you can tell, you know, take out any political bias and take out any emotional loading,
and it will rewrite the article to be as objective as it can possibly come up with.
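As a rough illustration of the de-biasing request described here, the sketch below asks a chat model to rewrite an article objectively. It assumes the OpenAI Python client (pip install openai); the model name and prompt wording are illustrative choices, not a recipe from the conversation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rewrite_objectively(article_text: str) -> str:
    # Ask the model to strip slant and emotional loading while keeping facts.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[
            {"role": "system",
             "content": ("Rewrite the user's news article to remove any "
                         "political slant and emotional loading while "
                         "preserving every factual claim.")},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```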
Wow.
And again, but here's the question is, okay, the result of that, is that still copyrighted?
Is that a copyrighted derivative work of the original news article, or is that actually
now something new that is a transformation of the thing that existed before, but it's
different enough that it's actually fine for the machine to do that without copyright being a problem?
People, when they encounter objective information like objective news, they're always going to look for someone who has an analysis of that news.
Then they want a human perspective on it, which is very interesting.
I wonder how AI fits into that.
So one of the things you can do is ask it just straight up,
give me the left-wing view on this or give me the right-wing view on this. Or by the way,
you can also, I do this a lot, is like you can create two personas. You can say,
I want a left-winger and a right-winger and I want them to argue this out.
Oh, wow.
Right. It'll do that. But here's another thing it'll do is you can tell it to write in the style
of any person whose sensibility you admire,
right? So take somebody you really admire, take RFK. You could say, analyze this topic for me,
adopt the persona of RFK, and then analyze this topic for me. And it will use all of the training
data that it has with respect to everything that RFK has ever done and said and how he looks at
things and how he talks about things and how he, you know, whatever, does whatever he does.
And it will produce something that, odds are, is going to be pretty similar to what the actual person would say.
But you can do the same thing for Peter Hotez.
You can do the same thing for, you know, authority figures.
You can do the same thing for, what would Jesus say, right?
Literally.
Literally, what would Jesus say?
Wow.
And it will, again, it's not Jesus saying it, but it's using the complete set of text and all accounts of everything Jesus ever said and did. And it's going to produce something that at least is going to be reasonably close to that.
What a bizarre new world we're in the middle of right now.
Exactly. And so you can channel it. It's a fascinating thing. You can channel historical
figures. You can channel Abraham Lincoln. Like, okay, here's another example for,
for how kids are going to do this.
It's like, okay, it's time to learn about the Civil War.
Okay, let's talk to Abraham Lincoln.
Let's be able to ask him questions, right?
And again, of course, you're not actually talking to Abraham Lincoln, but you are talking to the sum total of all written expression, all books ever written about Lincoln.
Wow.
And he's talking back at you, right?
And so, yeah, it'll happily do that for you.
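The persona tricks described in this stretch of the conversation, two personas arguing or channeling a historical figure, are in practice just prompting. A hypothetical sketch, reusing the same assumed OpenAI client as in the earlier example; the prompts and model name are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_persona(persona: str, question: str) -> str:
    # Persona steering happens entirely in the system prompt.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[
            {"role": "system",
             "content": (f"Adopt the persona of {persona}. Answer as that "
                         "persona might, based on everything written by "
                         "and about them.")},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# "Talking to" Abraham Lincoln about the Civil War:
print(ask_persona("Abraham Lincoln", "Why did the Civil War start?"))

# Or staging a debate between two personas:
print(ask_persona("a left-winger and a right-winger debating each other",
                  "Argue out whether AI development should be regulated."))
```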
Just what is a 20-year-old going to look like that's born today?
When they hit 20, like what kind of access to information, view of the world, understanding of things, instantaneous knowledge?
What, if any, thoughts do you have on things like Neuralink and the emerging technologies of human neural interfaces?
Yeah.
So this is what the AI safety people describe as like the out or the fallback position or something, which is, okay, if you can't beat them, join them.
Yeah.
Right.
Maybe we just need to upgrade everybody's intelligence.
Maybe the way to do that is to kind of fuse man and machine.
Maybe. Yeah, look, it's very serious technology. The technology that they and people like them are working on is for real. People have been working on the ideas underneath this for like 30 years, you know, things like MRIs. And by the way, there are a lot of immediate healthcare applications.
So like people with Parkinson's, right?
People who have been paraplegics or quadriplegics being able to restore the ability to move, being able to fix things that are broken in the nervous system, being able to restore sight to people who can't see if there's some breakdown.
So there's a lot of very straightforward medical applications that are potentially a very big deal.
And then there's the idea of like the full actual fusion where a machine knows what you're thinking
and it's able to kind of think with you or you're able to access it and think through it.
I would just say it's exciting.
It's a field that's moving pretty quickly at this point.
But we're, I think, still, I'm going to guess 20 years out
or something from anything that would resemble what you would hypothesize it to be like,
but maybe I'll be surprised. 20 years ago was 2003. That is true.
That's not that long ago. Seems so recent.
Time does fly.
Yeah, that seems very recent. There have been papers in the last six months where people are actually using this technology, specifically the same kind of thing that we just saw with the shoe.
They're figuring out how – people claim to now know how to do a brain scan and be able to pull out basically the image that you're thinking of as an image.
Now, this is brand-new research, and so people are making a lot of claims on things.
I don't know whether it's actually real or not, but there's a bunch of work going into that.
There's a bunch of work going into whether it can basically get words out, right, if you're thinking about a word, be able to pull the word out.
Yeah, so this is the – yeah, okay.
So AI recreates what people see by reading their brain scans.
A new artificial intelligence system can reconstruct images a person saw based on their brain activity.
Yeah.
So the claim here is that those would be the original images on top.
And as you're looking at them, it'll do a brain scan and it'll feed the result of the brain scan into a system like the one that does the shoes.
Wow.
And then that system produces these images.
That's pretty damn close.
Yeah.
So it's like an extrapolation off of the image generation stuff that we've been watching.
Yeah, it's pretty close.
Now, this is brand new. Is this real, right? Is it like the Samsung moonshot? Is it repeatable? And by the way, do you need to be strapped to a million dollars' worth of lab equipment?
Right.
So these things can take a while to get to work.
Pretty fascinating if it's applicable, though, if that really can happen.
Hypothetically, yeah, exactly.
Wow. Wow.
Exactly.
It's a wild world.
Mm-hmm. No, the possibilities are very fascinating because it just seems like we're about to enter into a world that's so different than anything human beings have ever experienced before.
All technology driven.
Yeah.
You're in the middle of it, buddy.
Enjoying it?
Oh, yes.
Oh, yeah.
Anything more?
Maybe the picture I'd leave you with, you mentioned the 20-year-old who has grown up having had this technology the whole time and having had all their questions answered.
I think there's actually something even deeper.
The AI that my 8-year-old is going to have by the time he's 20, it's going to have had 12 years of experience with him. So it will have grown up with him. Be a good life coach. Yes. It will know everything
he's ever done. It will know everything he ever did well. It will know everything he did that
took real effort. It will know what he's good at. It will know what he's not good at. It'll know
how to teach him. It'll know how to correct for his, you know, whatever limitations he has.
It'll know how to maximize his strengths.
It'll know what he wants.
I wonder if he'll understand how to maximize happiness.
Yeah.
Like I wonder if it could say, Mark, you are working too much.
Yeah.
If you just worked one less day a week, you'd be 40% happier and only 10% less productive.
Yep.
Well, if you're wearing an Apple Watch, right, it will have your pulse and your blood pressure and all these things, and it'll be able to say, look, when you were working on this, you were relaxed, your serotonin or your, whatever, oxytocin levels were high. When you were doing this other thing, your cortisol levels were high. You shouldn't do that.
Let's figure out how to have you not have to go through that again.
Sure.
Yeah.
Yeah, absolutely.
Yeah.
By the way, sleep, you didn't sleep well.
Right.
Right.
So, yeah, and it'll have all that, right?
So, yeah, literally they hit college or they hit the workplace
and they'll have an ally with them.
Right.
Even before there's any sort of actual brain interface, without any sort of actual physical hookup.
They'll have basically a partner that'll be with them whose goal in life will be to make them as happy and satisfied and successful as possible.
Pretty fascinating stuff.
Yeah. How about that?
Well, I'm interested and I'm going to be paying attention.
I really appreciate you coming in here and explaining a lot of this stuff.
It made me actually feel better.
And it actually gives me hope that there's possibly,
especially with real open source,
a way to avoid the pitfalls of the censorship
that seems likely to be at least attempted to be implemented.
Yep, me too.
All right.
Thank you, Mark. Appreciate you.
Thank you, Joe.
Bye, everybody.