The Jordan B. Peterson Podcast - 308. AI: The Beast or Jerusalem? | Jonathan Pageau & Jim Keller
Episode Date: November 24, 2022Dr. Peterson's extensive catalog is available now on DailyWire+: https://utm.io/ueSXh Dr. Jordan B. Peterson, Jonathan Pageau, and Jim Keller dive into the world of artificial intelligence, debating ...the pros and cons of technological achievement, and ascertaining whether smarter tech is something to fear or encourage. Jim Keller is a microprocessor engineer known for his work at Apple and AMD. He has served in the role of architect for numerous game changing processors, has co-authored multiple instruction sets for highly complicated designs, and is credited for being the key player behind AMD’s renewed ability to compete with Intel in the high-end CPU market. In 2016, Keller joined Tesla, becoming Vice President of Autopilot Hardware Engineering. In 2018, he became a Senior Vice President for Intel. In 2020, he resigned due to disagreements over outsourcing production, but quickly found a new position at Tenstorrent, as Chief Technical Officer. Jonathan Pageau is a French-Canadian liturgical artist and icon carver, known for his work featured in museums across the world. He carves Eastern Orthodox and other traditional images, and teaches an online carving class. He also runs a YouTube channel dedicated to the exploration of symbolism across history and religion. —Links— For Jonathan Pageau: Icon Carving: http://www.pageaucarvings.com Podcast: www.thesymbolicworld.com Youtube Channel: https://www.youtube.com/c/JonathanPageau For Jim Keller: Twitter: https://mobile.twitter.com/jimkxaJim's Speech, "10 Problems to Solve": https://m.youtube.com/watch?v=o70yKYWgtVI&t=21s Jim's Speech, "Overclocking AI": https://m.youtube.com/watch?v=L4AgmG8V3LE&t=3s https://open.spotify.com/episode/13evHqkSPMpMMU1zfXEtAg?si=cCmtYe8yQsaAV9_ZUN8j7Q Ian Banks References: https://en.m.wikipedia.org/wiki/Culture_series— Chapters — (0:00) Coming up(1:48) Intro(5:00) Conceptualizing artificial intelligence(9:10) Language models and story prediction(12:20) Deep story and prompt engineering(18:10) Friston, error prediction and emotional mapping(23:37) Generative models(24:36) Does the intelligence in AI come from humans?(27:26) Can AI have goals that are not understandable to humans?(30:22) When a human records data vs an AI(34:00) When will AI become autonomous?(37:48) To create what could supplant you(47:36) When technology is used to achieve desire, unintended consequences(55:14) Abundance and nihilism(58:30) High human goals and the weaponization of intelligence(1:04:28) AI: Who will hold the keys?(1:14:09) Technology through biblical imagery(1:17:30) When the term “AI” ceases to make sense(1:20:12) What will humans worship in the tech age? 
// SUPPORT THIS CHANNEL //Newsletter: https://mailchi.mp/jordanbpeterson.com/youtubesignupDonations: https://jordanbpeterson.com/donate // COURSES //Discovering Personality: https://jordanbpeterson.com/personalitySelf Authoring Suite: https://selfauthoring.comUnderstand Myself (personality test): https://understandmyself.com // BOOKS //Beyond Order: 12 More Rules for Life: https://jordanbpeterson.com/Beyond-Order12 Rules for Life: An Antidote to Chaos: https://jordanbpeterson.com/12-rules-for-lifeMaps of Meaning: The Architecture of Belief: https://jordanbpeterson.com/maps-of-meaning // LINKS //Website: https://jordanbpeterson.comEvents: https://jordanbpeterson.com/eventsBlog: https://jordanbpeterson.com/blogPodcast: https://jordanbpeterson.com/podcast // SOCIAL //Twitter: https://twitter.com/jordanbpetersonInstagram: https://instagram.com/jordan.b.petersonFacebook: https://facebook.com/drjordanpetersonTelegram: https://t.me/DrJordanPetersonAll socials: https://linktr.ee/drjordanbpeterson #JordanPeterson #JordanBPeterson #DrJordanPeterson #DrJordanBPeterson #DailyWirePlus
Transcript
Hello, everyone, watching on YouTube or listening on associated platforms. I'm very excited today to be bringing you two of the people I admire most intellectually, I would say, and morally, for that matter: Jonathan Pageau and Jim Keller, very different thinkers.
Jonathan Pageau is a French-Canadian liturgical artist and icon carver, known for his work featured in museums across the world. He carves Eastern Orthodox among other traditional images and teaches
an online carving class. He also runs a YouTube channel, The Symbolic World, dedicated to the exploration
of symbolism across history and religion. Jonathan is one of the deepest religious thinkers I've ever
met. Jim Keller is a microprocessor engineer, known very well in the relevant communities and beyond them for his work at Apple and AMD
among other corporations. He served in the role of architect for numerous game-changing processors
has co-authored multiple instruction sets for highly complicated designs
and is credited for being the key player behind AMD's renewed ability to compete with Intel in the high-end CPU
market.
In 2016, Keller joined Tesla, becoming vice president
of Autopilot hardware engineering. In 2018,
he became a senior vice president at Intel.
In 2020, he resigned due to disagreements
over outsourcing production, but quickly found a new position at
Tenstorrent as chief technical officer. We're going to sit today and
discuss the perils and promise of artificial intelligence. And
it's a conversation I'm very much looking forward to. So welcome to
all of you watching and listening. I thought it would be interesting to have a
three-way conversation. Jonathan and I have been talking a lot lately, especially with John
Vervaeke and some other people as well, about the fact that it seems necessary for us,
for human beings, to view the world through a story. In fact, when we describe the structures that govern
our action and our perception, that is a story. And so we've been trying to puzzle out,
I would say, to some degree, on the religious front, what might be the deepest stories,
and I'm very curious about the fact that we perceive the world through a story, human
beings do, and that seems to be a fundamental part of our cognitive architecture, and of
cognitive architecture in general, according to some of the world's top neuroscientists.
And I'm curious, and I know Jim is interested in cognitive processing and
and building systems that in some sense seem to run in a manner analogous to the manner
in which our brains run.
And so I'm curious about the overlap between the notion that we have to view the world
through a story and what's happening on the AI front.
There's all sorts of other places that we can take the conversation.
So maybe I'll start with you, Jim.
Do you want to tell people what you've been working on and maybe give a bit of background to everyone about how you conceptualize artificial intelligence?
Yeah, sure. So first I'll say technically I'm not an artificial intelligence researcher. I'm a computer architect. And I'd say my skill set goes from, you know, somewhere around the
atom up to the program. So we, we make transistors out of atoms, we make logical
gates out of transistors, we make computers out of logical gates. We run programs
on those. And recently we've been able to run programs fast enough to do
something called an artificial intelligence
model or neural network, depending on how we say it.
And then we're building chips now that run artificial intelligence models fast.
And we have a novel way to do it, a company I work at.
But lots of people are working on it. And I think we were sort of taken by surprise what's
happened in the last five years.
How quickly models started to do interesting and intelligent
seeming things.
There's been an estimate that human brains do about 10
to the 18th operations a second.
It sounds like a lot.
It's a billion billion operations a second.
And a little computer, you know, the processor in your phone probably does 10 billion operations
a second.
You know, and then if you use the GPU, maybe 100 billion, something like that. And big modern AI computers like OpenAI
use this or Google or somebody, they're doing like 10 to the 16th, maybe slightly more
operations in a second. So they're within a factor of 100 of a human brain's raw computational
ability. And by the way, that could be completely wrong; our understanding of how the brain
does computation could be wrong,
but lots of people have estimated,
based on number of neurons, number of connections,
how fast neurons fire, how many operations a neuron firing
seems to involve.
I mean, the estimates range by a couple orders of magnitude.
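As a rough back-of-the-envelope check on the figures Jim cites here (all of them order-of-magnitude estimates, not measurements), the gap works out like this:

```python
# Rough comparison of the operation rates mentioned above.
# All numbers are order-of-magnitude estimates, not measurements.

brain_ops_per_sec = 1e18   # estimated human brain: a billion billion ops/sec
phone_cpu_ops     = 1e10   # phone processor: ~10 billion ops/sec
phone_gpu_ops     = 1e11   # phone GPU: ~100 billion ops/sec
big_ai_computer   = 1e16   # large AI training system: ~10^16 ops/sec

print(f"brain vs. big AI computer: {brain_ops_per_sec / big_ai_computer:.0f}x")  # ~100x
print(f"brain vs. phone GPU:       {brain_ops_per_sec / phone_gpu_ops:.0e}x")
print(f"brain vs. phone CPU:       {brain_ops_per_sec / phone_cpu_ops:.0e}x")
```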
But when our computers got faster,
we started to build things called
language models and image models
that do fairly remarkable things.
So what have you seen in the last few years
that's been indicative of this of the change
that you described as revolutionary?
What are the computers doing now
that you found surprising because of this increase in speed?
Yeah, you can have a language model read
a 200,000 word book and summarize it fairly accurately.
So it can extract out the gist.
The gist of it.
Can you do that with fiction?
Yeah.
Yeah, and I'm going to introduce you to a friend
who took a language model and changed it
and fine-tuned it with Shakespeare and
used it to write screenplays that are pretty good. And these kinds of things
are really interesting and then we were talking about this a little bit
earlier. So when computers do computations, you know a program will say add a
equal b plus c.
The computer does those operations on representations of information, ones and zeros.
It doesn't understand them at all.
The computer has no understanding of it.
But what we call a language model translates information like words and images and ideas
into a space where the program,
the ideas and the operation that does on them
are all essentially the same thing.
Right, so a language model can produce words
and then use those words as inputs.
And it seems to have an understanding
of what those words are.
So I'm not-
Which is very different from how a computer
operates on data.
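A minimal sketch of what "translating words into a space" might look like: each word becomes a vector, and the model's operations happen on those vectors rather than on the raw symbols. The tiny hand-made vectors below are invented purely for illustration, not taken from any real model.

```python
import math

# Toy word "embeddings": each word is mapped to a small vector.
# Real models learn vectors with hundreds or thousands of dimensions;
# these values are made up purely to illustrate the idea.
embedding = {
    "cat":  [0.9, 0.1, 0.0],
    "dog":  [0.8, 0.2, 0.1],
    "tree": [0.1, 0.9, 0.2],
    "run":  [0.2, 0.1, 0.9],
}

def cosine(u, v):
    """Similarity between two word vectors (1.0 = pointing the same way)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The operations happen on the vectors, not on the characters 'c', 'a', 't'.
print(cosine(embedding["cat"], embedding["dog"]))   # high: related concepts
print(cosine(embedding["cat"], embedding["tree"]))  # lower: unrelated concepts
```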
About the language models.
I mean, my sense of at least in part how we understand a story is that maybe we're
watching a movie, let's say, and we get some sense of the character's goals, and then
we see the manner in which that character perceives
the world and we in some sense adopt his goals, which is to identify with character. And then
we play out a panoply of emotions and motivations on our body because we now inhabit that goal space
and we understand the character as a consequence of mimicking the character with our own physiology.
And you have computers that can summarize the gist of a story, but they don't have that underlying
physiology. First of all, it's a theory that your physiology has anything to do with it. You could
understand the character's goals and then get involved in the details of the story.
And then you're predicting the path of the story.
And also having expectations and hopes for the story.
And a good story kind of takes you on a ride,
because it teases you with doing some of the things you expect,
but also doing things that are unexpected.
And possibly that creates emotion.
That could create...
Yeah, it does.
So in an AI model, so you can easily have a set of goals.
So you have your personal goals.
And then when you watch the story, you have those goals.
You put those together.
Like how many goals is that?
Like the story's goals and your goals, hundreds, thousands?
Those are small numbers, right?
Then you have the story. The AI
model can predict the story too, just as well as you can.
How does it do that?
That's the thing that I find mysterious: as the story progresses, it can look at
the error between what it predicted and what actually happened. And then
iterate on that. Right. So you would call that emotional excitement, disappointment.
Anxiety?
Anxiety.
Yeah, definitely.
Well, that would all be part of what anxiety does.
Those states.
Discrepancy.
Like some of those states are manifesting your body
because you trigger hormone cascades, you know, a bunch of stuff.
But you also can just scan your brain and see that stuff move around.
Right.
Right.
And, you know, the AI model can have an error function and look at the
difference between what it expected and not and you could call that the
emotional state. Yeah, yeah, well, I was just talking through
that speculation. No, no, I think that's accurate. But, you know, we can make an
AI model that could predict the result of a story probably better than the
average person. Well, some people really get it.
You know, they're really well educated about stories or they know the genre or something.
But yeah.
But you know, these things, what you see today is the capacity of the models:
if you have it start describing a plot, it will make sense for a while, but it will slowly
stop making sense.
But that's possibly
just the capacity of the model right now,
and the model is not grounded enough in a set of, say, goals and reality to keep making sense.
So what do you think would happen, Jonathan? This is all I think associated with the kind of things that we've talked through to some degree. So one of my hypotheses, let's say about deep stories, is that they're
meta-gists in some sense. So you could imagine a hundred people telling
you a tragic story, and then you could reduce each of those tragic stories to the gist
of the tragic story, and then you could aggregate the gists, and then you'd have something like a meta-tragedy. And I would say the deeper the gist, the more religious-like the
story gets. And that's part of it; that idea is part of the reason that I wanted to bring you
guys together. I mean, one of the things that what you just said makes me wonder is, imagine that you
took Shakespeare and you took Dante and you took like the canonical Western writers
and you trained an AI system to understand the structure
of each of them and then now you have,
you could pull out the summaries of those structures,
the gists and then couldn't you pull out another gist
out of that so it'd be like the essential element of Dante and Shakespeare.
And eventually I want to get biblical.
I want to hear what Jonathan thinks so far.
And then-
So here's one funny thing to think about.
You use the word pull out.
So when you train a model to know something, you can't just look in it and say,
what does it know?
You have to query it. Right. Right. Right. Right. What's the next sentence in this paragraph?
What's the answer to this question? There's the thing on the internet now,
now called prompt engineering. And it's the same way I, I can't look in your
brain to see what you think. Yeah. I have to ask you what you think.
Because if I killed you and scanned your brain and got the current state of all the synapses and stuff,
A, it'd be dead, which would be sad.
And B, I wouldn't know anything about your thoughts.
Your thoughts are embedded in this model that your brain carries around.
And you can express it in a lot of ways.
And so, the...
So you could add...
How do you train...
So, this is my big question is, I mean,
because the way that I've been seeing it until now is that
artificial intelligence is, it's based on us.
It's not, it doesn't exist independently from humans
and it doesn't have care.
The question would be, why does the computer care?
Yeah, that's not true.
Well, why does the computer care to get the gist of the story?
I think you're asking the wrong question.
You can train an AI model on the physics and reality
and images in the world just with images.
There are people who are figuring out how to train a model
with just images.
The model itself still conceptualizes things
like tree and dog and action and run,
because that's all existing in the world.
And you can actually train,
so when you train a model with all the language and words,
so all information has structure.
And I know you're a structure guy from your video.
So if you look around you at any image,
every single point you see makes sense.
Yeah.
It's a teleological structure.
It's like a purpose laid into the structure.
So this is something we've talked about.
And so it turns out all the words
that have ever been spoken by human beings also have
structure.
Right.
Right.
And so physics has structure.
And it turns out that some of the deep structure of images and actions and words and sentences
are related.
Like, there is actually a common core of, like you imagine,
there's like a knowledge space and sure,
there's details of humanity where, you know,
they prefer this accent versus that.
Those are kind of details, but they're coherent
in the language model, but the language models themselves
are coherent with our world ideas.
And humans are trained in the world just the way the AI models are trained in the world.
Look, a little baby, as it's looking around, it's training on everything it sees when
it's very young.
And then it's training rate goes down and it starts interacting with what it's learning,
interacting with the people around it.
But it's trying to survive.
It's trying to live.
It has- the infant or the child has to-
Neurons aren't trying.
The weights in the neurons aren't trying to live.
What they're trying to do is reduce the error.
So neural networks generally are predictive things.
Like what's coming next?
What makes sense?
How does this work?
And when you train an AI model,
you're training it to reduce the error in the model.
And if your model's big.
Okay, let me ask you about that.
So, first of all,
so babies are doing the same thing.
Like, they're looking at stuff go around,
and in the beginning their neurons are just randomly firing,
but as it starts to get object permanence and look at stuff, it starts predicting what
will make sense for that thing to do.
And when it doesn't make sense, it'll update.
It's all-
Basically, it compares its prediction to the events and then it will adjust its prediction.
So in a story prediction model, the AI would predict the story, then compare it to its
prediction and then fine tune itself slowly as it trains itself.
Okay, so our universe, you could ask it to say, given the set of things, tell the rest
of the story and it could do that.
Right, and the state of it right now is there are people having conversations
with them that are pretty good.
So I talked to Carl Friston about this prediction idea in some detail.
And so Friston, for those of you who are watching and listening, is one of the world's
top neuroscientists.
And he's developed an entropy enclosure model of conceptualization, which is analogous
to one that I was working on.
I suppose across approximately the same time frame. So the first issue, and this has been well established in the neuropsychological literature
for quite a long time, is that anxiety is an indicator of discrepancy between prediction
and actuality.
And then positive emotion also looks like a discrepancy reduction indicator.
So imagine that you're moving towards a
goal and then you evaluate what happens as you move towards the goal and if you're moving in
the right direction, what happens is, you might say, what you expect to happen, and that
produces positive emotion, and it's actually an indicator of reduction in entropy.
That's one way of looking at it.
The point is that you have a bunch of words in there that are psychological definitions
of states, but you could say there's a prediction and an error of prediction and you're reducing
error.
What I'm trying to make a case for is that your emotions directly map that, both positive
and negative emotion, look like they're signifiers of discrepancy reduction, on the positive and negative emotion side.
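A toy sketch of the discrepancy idea being discussed here: an agent predicts, observes, and treats a shrinking prediction error as a positive signal and a growing one as a negative signal. This is only a cartoon of the Friston-style account, with made-up numbers.

```python
# Cartoon of "emotion as a readout of prediction error":
# shrinking error -> positive signal, growing error -> negative signal.
# The prediction and the observation sequence are invented for illustration.

prediction = 10.0
observations = [4.0, 6.0, 8.0, 9.5, 7.0, 10.0]

previous_error = abs(prediction - observations[0])
for observed in observations[1:]:
    error = abs(prediction - observed)
    if error < previous_error:
        valence = "positive (discrepancy shrinking)"
    elif error > previous_error:
        valence = "negative (discrepancy growing)"
    else:
        valence = "neutral"
    print(f"observed={observed:4.1f}  error={error:4.1f}  {valence}")
    previous_error = error
```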
But then there's a complexity that I think is germane to part of Jonathan's query, which
is that, so the neuropsychologists and the cognitive scientists have talked a long time about
expectation, prediction, and discrepancy reduction. But one of the things they haven't talked about
is it isn't exactly that you expect things. It's that you desire them. You want them to happen.
Like, because you could imagine that there's, there's in some sense, a literally infinite number of
things you could expect. And we don't strive only to match prediction. We strive to bring about what it
is that we want. And so we have these pretty set systems that are teleological, that are
motivational systems. Well, it depends. Like if you're sitting
idly on the beach and a bird flies by, you expect it to fly along in a regular path.
Right. But you don't really want that to happen. Yeah, but you don't want it to turn into something that could pick out your eyes either.
So that's a long.
But you're kind of following it with your expectation to look for discrepancy.
Yes.
Now, you'll also have a, you know, depends on the person somewhere between 10 and a million
desires, right?
And then you also have fears and avoidance.
And those are context.
So if you're sitting on the beach with some anxiety
that the birds are gonna swarm around you
and peck your eyes out.
So then you might be watching it much more attentively
than somebody who doesn't have that worry for it.
For example, but both of you can predict
where it's gonna fly.
And you will both notice a discrepancy, right?
One way of conceptualizing fundamental motivations is that they're like
primary prediction domains.
Right. And it helps us narrow our attentional focus, because I know
when you're sitting and you're not motivated in any sense,
you can be doing, in some sense, just
trivial expectation computations,
but often we're in a highly motivated state.
And what we're expecting is bounded by what we desire
and what we desire is oriented as Jonathan pointed out
towards the fact that we want to exist.
And one of the things I don't understand
and wanted to talk about today is how the computer models, the AI models, can generate intelligible
sense without mimicking that sense of motivation. Because you said, for example, they can just
derive the patterns from observations of the objective world.
So, again, I don't want to do all the talking, but
so AI, generally speaking, like when I first learned about it, had two behaviors, they call it inference and training. So inference is: you have a trained model, so say you give
they call it inference and training. So inferences, you have a trained model, so you say you give
it a picture and say, is there a cat in it, and it tells you where the cat is. That's inference.
The model has been trained to know where a cat is. And training is the process of giving it an input
and an expected output.
And when you first start training the model,
it gives you garbage out.
It's like an untrained brain would.
And then you take the difference between the garbage output
and the expected output and call that the error.
And then, the big revelation, they invented
something called back propagation with gradient descent.
That means take the error and divide it up across the layers and correct those calculations
so that when you put a new thing in, it gives you a better answer.
And then to somewhat my astonishment, if you have a model of sufficient capacity and you train it with
a hundred million images, if you give it a novel image and say,
tell me where the cat is, it can do it. So training is the process of doing
a pass with an expected output and propagating an error back through the network, and inference is
the behavior of putting something in and getting something out.
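A minimal, self-contained sketch of the two behaviors Jim describes: a forward pass (inference), an error against the expected output, and gradient descent pushing that error back into the weights (training). Real networks have many layers and millions of parameters; this one fits a single linear unit, just to show the loop.

```python
import random

# Toy data: the "expected outputs" follow y = 3x + 1 (the pattern to be learned).
data = [(x, 3.0 * x + 1.0) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]

w, b = random.random(), random.random()   # untrained model: garbage out at first
learning_rate = 0.01

def infer(x):
    """Inference: put something in, get something out."""
    return w * x + b

# Training: run an input through, take the difference from the expected
# output, and push that error back into the weights (gradient descent,
# the one-parameter version of backpropagation).
for epoch in range(2000):
    for x, expected in data:
        error = infer(x) - expected
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")                   # should approach 3 and 1
print(f"inference on a novel input: {infer(10.0):.2f}")  # should be close to 31
```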
Yeah, I think I'm really following.
But there's a third piece, which is what the new models do,
which is called generative model.
So for example, say you put it in a sentence,
and you say predict the next word.
This is the simple thing. So it predicts the next word.
So you add that word to the input and say,
predict the next word.
So it contains the original sentence
and the word you generated.
And it keeps generating words that make sense
in the context of the original sentence
and the additions.
Right.
This is the simplest basis.
And then it turns out you can train
this to do lots of things. You can train it to summarize a sentence. You can train it to answer
a question. There's a big thing about, you know, like Google every day has hundreds of millions of
people asking questions, getting answers, and then rating the results. You can train a model
with that information.
So you can ask a question and it gives you a sensible answer.
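A sketch of the generative loop described here: predict the next word, append it to the input, and repeat. The "model" below is just a lookup table of word pairs built from a couple of made-up sentences; real language models use learned networks, but the feed-the-output-back-in loop is the same.

```python
import random
from collections import defaultdict

# The "training text" is a couple of made-up sentences, just to build the table.
training_text = (
    "the cat sat on the mat and the cat slept . "
    "the dog sat on the rug and the dog slept ."
)

# "Train": record which words follow which.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(prompt, n_words=8):
    """The generative loop: predict a next word, append it, feed it back in."""
    output = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(output[-1])
        if not candidates:          # nothing ever followed this word; stop
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the cat"))
```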
But I think in what you said, I actually have the issue that has been going through my mind
so much is when you said, you know, people put in the question and then they rate the answer.
My intuition is that the intelligence still comes from humans.
In the sense that it seems like in order to train whatever AI,
you have to be able to give it a lot of power and then say at the beginning,
this is good, this is bad, this is good, this is bad.
Like reject certain things, accept certain things.
In order to then reach a point when then you train the AI.
That's what I mean about the care.
So the care will come from humans
because the care is the one giving it the value,
saying this is what is valuable,
this is what is not valuable in your calculation.
So when they first, so there's the program called AlphaGo
that learned how to play Go better than a human.
So there's two ways to train the model.
One is they have a huge database of lots of go games
with good winning moves.
So they trained the model with that.
And that worked pretty good.
And they also took two simulations of Go
and they did random moves.
And all that happened was,
these two simulators played Go games against each other, and they just recorded
whichever moves happened to win.
And it started out really horrible.
And they just started training the model, and this is called adversarial learning.
It's not particularly adversarial.
It's like, you make your moves randomly, and you train a model.
And so they train multiple models, and over time those models got very good.
And they actually got better than human players.
Because the humans have limitations about what they know,
whereas the models could experiment in a really random space
and go very far.
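A rough sketch of the self-play idea being described: two copies of a policy play against each other, the winner's moves are recorded as good, and the policy is trained toward them. The game below is a trivial counting game (add 1 or 2; whoever reaches exactly 10 wins), chosen only so the whole loop fits in a few lines; the point is the structure, not Go itself.

```python
import random
from collections import defaultdict

TARGET = 10
MOVES = (1, 2)

# Tabular "policy": a preference score for each move in each state (the running total).
policy = defaultdict(lambda: {move: 1.0 for move in MOVES})

def choose(total, explore=0.2):
    """Mostly follow the current policy, but keep making some random moves."""
    if random.random() < explore:
        return random.choice(MOVES)
    prefs = policy[total]
    return max(prefs, key=prefs.get)

def play_one_game():
    """Two copies of the policy play each other; return the winner's (state, move) list."""
    total, player = 0, 0
    history = {0: [], 1: []}
    while True:
        move = min(choose(total), TARGET - total)   # never overshoot the target
        history[player].append((total, move))
        total += move
        if total == TARGET:
            return history[player]                  # this player just won
        player = 1 - player

def train(n_games=5000):
    for _ in range(n_games):
        for state, move in play_one_game():         # reinforce the winning moves
            policy[state][move] += 1.0

train()
# From a total of 8, taking 2 wins immediately, so the policy should prefer it.
print({move: round(score) for move, score in policy[8].items()})
```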
Yeah, but experiment towards the point.
So there's lots to the game.
Yes, well, but you can experiment
towards all kinds of things, it turns out.
And humans are also trained that way.
Like when you were learning, you were reading,
you were told, this is a good book, this is a bad book,
this is good sentence construction, this is good, this isn't.
So you've gotten so many error signals over your life.
Well, that's what culture does in large parts.
And culture does that, religion does that,
your everyday experience does that, your family.
So we embody that.
And we're all, and everything that happens to us,
we process it on the inference path,
which generates outputs.
And then sometimes we look at that and say,
hey, that's unexpected, or that got a bad result,
or that got bad feedback.
And then we back propagate that and update our models.
So really well-trained models can then train other models.
The big ones now are trained by some of the smartest people in the world.
So the biggest question, the biggest question that I that comes now based on what you said
is because my main point is to try to show how it seems like artificial intelligence
is always an extension of human intelligence. It remains an extension of human intelligence.
And maybe the way to have-
That may not be true at all.
So do you think that at some point the artificial intelligence
will be able to, because the goals, recognizing cats,
writing plays, all these goals, our goals which are based on embodied human existence.
Could you train, we could an AI at some point develop a goal
which would be incomprehensible to humans
because of its own existence?
Yeah, I mean, for example,
there's a small population of humans that enjoy math, right? And they are pursuing,
you know, adventures in math space that are incomprehensible to 99.99% of humans.
But they're interested in it, and you could imagine, like, an AI program
working with those mathematicians and coming up with very novel math ideas and
then interacting with them. But they could also, you know, if some AI's were
elaborating out really interesting and detailed stories, they could come up with
stories that are really interesting. We're gonna see it pretty soon like
art. Could there be a story that is interesting only to the AI and not interesting to us?
That's possible.
So stories are like, I think, some high-level information space.
So in the computing age of big data, there's all this data running on computers,
but only humans understood it, right? The computers don't. So AI programs are now at the state where the information, the processing, and the feedback
loops are all kind of in the same space.
They're still relatively rudimentary to humans.
Like, some AI programs in certain things are better than humans already, but for the most
part, they're not.
But it's moving really fast.
So you could imagine, you know, I think in five or
ten years most people's best friends will be AIs. And, you know, they'll know you really well and
be interested in you. And, you know, that's unlike your real friends. Yeah, real friends are
problematic. They're only interested in you when you're interesting. Yeah, yeah, real friends are.
The AI systems will love you even when you're dull and miserable.
Well, there's and there's so much idea space to explore. And humans have a wide range. Some
humans like to go through their everyday life, doing their everyday things. And some people
spend a lot of time like you a lot of time reading and thinking and talking and arguing and debating.
you know, and there's going to be, like, say, a diversity of possibilities for what a thinking thing can do when the thinking
is fairly unlimited.
So I'm still curious about pursuing this issue that Jonathan has been developing.
So there's a literally infinite number of ways,
virtually infinite number of ways that we could take images
of this room.
Right?
Now if a human being is taking images of this room,
they're going to be, they're going to sample a very small space
of that infinite range of possibilities
because if I was taking pictures in this room,
in all likelihood, I would take pictures of objects
that are identifiable to human beings
that are functional to human beings
at a level of focus that makes those objects clear.
And so then you could imagine that the set of all images
on the internet has that implicit structure
of perception built into it.
And that's a function of what human beings find useful.
I mean, I could take a photo of you that was, the focal depth was here and here and here
and here and here and two inches past you.
Now I suppose you could.
There's a technology for that called light fields.
So then you could, if you had that picture properly done, then you could move around it
and image and see.
But yeah, fair enough, I get your point.
Like the human-recorded data has,
here's two things.
Has that biology built into it?
It has our biology built into it,
but also an unbelievably detailed encoding
of how physical reality works.
So every single pixel in those pictures,
even though you kind of selected the view,
the focus, the frame, it still encoded
a lot more information than you're processing.
Right. It turns out if you take a large number of images of things in general,
so you've seen these things where you take a 2D image and turn it into a 3D image.
Yeah. Right.
The reason that works is even in the 2D image, the 3D of the room actually got embedded
in that picture along the way.
And if you have the right understanding of how physics and reality works, you can reconstruct
the 3D model.
And so, you could, you know, an AI scientist might cruise around the world with infrared and
radio wave cameras and they might take pictures of all different kinds of things and every
once in a while they'd show up and go, hey, the sun, you know, I've been staring at the
sun in the ultraviolet and radio waves for the last month.
And it's way different than anybody thought, because humans tend to look at light and visible spectrum.
And you know, it could be some really novel things come out of that.
Well, so, so, but humans also we live in the spectrum we live in, because it's a pretty good one for planet Earth.
Like it wouldn't be obvious that AI would start some different place, like visible spectrum is
interesting for a whole bunch of reasons.
Right, so in a set of images that are human derived, you're saying that there's the way
I would conceptualize that is that there's two kinds of logos embedded in that.
One would be that you could extract out from that set of images what was relevant to
human beings, but you're saying that the fine structure of the
objective world outside of human concern is also embedded in the set of images. And that an AI
system could extract out a representation of the world, but also a representation of what's
motivating to human beings. Yes. And some human science already does look at the sun in
radio waves and other things, because they, you know, get different angles on how things work.
Yeah, well, I guess it's a curious thing. It's like the same with, like, buildings
and architecture, they mostly fit people.
Well, you know, there's a reason for that.
The reason why I keep coming back to, hammering, the same point is that even in terms of the development of the AI:
developing AI requires an immense amount of money, energy,
you know, and time.
So that's a transient thing. In 30 years it won't cost anything.
It's going to change so fast, it's amazing. Supercomputers used to cost millions of dollars,
and now your phone is a supercomputer.
So the time between millions of dollars
and $10 is about 30 years.
So it's like, I'm just saying,
it's like the time and effort isn't a thing in technology.
It's moving pretty fast.
That's just to date.
Yeah, but even making, let's say, I mean, I guess maybe this is the
nightmare question, like, could you imagine an AI system which becomes completely autonomous,
which is creating itself even physically through automized factories, which is, you know,
programming itself, which is creating its own goals, which
is not at all connected to human endeavor.
Yeah, I mean, individual researchers, I have a friend who I'm going to introduce you to
him tomorrow.
He wrote a program that scraped all the internet and trained an AI model to be a language
model on a relatively small computer.
And in 10 years, the computer he could easily afford
would be as smart as a human.
So he could train that pretty easily.
And that model could go on Amazon
and buy 100 more of those computers and copy itself.
So yeah, we're 10 years away from that.
And then why would it do that?
I mean, it's all about the motivational question.
I think that that's what even Jordan and I both
have been coming at from the outset.
So you have an image, right?
You have an image of SkyNet or of the matrix,
in which the sentient AI is actually
fighting for its survival.
So it has a survival instinct, which is pushing it to self perpetuate,
like to replicate itself,
or create variations of itself, in order to survive,
and it identifies humans as an obstacle to that.
You know?
Yeah, so you have a whole bunch of implicit assumptions there.
So humans, last I checked, are unbelievably competitive.
And when you let people get into power with no checks on them, they typically run amok.
It's been a historical experience.
And then humans are self-regulating to some extent.
Obviously, with some serious outliers, because they self-regulate with each other and humans and AI models at some point
will have to find their own calculation of self-regulation and trade-offs about that.
Because AI doesn't feel pain, at least as far as we know.
A lot of humans don't feel pain either.
Humans feeling pain or not doesn't stop a whole bunch of activity.
I mean, the fact that we feel pain doesn't stop-
doesn't regulate a lot of people. Right. Right. I mean, there's definitely people like,
you know, children, if you threaten them, but, you know, go to your room and stuff, you can regulate
them that way. But some kids ignore that completely and adults are the same. And it's often counterproductive. So, right, you know, culture,
societies, and organizations, we regulate each other, you know, sometimes. In competition
and cooperation. Do you think that, well, we've talked about this to some degree, I mean, when you look at how fast things are moving now, and as you push that along:
when you look out ten years and you see the relationship between the AI systems that are being built and human beings, what do you envision?
Or can you envision it? Well, yeah, so like I said, I'm a computer guy and I'm watching this with, let's say, some fascination as well.
Ray Kurzweil said, you know, progress accelerates. Yeah, right?
So we have this idea that 20 years of progress is 20 years, but the last 20 years of progress was 20 years, and the next 20
years will probably be five to 10.
Right, right.
And you can really feel that. At some level that causes social stress, independent
of whether it's AI or Amazon deliveries.
There's so many things that are going into the stress of it all.
Well, there's progress, which is an extension
of human capacity.
And then there's this progress, which I'm hearing about,
the way that you're describing it,
which seems to be an inevitable progress
towards creating something which is more powerful than you.
And so what is that?
I don't even understand that drive.
What is that drive to create something which can supplant you?
So look at the average person in the world.
Right.
So the average person already exists in this world.
Because the average person is halfway up the human hierarchy.
There's already many people more powerful than any of us.
They could be smarter, they could be richer, they could be better connected.
We already live in a world like very few people are at the top of anything.
Right. So that's already a thing. So basically the drive to make someone a superstar that's there, the drive to elevate someone above you. That would be the same drive that is bringing us to
creating these ultra powerful machines.
Because we have that, like we have a drive to elevate. Like, you know, when we see
a rock star that we like, people want to submit themselves to that. They want to dress like them.
They want to raise them up above them as an example, something to follow, right? Something to
subject themselves to. You see that with leaders. You see that in the political world.
And in
teams, you see that in sports teams, the same thing. And so you think that that's the
right.
Well, we've always tried to build things that are beyond us. You know, I mean,
it's about: are we building a God? Is that what people, is that
the drive that is pushing someone towards this? Because when I hear what you're describing,
Jim, I hear something that is extremely dangerous,
right?
Sounds extremely dangerous to the very existence of humans, yet I see humans acting and moving
in that direction almost without being able to stop it.
As if there's no one that-
I think it is unstoppable.
Well that's one of the things we've also talked about is because I've asked Jim straight
out, you know, because of the hypothetical danger associated with this,
why not stop doing it?
And well, part of his answer is the ambivalence
about the outcome, but also that it isn't obvious at all
that in some sense it's stoppable.
I mean, it's the cumulative action
of many, many people that are driving this along.
And even if you took out one player, even a key player,
the probability that you do anything but slow it infinitesimally is quite low.
That's because there's also a massive payoff for those that will succeed. It's also set up
that way. People know that, at least until the AI takes over or whatever, whoever is on the line towards increasing the power of the AI
will rake in major rewards. Right. Well, so there's a cognitive acceleration, right? Yeah, I
could recommend Iain Banks as an author, English author, I think. He wrote a series of books
he called the Culture novels. And it was a world where there were humans, and then there were AIs as smart as the smartest humans and AIs that could do more than humans,
but there were some AI's that were much, much smarter. And they lived in harmony because they
mostly all pursued what they wanted to pursue. Humans pursued human goals and super smart AI's
pursued super smart AI goals. And, you know, they communicated and worked with each other,
but mostly, you know, they were different;
where they were different enough
that it could have been problematic,
their goals were different enough
that they didn't overlap.
Because one of the, one of the,
that would be my guess.
It's like these ideas where these super AI's get smart
and the first thing they do is stomp out the humans.
It's like, you don't do that.
Like, you don't wake up in the morning and think,
I have to stomp out all the cats.
No, it's not about the cats.
The cats do cat things and the ants do ant things
and the birds do bird things.
And super smart mathematicians do smart
mathematician things.
And guys who like to build houses do build-house things.
And everybody, you know, the world, there's so much space in the intellectual zone that people tend to,
in a good society, pursue the stuff that you do, and then with the people
in your zone you self-regulate. And you also, even in the social
strata, self-regulate. I mean, the recent political events of the last
10 years, the weird thing to me has been why, you know, people with power have
been overreaching to take too much from people with less. Like that's bad
regulation.
But one of the aspects of increase in power
is that increase in power is always mediated
at least in one aspect by military,
by let's say physical power on others.
And we can see that technology is linked
and has been linked always to military power.
And so the idea that there could be some AIs that will be our friends or whatever is
maybe possible.
But the idea that there will be some AIs which will be weaponized seems absolutely inevitable
because increasing power is always, increasing technological power always moves towards military.
So we've lived with atomic bombs since the 40s, right?
So the, I mean, the solution to this has been mostly,
you know, some form of mutual assured destruction, or,
attacking me, like the response to attacking me is so much worse than the...
Yeah, but it's also because we have reciprocity.
We recognize each other as the same.
So if I look into the face of another human, there's a limit of how different I think that
person is from me.
But if I'm hearing something described as the possibility of superintelligences that have
their own goals, their own cares, their own structures, then how much mirror is there between these two groups of people, these
two groups?
Well, is there a projection?
Jim's objection seems to be something like: we may be making, when we
doomsay, let's say, and I'm not saying there's no place for that, the presumption
of something like a zero sum competitive landscape, right?
Is that the idea and the idea behind movies
like the Terminator is that there are only so many resources
and the machines and the human beings
would have to fight over it.
And you can see that that could easily be a preposterous
assumption.
Now I think that one of the fundamental
points you're making, though, is also there will definitely be people that will weaponize
AI. And those weaponized AI systems will have as their goal something like the destruction
of human beings, at least under some circumstances. And then there's the possibility that that
will get out of control because
the most effective systems at destroying human beings might be the ones that win, let's
say. That could happen independently of whether or not it is a true zero sum competition.
Yeah, and also the effectiveness of military stuff doesn't need very smart AI to be a lot
better than it is today. You know, like the Star Wars movies, where, you know, tens of thousands of years in the future,
super highly trained, you know, fighters can't hit somebody running across the field. That's silly, right? You can already make a gun that can hit
everybody in the room without aiming it. You know, there's like,
the military threshold is much lower than any intelligence threshold, like, for danger.
And, like you said, that we self-regulated through the nuclear crisis is interesting.
I don't know if it's because we thought that the Russians were like us, I kind of suspect
the problem was that we thought they weren't like us.
But we still managed to make some calculation to say that any kind of attack would be mutually
devastating. Well, the destructive power of the military we already have
so far exceeds what the planet can take. I'm not sure, like, adding intelligence
to it is the tipping point. I think the more likely thing is things that are truly smart
in different ways will be interested in different things. And then the possibility for, let's
say mutual flourishing is really interesting. And I know artists using AI already
to do really amazing things, and that's already happening.
Well, when you're working on the frontiers of AI development
and you see the development of increasingly intelligent
machines, I mean, I know that part of what drives you,
I don't want to put words in your mouth, but what drives
intelligent engineers
in general, which is to take something that works and make it better and maybe to make
it radically better and radically cheaper.
So there's this drive toward technological improvement, and I know that you like to solve
complex problems, and you do that extraordinarily well.
But is there also a vision of a more abundant form of human flourishing emerging from the development?
So what do you see happening?
You said a year ago, it's like we're going to run out of energy.
What's next, we're going to run out of matter?
Like our ability to do what we want in ways that are interesting and for some people beautiful
is limited by a whole bunch of things,
partly technological, and partly we're stupidly divisive.
But there is also a reality, which is that one of the things technology has been
is of course an increase in power towards desire, towards human desire.
And that is represented in mythological stories where let's say technology is used to accomplish
impossible desire, right?
We have, you know, the story of building the bull for
the wife of the King of Minos, in order for her to be inseminated by a bull,
we have the story of the,
sorry, Frankenstein, et cetera,
the story of the golem where we put our desire
into this increased power.
And then what happens is that we don't know our desires.
That's one of the things that I've also been worried about
in terms of AI, that we don't know our desires, that we
act, we have secret desires that enter into what we do, that people aren't totally aware
of.
And as we increase these systems in power, those desires, let's say,
like the idea, for example, of the possibility of having an AI friend.
And the idea that an AI friend would be the best friend you've ever had because that
friend would be the nicest to you, would care the most about you, would do all those things.
That would be an exact example of what I'm talking about, which is, it's really the story
of the genie, right?
It's the story of the genie in the lamp, where the genie says to the person: what do you wish?
And I have unlimited power to give it to you.
And so I give him my wish,
but that wish has all these underlying implications
that I don't understand,
all these underlying possibilities.
Yeah, but the cool thing,
the moral of almost all those stories is
having unlimited wishes will lead to your downfall.
And so humans, like if you give a young person
unlimited amount of stuff to drink for six months,
they're gonna be falling down drunk
and they're gonna get over it.
Right?
Having a friend that's always your friend no matter what.
Probably gonna get boring.
Well, the literature on marital stability indicates that. So there's a sweet spot with regards
to marital stability in terms of the ratio of negative to positive communication.
So if on average, you receive five positive communications and one negative communication
from your spouse, that's on the low threshold
for stability. If it's four positive to one negative, you're headed for divorce. But interestingly
enough, on the other end, there's a threshold as well, which is that if it exceeds 11 positive
to one negative, you're also moving towards divorce. So there might be self-regulating mechanisms that would in some
sense take care of that. You might find a yes-man AI friend extraordinarily boring, very, very rapidly.
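Read as a simple band check, the thresholds cited here look like this (the specific numbers are Jordan's summary of the marital-stability literature, not verified here):

```python
def stability_band(positive, negative):
    """Classify a positive:negative communication ratio against the cited thresholds:
    roughly 5:1 as the low edge of stability and roughly 11:1 as the high edge."""
    ratio = positive / negative
    if ratio < 5:
        return "below ~5:1: trending toward instability"
    if ratio > 11:
        return "above ~11:1: also trending toward instability"
    return "within the cited 5:1 to 11:1 band"

print(stability_band(4, 1))    # too few positives
print(stability_band(8, 1))    # in the band
print(stability_band(12, 1))   # too many (the 'yes-man' end)
```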
But as opposed to an AI friend that was interested in what you were interested in, that was actually
interesting. Like, you know, we go through friends in the course of our lives, like different friends
are interesting
at different times. And some friends we grow with, and that
continues to be really interesting for years and years and
other friends, you know, some people get stuck in their
thing. And then you've moved on or they've moved on or
something. So yeah, so I tend to think of a world where
there's more abundance and more possibilities and more
interesting things to do as an interesting one.
Okay, okay.
And modern society has let the human population, and some people think this is a bad thing,
but I don't know.
I'm a fan of it.
The modern population has gone from hundreds of millions to billions of people.
That's generally
been a good thing. We're not running out of space. So, you know, some
of your audience has probably been in an airplane. If you look out the window, the country
is actually mostly empty. The oceans are mostly empty. Like, we're weirdly good at
polluting large areas, but as soon as we decide not to, we don't have to. Most of our energy and pollution problems are technical.
Like we can stop polluting, like electric cars are great.
So there's so many things that we could do tactically.
I forget the guy's name.
He said the earth could easily support a population of a trillion people.
And trillion people will be a lot more people doing random stuff.
And he didn't imagine that the future population would be a trillion humans and a trillion
AI's, but it probably will be.
So it will probably exist on multiple planets, which will be good the next time an asteroid
shows up.
So what do you think about, so one of the things that seems to be happening, tell me
if you think I'm wrong here, I think it's germane to-
I just want to make the point, yeah:
you know, where we are compared to living in the Middle Ages, our lives are longer, our families
are healthier, our children are more likely to survive. Like many, many good things happened.
Like, setting the clock back wouldn't be good. And you know, if we have some care and people who actually care
about how culture interacts with technology
for the next 50 years, you know, we'll get through this.
Hopefully more successful than we did the atomic bomb
and the cold war.
But it's okay.
So it's a major change.
I mean, there's, like, your worries,
you know, I mean, they're relevant,
but also, Jonathan, your stories about how humans have faced abundance and
faced evil kings and evil overlords. Like we have thousands of years of history of
facing the challenge of the future and the challenge of things that cause radical change.
Yeah, that's very valuable information.
But for the most part, nobody's succeeded
by stopping change.
They've succeeded by bringing to bear on the change
our capability to self-regulate the balance.
Like a good life isn't having as much gold as possible.
That's a boring life. A good life is, you know, having some quality friends and doing what you want
and having some insight into life and some optimal challenge. And, you know,
and in a world where a larger percentage of people can have, well, live in relative abundance
and have tools and opportunities,
I think is a good thing.
Yeah, and I don't want to pull back abundance,
but what I have noticed is that our abundance
brings a kind of nihilism to people.
And I don't, like I said, I don't wanna go back.
I'm happy to live here and to have these tech things.
But I think it's something that I've also noticed.
That increase of the capacity to get your desires
when that increases to a certain extent also leads to a kind of nihilism where exactly that,
well, I wonder, Jonathan, I wonder if that's partly a consequence of the erroneous maximization
of short-term desire.
I mean, one of the things that you might think about that could be dangerous on the AI
front is that we optimize the manner in which we interact with our electronic gadgets to capture short-term
attention.
Right?
Because there's a difference between getting what you want right now and getting what you
need in some more mature sense across a reasonable span of time.
And one of the things that does seem to be happening online, and I think it is driven
by the development of AI systems is that we're
assaulted by systems that parasitize our short-term attention. And at the expense of longer-term
attention, and if the AI systems emerge to optimize attentional grip, it isn't obvious to me that
they're going to optimize for the attention that works over the medium to long run, right?
They could conceivably maximize something like whim-centered attention.
Yeah, because all the virality is based on that, all the social media and everything
are all based on this reduction, this reduction of attention, this reduction of desire to
its nearest reach, let's say, and that desire,
right, to like, to click all these things there.
Yeah, now.
Yeah, exactly.
So, but that's something that, you know, so for reasons that are somewhat puzzling,
but maybe not.
The business models around a lot of those interfaces are around, you know, the user is
the product, and, you know, the advertisers are trying to get your attention.
But that's something culture could regulate.
We could decide that, no, we don't want tech platforms to be driven by advertising money.
That would be a smart decision, probably.
And that could be a big change.
And what you see as an old term, see, well, the problem
for a lot of products.
The markets drive that in some sense, right?
And I know they're driving that machine.
But we can take steps, like, you know,
at various times, you know, alcohol has been illegal.
Like, you can, we society can decide
to regulate all kinds of things.
And, you know, sometimes some things need
to be regulated and some things don't.
Like when you buy a hammer, you don't fight with your hammer for its attention, right?
Hammer's a tool.
You buy one when you need one.
Nobody's marketing hammers to you.
Like that has a relationship that's transactional to your purpose, right?
Our technology has become a thing where...
But there's a relationship. There's a relationship between, let's say, high human goals, something like attention and status, and what we talked about, which is the idea of elevating something higher in order to see it as a model. See, these are where intelligence exists in the human person.
And so when we notice that in the systems, in the platforms,
these are the aspects of intelligence, which are being weaponized in some ways,
not against us, but are just kind of being weaponized because they're the most beneficial in the short term for generating our constant attention.
And so what I mean is that that is what the AIs are made of, right?
They're made of attention, prioritization, you know, good, bad.
What is it that is worth putting energy into in order to predict towards a telos?
And so I'm seeing that it's so, so what?
Yeah, you know, we could disconnect them suddenly.
It seems very difficult to me.
Yeah, so first I want to give an old example.
So after World War II, America went through this amazing building boom, building suburbs.
And the American dream was, you could have your own house, your own yard, in a suburb with a good school, right?
So in the 50s, 60s, early 70s, they were building that like crazy. By the time I grew up, I lived in a suburban dystopia, right?
And we found that that as a goal wasn't a good thing
because people ended up in houses separated
from social streets and structures.
And then new towns are built around like a hub with places to go and eat.
So there was a good that was viewed in terms of opportunity and abundance, but it actually was a failure culturally. And then in some places it was modified, and in some places it continues; some places are still dystopian suburban areas, and in some places people simply learned to live with it.
Right?
So, that has to do with attention, by the way.
It has to do with a subsidiary hierarchy, like a hierarchy of attention, which is set
up in a way in which all the levels can have room to exist, let's say.
And so, you know, these new systems, the new way, let's say, the New Urbanist movement, similar to what you're talking about, that's what they've understood. It's like, we need places of intimacy in terms of the house. We need places of communion
in terms of, you know, parks and alleyways and buildings where we meet and a church,
all these places that kind of manifest our community together.
Yeah, so those existed coherently for long periods of time, and then the abundance of the post-World War II era and some ideas about what life could be like caused big change, and that change satisfied some needs, people got houses, but broke community needs, and then came new sets of ideas about what's the synthesis,
what's the possibility of having your own home,
but also having community, not having to drive
15 minutes for every single thing,
and some people live in those worlds,
and some people don't.
Do you think we'll be smart enough?
So one of the problems is, we were smart enough to solve some of those problems because we had 20 years. But now, one of the things that's happening, as you pointed out earlier, is that we're going to be producing equally revolutionary transformations. I think what's driving some of the concern in this conversation is: are we going to be intelligent enough to direct with regulation the transformations of technology as they start to accelerate?
Well, we've already... look at what's happened online. We've inadvertently, for example, radically magnified the voices of narcissists, psychopaths, and Machiavellians. And we've done that so intensely, partly as a consequence of AI mediation, I would say, that I think it's destabilizing the entire block.
It's just destabilizing part of it. Like Scott Adams points out, you just block everybody that acts like that. I don't pay attention to people that talk like that.
Yeah, but they seem to be ready.
Well, there are still places that are sensitive to it. Like, 10,000 people can make a storm and get some corporation to fire somebody. But I think we're like five years from that being over. A corporation will go, 10,000 people out of 10 billion? Not a big deal.
OK, so you think that's a learning moment that will re-regulate.
What's natural to our children is so different than what's natural to us, but what was natural
to us was very different from our parents.
So some changes get accepted generationally, really.
So what's made you so optimistic?
What do you mean, optimistic?
Well, most of the things that you have said today,
and maybe it's also because we're pushing you,
I mean, you really, you really do.
It was my nephew Kyle, who's a really smart, clever guy.
He called me a, what did he call it, a cynical optimist.
Like I believe in people. Like I like people, but also people are complicated, and they've all got all kinds of nefarious goals. Like I worry a lot
more about people burning down the world than I do about artificial intelligence, just because
you know people, well, you know people, they're difficult, right?
And the, but the interesting thing is in aggregate, we mostly self-regulate.
And when things change, you have these dislocations.
And then it's up to people who talk and think, and while we're having this conversation,
I suppose, to talk about how, how do we re-regulate this stuff?
Yeah. Well, because one of the things that the increase in power has done in terms of AI, and you can see it with Google, and you can see it online, is that there are certain people who hold the keys, let's say, who hold the keys to what you see and what you don't see.
So you see that on Google, and you know it if you know what search to make, where you realize that this is actually being directed by someone who now has a huge amount of power in order to direct my attention towards their ideological purpose.
And so that's why, like, I think that, to me, I always tend to see AI as an extension of human power. Even though there is this idea that it could somehow become totally independent, I still tend to see it as an increase of human capacity, and whoever is able to hold the keys to that will have an increase in power.
And that can be, and I think we're already seeing it.
Well, that's not really any different, though, is it, Jonathan, from the situation that's always confronted us in the past?
I mean, we've always had to deal with the evil uncle
of the king, and we've always had to deal with the fact
that an increase in ability could also produce
a commensurate increase in tyrannical power, right?
I mean, so that might be magnified now, and maybe the danger in some sense is more acute, but perhaps the positive possibility is more present as well.
Because you can train an AI to find hate speech, right? You can train an AI to find hate speech and then to act on that hate speech immediately. And now we're not only talking about social media; what we've seen is that this is now encroaching into payment systems, into people losing their bank accounts, their access to different services.
And so this idea of automation.
Yeah, there's an Australian bank that already has decided that it's a good thing to send
all of their customers a carbon load report every month.
Right? And to offer them hints about how they could reduce their polluting purchases, let's say.
And well, at the moment that system is one of voluntary compliance, but you can certainly see
in a situation like the one we're in now, that the line between voluntary compliance and involuntary
compulsion is very, very thin.
Yeah.
So I'd like to say: during the early computer world, computers were very big and expensive.
And then they made minicomputers and workstations, but they were still corporate-only.
And then the PC world came in.
All of a sudden, PCs put everybody online; everybody could suddenly see all kinds of stuff, and people could take a Freedom of Information Act request, put it online somewhere, and a hundred thousand people could see it. It was an amazing democratization moment.
And then there was a similar but smaller revolution with the world of smartphones and apps.
But then we've had a completely different set of companies, by the way; from what happened in the '70s and '80s to today, it's a very different set of companies that control it.
And there are people who worried that AI
will be a winner take all thing.
Now, I think so many people are using it,
and they're working on so many different places,
and the cost is gonna come down so fast.
That pretty soon, you'll have your own AI app
that you'll use to mediate the internet.
Just strip out, you know, the endless stream of ads,
and you can say, well, is this story objective? Well, here are the 15 stories, and this one has been manipulated this way, and that one has been manipulated that way.
And you could say, well, what's more like the real story?
And the funny thing is information that's broadly
distributed and has lots of input
is very hard to fake the whole thing.
So right now, a story can be pushed through major media, and if they can control the narrative, everybody gets the fake story.
But if the media is distributed across a billion people who are all interacting in some useful way, there's real signal there. And if somebody stands up and says something that's not true, everybody knows it's not true.
So a good outcome, with people thinking seriously, would be the democratization of information and, you know, objective facts. The same thing that happened with PCs versus corporate central computers could happen again.
Yeah, so you have an increase in power. And that's a real possibility. But the problem is that an increase in power always creates the two at the same time. And so, you know, an increase in power creates, depending on which direction it happens, an increase in decentralization, an increase in access, it creates all of that. But then it also, at the same time, creates the counter-reaction, which is an increase in control and an increase in centralization. And so now, the bigger the power is, the bigger the waves will be.
And so the image that 1984 presented to us, of people going into newspapers and changing the headlines and taking the pictures out and doing all of that, that now obviously can happen with just a click.
So you can click and you can change the past.
You can change the past,
you can change facts about the world because they're all held, you know, online.
And we've seen it happen, obviously, in the media recently.
But so, does decentralization win over centralization? It seems hard to see how that's even possible.
I mean, it's also interesting. Like when Amazon became a platform, suddenly any mom-and-pop business could sell on Amazon, or eBay, or a bunch of platforms, which had an amazing impact, because any business could get to anybody.
But then the platform itself started to control the information flow.
But at some point, that will turn into people going, well, why am I letting somebody control my information flow?
Amazon objectively doesn't really have any capability.
Right.
So, like you point out, the waves are getting bigger, but they're real waves.
It's the same with information.
The information's all online.
It's also on a billion hard drives.
Right.
So somebody says, I'm going to erase an objective fact. That distributed information system would say, yeah, go ahead and erase it anywhere you want. There's another thousand copies of it.
Yeah, and that's what it's like. But again, will it really play out that way?
Yeah, yeah, and this is where thinking people have to say,
yeah, this is a serious problem.
Like if humans don't have anything to fight for, they get lazy and, you know, a little bit dopey, in my view. We do have something to fight for.
And that's worth talking about. What would a great world look like, with distributed human intelligence and artificial intelligence working together in a collaborative way to create abundance and fairness and, you know, some better way of arriving at good decisions and at what the truth is? That would be a good thing. But, you know, "leave it to the experts and then the experts will tell us what to do," that's a bad thing.
Yeah. Well, so, the model that you just laid out, which I think is very interesting, I'm not so much optimistic about that.
Yeah. Well, it did happen on the computational front. I mean, it happened a couple of times, in both directions.
Okay. Right. You know, the PC revolution was amazing.
Yeah. Right. And Microsoft was a fantastic company and enabled everybody to write a $10 or $50 program to use.
Yeah. Right. And then at some point, they were also, you know, let's say a difficult company. And they made money off a lot of people and became extremely valuable. Now, for the most part, they haven't been that directional in telling you what to do and how to do it. But they are a money-making company.
You know, Apple created the App Store, which is great, but then they also take 30% of the App Store profits, and there's a whole section of the internet that's fighting with Apple about their control of that platform. And in Europe, you know, they've decided to regulate some of that, which, that should be a conversation, a social, cultural conversation about how that should work. Yeah.
So do you see the more likely, certainly the more desirable, future as something like a set of distributed AIs, many of which we are in personal relationship with, in some sense, the same way that we're in personal relationship with our phones and our computers, and that that would give people the chance to fight back, so to speak, against the system?
And there's lots of people really interested in distributed platforms.
And one of the interesting things about the AI world is, there's a company called OpenAI, and they open-source a lot of it.
The AI research is amazingly open. It's all done in public. People publish new models all the time.
You can try them out.
People, there's a lot of startups doing AI
in all different kinds of places.
It's a very curious phenomenon.
And it's kind of like a big, huge wave.
It's not like... you can't stop a wave with your hands.
Well, when you think about waves, there are actually two in the book of Revelation, which describes the end, or the finality of all things, or the totality of all things, which is maybe a way for people who are more secular to understand it. And in that book, there are two interesting images about technology. One is that there is a dragon that falls from the heavens, and that dragon makes a beast. And then that beast makes an image of the beast. And then the image speaks. And when the image speaks, people are so mesmerized by the speaking image that they worship the beast, ultimately. So that is one image of, let's say, making and technology in Scripture, in Revelation.
But there's another image,
which is the image of the heavenly Jerusalem.
And that image is more an image of balance.
It's an image of the city which comes down from heaven
with a garden in the center
and then becomes this glorious city.
And it says the glory of all the kings is gathered into the city, the glory of all the nations is gathered into this city.
So now you see a technology which is at the service of human flourishing and takes the
best of humans and brings it into itself in order to kind of manifest.
And it also has hierarchy, which means it has the natural at the center and then has the
artificial as serving the natural, you could say.
So those two images seem to reflect these two waves that we see, and this kind of idea of an artificial intelligence which will be ruling over us or speaking over us. But there's a secret person controlling it, even in Revelation. Like there's a beast controlling it and making it speak.
So now we're mesmerized by it and then this other image.
So I don't know if you ever thought about those two images in Revelation as being related to technology, let's say.
Well, I don't think I've thought about those two images in the specific manner that you
described, but I would say that the work that I've been doing,
and I think the work you've been doing too,
and the public front reflects the dichotomy
between those images,
and it's relevant to the points that Jim has been making.
I mean, we are definitely increasing our technological power,
and you can imagine that that'll increase our capacity
for tyranny and also our capacity for abundance.
And then the question becomes,
what do we need to do in order to increase the probability
that we tilt the future towards Jerusalem and away from the beast?
And the reason that I've been concentrating on helping people bolster their individual
morality to the degree that I've managed that is because I think that whether the outcome
is the positive outcome that in some sense Jim
has been outlining or the negative outcomes that we've been querying him about, I think that's
going to be dependent on the individual ethical choices of people at the individual level,
but then collectively, right? So if we decide that we're going to worship the image of the beast, so to speak, because we're mesmerized by our own reflection, that's another way of thinking about it, and we want to be victims of our own dark desires, then the AI revolution
is going to go very, very badly. But if we decide that we're going to aim up in some positive way,
and we make the right micro decisions, well, then maybe we can harness this technology to produce
a time of abundance in the manner that Jim is hopeful about. Yeah, and let me make two funny points.
So one is, I think there's going to be a continuum.
Like the word artificial intelligence won't actually make any sense soon.
Right.
So humans collectively... it's not that individuals don't know stuff, but collectively we know a lot more.
Right.
And the thing that's really good is, in a diverse society with lots of people pursuing individual, interesting, you know, ideas and worlds, like, we have a lot of things. And more people, more independence, generates more diversity. And that's a good thing, whereas the totalitarian society, where everybody's told to wear the same shirt, is inherently boring.
Like the beast speaking through the monster is inherently dull.
Right.
Like, but in an intelligent world, where not only can we have more intelligent things, but in some places go far beyond what most humans are capable of, in pursuit of interesting variety... I believe the information, or let's say intelligence, is essentially unlimited. And the unlimited intelligence won't be the shiny thing that tells everybody what to do.
That's sort of the opposite of interesting intelligence.
Interesting intelligence will be more diverse, not less diverse.
Like that's a good future.
And your second description, that seems like a future we're working for and also we're
fighting for.
And that means concrete things today.
And also, it's a good conceptualization.
I see the messages my kids are taught: don't have children, the world's going to end, we're going to run out of everything, you're a bad person, why do you even exist? It's like, these messages are terrible; the opposite is true.
More people would be better. We live in a world of potential abundance.
It's right in front of us. There's so much energy available. It's just amazing.
It's possible to build technology without pollution consequences; pollution is what's called an externalized cost. We know how to do that. We can have very good clean technology.
We can do lots of interesting things.
So if the goal is maximum diversity, then across the line we draw between human intelligence and artificial intelligence, you'll see all these kinds of really interesting partnerships and all kinds of things, and more people doing what they want, which is the world I want to live in.
Yeah, but to me it seems like the question is going to be related to attention, ultimately; that is, what are humans attending to at their highest? What is it that humans care for in the highest? You know, in some ways you could say, what are humans worshiping? And depending on what humans worship,
then their actions will play out in the technology
that they're creating, in the increase in power
that they're creating.
Well, that's, well, and if we're guided by the negative vision,
the sort of thing that Jim laid out
that is being taught to his children,
you can imagine that we're in for a pretty damn dismal future,
right?
Human beings are a cancer on the face of the planet.
There's too many of us.
We have to accept top-down, compelled limits to growth.
There's not enough for everybody.
A bunch of us have to go because there are too many people on the planet.
We have to raise up the price of energy so that we don't burn the planet up with carbon
dioxide pollution, et cetera. It's a pretty damn dismal view of the potential that's in front of us.
And so the world should be exciting, and the future should be exciting.
Well, we've been sitting here for about 90 minutes, bandying back and forth both visions of abundance and visions of apocalypse. And I mean, I've been heartened, I would say, over the decades, talking to Jim about what he's doing on the technological front. And I think part of the reason I've been heartened is because I do think that his vision is guided primarily by a desire to help bring about something approximating life more abundant.
And I would rather see people on the AI front
who are guided by that vision working on this technology.
But I also think it's useful to do what you and I have been doing in this conversation, Jonathan, acting in some sense as friendly critics, and hopefully learning something in the interim.
Do you have anything you want to say in conclusion?
I mean, I just think that the question is linked very directly
to what we've been talking about now for several years. It's just the question of attention,
the question of what is the highest attention? And I think the reason why I have more alarm, let's say, than Jim is that I've noticed that in some ways human beings have now come, let's say, to worship their own desires. And the strange thing is that worshiping their own desires has actually led to an anti-human narrative. You know, this weird, almost suicidal desire that humans have. And so I think
that seeing all of that together in the increase of power, I do worry that the image of the beast is closer to what will manifest itself.
And I feel like during COVID, that sense in me was accelerated tenfold in noticing
to what extent technology was used, especially in Canada, how technology was used to instigate
something which looked like authoritarian systems. And so I am worried about it,
but I think like Jim, honestly,
although I say that, I do believe that in the end truth wins.
I do believe that in the end,
these things will level themselves out.
But I think that because I see people rushing towards
AI almost, you know,
almost like lemmings going off a cliff,
I feel like it is important to sound the alarm
once in a while and say, you know,
we need to orient our desire before we go
towards this extreme power.
So I think that that's mostly the thing that worries me
the most and that preoccupies me the most.
But I think that ultimately in the end,
I do share Jim's positive vision.
And I do think that I do believe the story has a happy ending.
It's just, we might have to go through hell before we get there.
I hope not.
So Jim, how about you?
What have you got to say in closing?
A couple years ago, a friend who's my age said,
oh, kids coming out of college,
they don't know anything anymore.
They're lazy.
And I thought... I was working at Tesla at the time.
And we hired kids out of college
and they couldn't wait to make things.
They were like, it's a hands-on place, it's a great place.
And I've told people, if you're not in a place where you're doing stuff, where it's growing, where it's making things, you need to go somewhere else.
And also, I think you're right, the mindset of,
if people are feeling this is a productive, creative
technology, that's really cool.
They're going to go build cool stuff.
And if they think it's a shitty job and they're just
tuning the algorithm so they can get more clicks,
they're going to make something, beastly, you know,
beastly, perhaps.
And the stories, you know, our cultural tradition, are super useful, both cautionary and, you know, explanatory about something good. And I think it's up to us to go do something about this. And I know people are working really hard to make the internet a more open place, to make sure information is distributed, to make sure AI isn't a winner-take-all thing. These are real things, and people should be talking about them, and they should be worried. But the upside is really high, and we've faced these kinds of technological changes before. This is a big change; AI is bigger than the internet. I've said this publicly: the internet was pretty big, and, you know, this is bigger. It's true. But the possibilities are amazing. And so, like, we could utilize them. In some sense, we could achieve it. And the world is interesting. I think it'll be a more interesting place.
Well, that's an extraordinarily cynically optimistic place to end. I'd like to
thank everybody who is watching and listening and thank you, Jonathan, for
participating in the conversation. It's much appreciated as always. I'm going to
talk to Jim Keller for another half an hour
on the Daily Wire Plus platform.
I use that extra half an hour usually to walk people through their biography. I'm very interested in how people develop successful careers and lives and how their destiny unfolded in front of them.
And so for all of those of you who are watching and listening,
who might be interested in that, consider heading over to the Daily Wire Plus platform
and partaking in that.
And otherwise, Jonathan, we'll see you in Miami in a month and a half to finish up the Exodus seminar. We're gonna release the first half of the Exodus seminar we recorded in Miami on November 25th, by the way. So that looks like it's in the can.
Yeah, I can't wait to see it.
Did you see the rest of it in the room? Yeah. Yeah, yeah, absolutely. I'm really excited about it. And
just for everyone watching and listening, I brought a group of scholars together about two and a
half months ago. We spent a week in Miami, some of the smartest people I could gather around me,
to walk through the book of Exodus. We only got through halfway because it turns out there's more information there than I had originally considered, but it went exceptionally well and I learned a lot and Exodus means X photos. That means the way forward and
well, that's very much relevant to everyone today as we strive to find our way forward through all these complex issues, such as the ones we were talking about today. So I would also encourage people to check that out when it launches on
November 25th. I learned more in that seminar than any seminar I ever took in my life,
I would say. So it was good to see you there; we'll see you in a month and a half. Jim, we're going to talk a little bit more on the Daily Wire Plus platform, and I'm looking forward to meeting the rest of the people in your AI-oriented community tomorrow and learning more about what seems to be an optimistic version of a life more abundant. And to all of you watching and listening:
Thank you very much.
Your attention isn't taken for granted and it's much appreciated.
Hello, everyone.
I would encourage you to continue listening to my conversation with my guest on dailywireplus.com.