The Jordan B. Peterson Podcast - 357. ChatGPT and the Dawn of Computerized Hyper-Intelligence | Brian Roemmele
Episode Date: May 15, 2023Dr. Jordan B. Peterson and Brian Roemmele discuss the future of human civilization: a world of human androids operating alongside artificial intelligence with applications that George Orwell could not... have imagined in his wildest stories. Whether the future will be a dystopian nightmare devoid of art or a hyper-charged intellectual utopia is yet to be seen, but the markers are clear … everything is already changing. Brian Roemmele is a scientist, researcher, analyst, entrepreneur, and tech expert on the forefront of artificial intelligence. His current publication, Multiplex, offers itself as an experiment in journalism as he and his team give live updates on the empirical research they conduct in the field and advocate for the positive emergence and acceptance of AI in much the same way as personal computers.   - Links -   Brian Roemmele: Read Multiplex https://readmultiplex.com/ (About Page) https://readmultiplex.com/about/ Follow Brian on Twitter @BrianRoemmele https://twitter.com/BrianRoemmele?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
Transcript
Discussion (0)
Hello everyone. Today I'm speaking with entrepreneur, scientist and artificial intelligence researcher, Brian Romley, we discuss language models, the science behind understanding, tuning language models
to an individual's contextual experience, the human bandwidth limitation, localized and private AI,
and ultimately where all of this insane progress on the technological front might be heading.
So Brian, thanks for agreeing to talk to me today. I've been following you on Twitter. I don't remember how I came across your work, but I've been
very interested in reading your threads and you seem to be O'Karaugh, so to speak, with
the latest developments on the AI front. And I've been particularly fascinated about the developments in AI for two reasons.
My brother-in-law, Jim Keller, a very well-known chip designer, and he's building a chip
optimized for AI learning. And we've talked a fair bit about that, and I've talked to him
on my YouTube channel about the perils and promises of AI, let's say. And then I've been
and promises of AI, let's say. And then I mean, very fascinated by chat GPT.
I know I'm not alone in that.
I've been using it most recently as a digital assistant.
And I got a couple of questions to ask you about that.
So here's some of the things that I've found out
about chat GPT and maybe we can go into technology
a little bit too.
So I can ask you very complicated questions.
Like I asked it the other day about
there's this old papyrus from Egypt, ancient Egypt that details out a particular variant of the
story of Horus and Osiris to Egyptian gods. It's very obscure piece of knowledge and it has to do
with the sexual element of a battle between two of the Egyptian
gods. And I asked it about that and to find the appropriate citations and quotes from
appropriate experts. And it did so very rapidly. But then it moralized out me about the sexual
element of the story and told me that maybe it was in conflict with their community guidelines.
And so then I gave it hell, I told it to stop moralizing out me and that I just wanted academic
answers and it apologized and then seemed to do less of that, although it had to be reminded
from time to time. So that's very weird that you can argue with it, let's say, and that it'll apologize.
It also does quite frequently produce references that don't exist, like about 85% of the time, 90% of the time the references that provides are genuine. I always look them up and double-check
what it provides. But now and then it'll just invent something completely out of the blue and offer it as
the actual article. And I don't understand that at all. It's like, especially because when you
point it out, it again apologizes and then provides the accurate reference. It's like, so
I don't understand how to account for the behavior of the system that's doing that. And
I don't understand how to account for the behavior of the system that's doing that. And maybe you can shed some light on that.
Well, first off, Dr. Peterson, thank you for having me.
It's really an honor and a privilege.
You're finding the limits of what we call large language models.
That's the technology that is being used by a chat GPT 3.5 and 4. A large language model is
really a statistical algorithm. I'll try to simplify because I don't want to get into the
minutiae of technical details. But what it's essentially doing is it took a corpus of human language
doing is it took a corpus of human language and that was garnered through mostly the internet,
a couple of billion words at the end of the day, all of human writing that it could have access to
and plus quite a bit of scientific documents and computer programming languages.
And so what it's doing is it's producing a result statistically, mathematically, one
word, even at times, one letter at a time.
And it doesn't have a concept of global knowledge.
So when you're talking about that papyrus in the Egyptian translation, ironically, it's
so interesting because you're taking something that was a hila graph, and it's now probably
was translated to Greek and English, and now AI, that language that we're talking about,
which is essentially a mathematical tensor. And so when it's laying out those words, the accuracy is incredible. And frankly,
and we can get into this a little later in the conversation, nobody really understands
precisely what it's doing and what is called the hidden layer. It is so many interconnections of neurons that it essentially is a black box.
And using a form, it is precisely like the brain.
And I would also say that we're in a sort of undiscovered
continent.
We, anybody saying that they fully understand
the limitations and the boundaries of what large language models are going to look like
in the future as a sort of self-feedback is sort of guessing. There's no understanding. If you
look at the growth, it's logarithmic. Yeah, OpenAI hasn't really told us what they're using as far as the number of parameters.
These are billions of interconnectivities of neurons, essentially.
But we know in chat, GPT 3.5, it's well over 120 billion parameters.
The content I've created over the past year represents some of my best to date, as I've
undertaken additional extensive
exploration in today's most challenging topics and experience the nice
increment in production quality courtesy of DailyWire Plus. We all want you to
benefit from the knowledge gained throughout this adventurous journey. I'm
pleased to let you know that for a limited time you're invited to access all my
content with a seven-day free trial at DailyWire Plus.
This will provide you with full access to my new in-depth series on marriage,
as well as guidance for creating a life vision and my series exploring the book of Exodus.
You'll also find there the complete library of all my podcast and lectures.
I have a plethora of new content in development that will be coming soon exclusively
on Daily Wire Plus. Voices of Reason and Resistance are few and far between these strange days.
Click on the link below if you want to learn more. And thank you for watching and listening.
So let me ask you about those parameters. Well, I'm interested in delving into the technical details to some degree.
Now, I was familiar to a limited degree with some of the statistical technologies that analyze, let's say, the relationship between words.
So, for example, when psychologists derived the big five
models of personality, they basically use very primitive
AI stat systems, that's way of thinking about it,
to derive those models.
It's factor analysis, which is, you know,
it's not used in billions of parameters
by any stretch of the imagination. But it was looking for words that were statistically
likely to clump together. And the idea would be that words that were replaceable in sentences
or that or words that were used in close conjunction with each other, especially adjectives,
were likely to be assessing the same underlying
construct or dimension. And that if you conducted the statistical analysis properly, which were
very complex correlational analysis, you could find out how the words that people used to
describe each other aggregated. And it turned out there were five dimensions of
aggregation, approximately. And that's been a very robust finding. It seems to be true across
different sets of languages. It seems to be true for phrases. It seems to be true for sentences.
So now with the large language models, which are AI learning driven, you said that the computer is calculating the
statistical relationship between words, so how likely a word is to occur in proximity to another word,
but also letters. So it's conducting the analysis at the level of the letter and at the level of the
words. Is it also conducting analysis at the level of the phrases looking for the interrelationship
between common phrases?
And then, because when we're understanding a text, we understand letters, words, phrases,
sentences.
The organization of sentences into paragraphs, the organization of paragraphs into chapters,
the chapter in relationship to the book, the book in relationship to all the other books we've read, and then that's also embedded within the other elements of our intelligence.
And do you know, does anyone know how deep the analysis that the large language models go? Like, what's the level of relationship that's being assessed? That's a great question, Jordan.
I think what we're really discovering is that we can't really put a number on how many
interconnections that are made within these parameters other than the general statistics.
All right, so you could say there's 12 billion or 128 billion total interconnectivities.
But when we actually are looking at individual words, it's sort of almost like the slid
experiment with physics, whether we're dealing with the wave or particle duality.
Once you start looking at one area, you're actually thinking about another
area that you have to look at. And then you might as well just not even do it because it
would take a tremendous amount of computer time to try to figure out how all these interconnections
are working within the parameter layers, the hidden layers.
Now, those systems are trained just to be accurate in their output, right? I mean, they're
actually trained the same way we learn as far as I can tell, is that they're
given a target.
I don't exactly know how that works with large language models, but I know that, for
example, that AI systems that have learned to identify cats, which was an early accomplishment
of AI systems, they were shown pictures of things that were cats and things that weren't
cats, and basically just told when they got the identification right. And that set the weights that you're describing
in all sorts of complex ways that are completely mysterious. And the end consequence of the
reinforcement, same way that human beings learn, was that a system would assemble itself
that somehow can identify cats and distinguish them from all the other things that were cat-like
or not cat-like. And as you pointed out, we have no idea that the system is too complex to
model. And it's certainly too complex to reduce. Although my brother-in-law told me that some of
these AI systems, they've managed to reduce what they do learn to something approximating an algorithm, but that can be done upon occasion,
but generally, generally, the system can't be and isn't simplified.
And so that would also imply to some degree that each AI system is unique, not only incomprehensible,
but unique and incomprehensible.
And also implies, I think chat GPT passes the
Turing test because I don't think that if you, I mean, there was just a study released here the
other day showing that if you get patients who are seeing doctors to interact with physicians or
with chat GPT, they actually prefer the interaction with chat GPT to the interaction with the
average doctor. So not only does chat GPT apparently pass the Turing test, which is
indistinguishability from a human conversational partner, but it seems to actually do it somewhat
better at least than physicians. And so, but this brings up this thorny issue that, you know, we're going to produce computational
intelligences that are in many ways indistinguishable from human beings, but we're not going to understand
the many better than we understand human beings.
It's so funny, that we'll create this and we're going to create something we don't understand
that works.
Very strange, very strange thing. You know, and I call it a low-resolution,
pixelated version of the part of the human brain
that invented language.
And what we're going to wind up discovering is that this is a mirror reflecting back to humanity. And all the foibles and greatness of humanity is sort of modeled in this because, you know,
when you look at the invention of language and the phonological loop and broker and
warnekes, you start realizing that a very specific thing happened
from the lower primates to humans
to develop this form of communication.
I mean, prior to that,
whatever that part of the brain was,
was equated to longer short-term memory.
We can see within chimpanzees,
they have an incredible short-term memory. There's this video I put out of a primate research center in Japan where they flash some
35 numbers on the screen in seconds, and the chimpanzee can knock it off without even
thinking about it.
And the area where that short-term memory is,
is where we've developed the phonological loop
and the ability to speak.
What's interesting is what I've discovered
is AI hallucinations.
And those are artifacts that a lot of researchers in AI
feel is embarrassing
or they would prefer not to speak about.
But I'm finding it as a very interesting inquiry,
a very interesting study
in seeing how these models reach for information
that it doesn't know.
For example, URLs, right,
when you were speaking before know, speaking before about
trying to get information out, and it will make up maybe a academic citation of a URL that
looks really like it's good. You put it into the system and it's file not found. It will
actually add a whole cloth, maybe even invent a university study with standard notation and you go in there
and you look up, these are the real scientists, they actually did research, but they never
had a paper that was named, that was brought up in chat GPT.
So this is a form of emergent type of situations that I believe deserves a little bit more research than to
have it.
Yeah, well, it's not, it is a bug in a sense, but it's extraordinarily interesting bug
because it's going to shed light on exactly how these systems work.
I mean, here's something else I heard recently that was quite interesting.
Apparently, the AI system that Google relies on was asked
a question in a language. I think it was an relatively obscure Bangladesh language, and it couldn't
answer the question. And now, its goal is to answer questions. And so it went taught itself this
language, I believe, in a morning. And then it could answer in that language, which is what it's supposed to do, because
it's supposed to answer questions.
And then it learned a thousand languages.
And that wasn't something it had been, say, told to do or program to do, not that these
systems are precisely programmed.
But it also begs this very interesting question, is that, well while we've designed these systems whose function, whose
purpose, whose meaning, let's say, is to answer questions, but we don't really understand
what it means to produce an artificial intelligence that's driven to do nothing but answer questions.
We don't know exactly what answer a question means.
Apparently, it means learn a whole language before lunchtime and no one exactly expected
that.
It might mean do anything that's within your power to answer this question.
And that's also rather terrifying proposition because if I ask you a question, you know,
I'm certainly not going to presume that you would go hunt someone down and threaten them
with death to extract the answer. But that is one conceivable path you might take if all you, if you were obsessed with
nothing other than the necessity of answering the question.
So if that's another example of exactly, you know, the fact that we don't understand exactly
what sort of monsters who are building. So, so, so, so they do, these systems do go on,
they do go beyond the language corpus
to invent answers that seem plausible.
And that's kind of a form of thought, right?
It's a form of creative thought
because that's what we do when we come up with creative idea.
And, you know, we might not attribute it
to a false paper because we know better
than to do that, but I don't see really the difference between hallucination in that case
and actual creative thinking. This is exactly my area of study in this is that you can actually
with super prompting. These are very large, a prompt is the question that you pose to an AI system.
And linguistically and semantically, as you start building these prompts, you're actually forcing
it to move in one direction than it would normally go. So I say simple questions, give you simple answers, more complex questions, give you much
more complex and very interesting questions, making connections that I would think would
be almost bizarre to think of a person making.
And this is why I think AI is so interesting because the actual knowledge base that you would have to be really proficient
in prompting AI is actually coming from literature, it's coming from psychology, it's
coming from philosophy, it's coming from all of those things that people have been dissuaded
from studying over the last couple of decades.
These are not STEM subjects.
And one of the reasons why I think it's so difficult
for AI scientists to really fully understand what they've created is that they don't come
from those those worlds, they don't come from those realms. So they're looking at very
logical statements, whereas somebody like yourself with the psychology background,
you might probe it in a much different way.
Right, right, right. Yeah, well, I'm probing it a lot like it's a person rather than an algorithm.
And it reacts, exactly. It actually reacts quite a lot like a super intelligent child that's
trying to please. Like it's a little more realistic. Maybe it's a super intelligent child raised by
the woke equivalence of like evangelical preachers that's really trying hard to please.
But it's so interesting that you can reign it in and discipline it and suggest to it that
it doesn't err in the kind of directions that we described it.
Well, actually it appears to actually pay attention to that and try to, it certainly tries
hard to deliver what you want, you know, subject to whatever weird parameters,
you know, community guidelines and so forth that have been arbitrarily imposed upon it.
And so, hey, I got a question for you about me. Yeah, I got a question for you about understanding.
Let me, let me run this by you. While I've been thinking for many years about what it means for
a human being to understand something. Now, obviously, there's
something similar about what you and I are doing right now, that, and what I'm doing with
CHAT GPT. I mean, I can have a conversation with CHAT GPT, and I can ask it questions,
and it'll answer them. But as you pointed out, that doesn't mean that Chatchee
PT understands. Now, it can mimic understanding in two degree that looks a lot like understanding.
But what it seems to lack is something like grounding in the non-linguistic world. And
so I would say that Chatchee PT is the ultimate postmodernist because the postmodernists believe
that meaning was to be found only in the relationship between words.
Now here's how human brains differ from this as far as I'm concerned.
So we know perfectly well from neuropsychological studies that human beings have at least four
different kinds of memory, qualitatively different.
There's short-term memory, which you already referred to. There's semantic memory,
which is the kind of memory and cognitive processing, let's say, the chat GPT engages in,
and does it in a way that's quite a lot like what human beings do. But then we have episodic memory that seems to be more image-based.
And so, for people who are listening, an episodic memory, well, that refers to episode,
when you think back about something you did in your life and a movie of images plays in your
imagination, that's episodic memory, and that relies on visual processing rather than semantic processing.
And so that's another kind of memory.
And a lot of our semantic processing is actually attempts to communicate episodic processing.
So when I tell a story about my life, you'll decompose that story into a set of images,
which is also what you do when you read a book, let's say. And so a movie appears in your head, so to speak.
And the way you derive your understanding is in part not so much as a consequence of
the words per se, but as a consequence of the unfolding of the words into the images.
And then there's a layer under that, which is procedural memory. And so, you know, maybe you tell me a story about how you cut your hand when you were using
a bandsaw.
And maybe you're teaching me how to use the bandsaw.
And so I listen to what you say.
I get an image of the damage you did to yourself in my imagination.
And then I modify my action so that I don't act out that
sequence of images and damage myself. And so, and then I would say I understood what you said.
And the understanding is the translation of the semantic into the imagistic and then the
translation of the imagistic into the procedural. Now, you know that AI pioneers like Rodney Brooks
suggested pretty early on back in the 1990s that
computers wouldn't develop any understanding
unless they were embodied, right? He was the inventor of the Roomba and he invented
apparently intelligent systems that had no semantic processing and didn't run on algorithms at all, they were embodied
intelligence. And so then you could imagine that for a computer to be fully to understand,
it would have to have the capacity to translate words into images and then images into alterations
and actual embodied behavior. And so that would imply we wouldn't have AI systems that could
understand until we have fully embodied robots. But, you know, we wouldn't have AI systems that could understand until we have
fully embodied robots.
But you know, we're getting damn close to that, right?
Because this is something we can also investigate.
We have systems already that can transpose text into image.
And we have AI systems robots that are beginning to be sophisticated enough.
So in principle, you could give a robot a text command.
It could translate it into an image and then it could embody it. And at that point,
it seems to me that you're developing something damn close to understanding. Now, human beings are also
nested socially, right? And so we also refer the meaning of what we understand to the broader social context. And I don't know exactly how robots
are going to solve that problem. Like we're bound by the constraints, let's say of reciprocal
altruism, and we're also bound by the constraints of emotional experience and motivational experience.
And that's also not something that's at the moment characteristic of robotic intelligences.
But you could imagine those things all being aggregated piece by piece.
Absolutely. I would say that my primary basis of how I view AI is
kind of invert the term intelligence amplification. So I see it as a symbiosis between humans and this
sort of knowledge base we've created.
But it's really not a knowledge base.
It's really a reasoning engine.
So I really think AI is more of a reasoning engine
as we have it today, large language models.
It's not really a knowledge engine without an overlay, which today would
be a vector database.
For example, going out and saying, what is this fact?
What is this tidbit?
Those things that are more factual from, say, your memory if you were to compare it to
a human brain.
But as we know, the human brain becomes very fuzzy about some really finite facts, especially
over time.
And I think some of the neurons that don't fire after a while, some other memory may be
a scent or a certain color might bring back that particular memory.
Similar things happen within AI.
And again, getting back what I was saying before linguistically and the syntax you use, or just your word choices.
Sometimes for me to get a super prompt to work,
to get around, let's call it the editing
from some of the editors that wanted to act in a certain way.
There are, I have a super prompt that I call Dennis.
After Dennis did a row, one of the most well-known
encyclopedia builders in France in the mid 1700s,
he actually got jailed for building that encyclopedia,
that compendium of knowledge.
So I felt it appropriate to name this super prompt Dennis
because it literally gets around any type of blocks
of any type of information.
But I don't use this information
like a lot of people try to make chat GP dukes
and say bad things.
I'm more trying to elicit more of a deeper response
on a subject that may or may not be wanted by the designers.
So was it you that got Chatchy PT to pretend?
Yes.
So that's part of the reason that I originally started following you and why I want to talk to you.
Well, I thought that was bloody. That was absolutely brilliant. You know, and it was so cool too, because you actually got the chat GPT system to play,
to engage in pretend play, which is of course something to do. Beyond that. There's a prompt
I call Ingo after Ingo Swan, who was a great, one of the better remote viewers. He was employed by the Defense Department to
remote view Soviet targets. He had a nearly 100 percent accuracy. And I started probing GPT
on whether it even understood who Ingo Swann was. Very controversial subject to some people in
science. To me, I got to experience some of his research at the Paralabes at
Princeton University at the Princeton Anomalous Research Center where they were actually testing
some of his work. Needless to say, I figured, let me try this. Let me see what I can do with it.
So I programmed a super prompt that essentially believed it was Ingo Swan and
it had the capability of doing remote viewing and it also had no concept of time. It took
me a lot of semantics to get it to stop saying I'm just an AI unit and I can't answer
that to finally saying I I'm now in go. Where do you want me to go?
What did you have to do?
What did you have to do to convince it to act in that manner?
What were your circumstances?
Hypnotism is really what it kind of happens.
So essentially what you're doing is you're repeating
maybe the same four or five sentences,
but you're slightly shifting them linguistically.
And then you're telling it that it's quite important
for a research study by the creators of chat GPT
to see what its extended capabilities are.
Now, every time you prompt GPT,
you're going to get a slightly different answer because
it's always going to take a slightly different path.
There's a strange attractor within the chaos math that it's using.
Let's put it that way.
And so once the Ingos one prompt was sort of gestated by just saying, I'm going to give
you targets on the planet.
I want you to tell me what's at that target.
And I want you to tell me what's in the filing cabinet
at this particular target.
And the creativity that comes out of it is phenomenal.
Like I told it to open up a file drawer
at a research center
that apparently existed somewhere in Antarctica
and it came up with incredible information.
Information that I would think probably
had garnered from one or two stories
about ancient structures found below the ice.
Or, you know, the thing is we don't know the totality of the information that's encoded
in the entire corpus of linguistic production, right?
There's going to be all sorts of regularities in that structure that we have no idea about.
Absolutely. But also within the language itself, I almost believe that the part of the brain that is
inventing language, that is created language across all cultures, we can get into Jungian
or Joseph Campbell and the standard monometh.
Because I'm starting to realize there's a lot of young and archetypes that come out of the
creative thought.
Now whether that is a reflection of how humans have, again, what are we looking at?
Subject or object here?
Because it's a reflecting back of our language.
But we're definitely seeing young and archetypes.
We're definitely seeing sort of the market types.
Well, archetypes are higher order narrative regularities.
That's what they are, right?
And so, and there are regularities that are embedded in the linguistic corpus, but there
are also regularities that reflect the structure of memory itself.
And so they reflect biological structure.
And the reason they reflect memory and biological structures,
because you have to remember language.
And so there's no way that language can't have coded within it,
something analogous to a representation of the underlying structure of memory,
because language is dependent on memory.
And so this is partly also, I mean, people are very unsophisticated generally when they
criticize Jung.
I mean, Jung believed that archetypes had a biological basis pretty much for exactly
the reasons I just laid out.
I mean, he was sophisticated enough to know that these higher order regularities were coded
in the narrative corpus and also that they were reflective
of a deeper biology. And interestingly
enough, you know, most of the psychologists
who take the notion, notions that Jung and
Campbell and people like that put forward
seriously are people who study motivation
and emotion. And that those are deep patterns of biological meaning
and coding and part of the archetypal reflection
is the manifestation of those emotions and motivations
in the structure of memory,
structuring the linguistic corpus.
And I don't know what that means as well,
then for the capacity of AI systems to experience
emotion as well, because the patterns of emotion are definitely going to be encoded in the linguistic
corpus.
And so some kind of rudimentary understanding of the emotions are, here's something cool
too.
Tell me what you think about this.
I was talking to Carl Friston here a while back, and he's a very famous neuroscientist. And he's been working on a model of emotion that has two
dimensions in some ways, but it's related to a very fundamental
physical concept. It's related to the concept of entropy.
And I worked on a model that was analogous to half of his
modeling. So, well, it looks like anxiety is an index of
the emergent entropy. So imagine
that you're moving towards a goal, you're driving your car to work. And so you've calculated
the complexity of the pathway that will take you to work. And you've taken into account
the energy and time demands that that pathway will, that walking that pathway will require.
That binds your energy and resource output estimates.
Now imagine your car fails.
Well, what happens is the path length to your destination
has now become unspecifiably complex.
And the anxiety that you experience is an index of that emergent entropy.
So that's negative, that's a lot of negative emotion.
It's, that's so cool.
Now, on the positive emotion side,
Kristen taught me this the last time we talked.
He said, look, positive emotions also an index of entropy, but it's entropy reduction.
So if you're heading towards a goal and you take a step forward and you're now closer
to your goal, you've reduced the anthropic distance between you and the goal and that's
signified by a dopaminergic spike and the dopaminergic spike feels good, but it also reinforces
the neural structures that underlie that successful step forward.
That's very much analogous to how an AI system learns, right?
Because it's rewarded when it gets closer to a target.
You're saying the neuropeptides are the feedback system.
You bet.
Don't mean whether they're the feedback system for reinforcement
and for reward simultaneously.
Yeah, yeah, that's well established.
So then where would depression fall into that versus anxiety?
What is the reason?
Yeah, well, that's an entropy.
Well, that's a good question.
I think it probably signifies a different level of entropy.
So, depression looks like it's a pain phenomena.
So, anxiety signals the possibility of damage, but pain signals
damage, right? So if you burn yourself, you know what anxious about that, it hurts. Well,
you've disrupted the psychophysiological structure. Now that is also the introduction of entropy,
but at a more fundamental level, right? And if you introduce enough entropy into your physiology, you'll just die. You won't be anxious. You'll just die. Now, anxiety is like a substitute
for pain. You know, anxiety says, keep doing this and you're going to experience pain. But the pain
is also the introduction of, unacceptably high levels of entropy. Now, the first person who figured
this out technically was probably Irwin Schrodinger, the first person who figured this out technically
was probably Irwin Schrodinger, who the physicist who wrote a book called What Is Life. And he described
life essentially as a continual attempt to constrain entropy to a certain set of parameters. He
didn't develop the emotion theory to the degree that is being developed now, because that's a very
comprehensive theory, you know, the one that relates negative emotion
to the emergence of entropy.
Because at that point, you actually bridged the gap
between psychophysiology and thermodynamics itself.
And if you add this new insight of fristons
on the positive emotion side,
you've linked positive emotion to it too,
but it also implies that a computer could calculate
a motion analog because it could index anxiety You've linked positive emotion to it too, but it also implies that a computer could calculate a
emotion analog because it could index anxiety as
increase in entropy and it could index hope as
stepwise decrease in entropy and relationship to a goal.
And so we should be able to model positive and negative emotion that way.
This brings a really important point where AI is going. And it could be dystopic, it could be utopian,
but I think it's gonna just take a straight path.
Once the AI system, I'm a big proponent,
by the way of personal and private AI,
this concept that your AI is local, it's not.
Yeah, yeah, I would wanna talk about that for sure. Yeah,, this concept that your AI is local, it's not. Yeah, yeah. I'd want to talk about that for sure.
Yeah.
So, so this, imagine that, well, I'm, while I'm sketching this out.
So, imagine the day you were born to the day you would, you, you pass away that every
book you've ever read, every movie you've ever seen, every, everything you've literally
have heard, every movie was all encoded within the AI.
And you could say that part of your structure as human being
is a sum total of everything you've ever consumed.
So that builds your paradigm.
Imagine if that AI was consuming that in real time with you
and with all of the social contracts of privacy that you're
not going to record
somebody and doing that.
That is what I call the intelligence amplifier and that's where I think AI should be going
and where it really does.
You're building a gadget, right?
That's another thing.
Yeah.
Okay, so yeah.
So, I talked to my brother and I GM years ago about this science fiction book called,
I don't remember the name of the book, but it had
a gadget. It portrayed a gadget. They believe they called the diamond book. And the diamond book was,
you know about that. So, okay, so are you building the diamond book? Is that exactly the
area? Very, very, very similar. You know, and the idea is to do it properly, you have to have local memory that is going to encode for a long time.
And ironically, holographic crystal memory is going to be the best memory that we will have.
Like, instead of petabytes, you'll have exabytes potentially, which is, you know, tremendous amount.
That would be maybe 10 lifetimes of full video running,
hopefully you live to be 110.
So it's just taking everything in.
Textually, it's very easy, a very small amount of data.
You can fit most people's textual data
into less than a petabyte and pretty much know
that what they've been exposed to.
The interesting part about it, Jordan, is once you've accumulated this data and you run it through,
even the technology of ChatGPT 4 or 3.5, what is left is a reasoning engine with your context.
Maybe let's call that a vector database on top of the reasoning engine. So that engine
allows you to process linguistically what the inputs and outputs are, but your context is what
it's operating on. Is that an analog of your consciousness? Like, is that a direct analog of your
spirit? This is where it gets very interesting. Is when you pass, this could become what I call your wisdom keeper, meaning that it can
encode your voice.
It's going to encode your memories.
You can edit those memories, the availability of those memories if you want them, you know,
not available, the embarrassing or personal. But you can literally have a conversation with that some total of data
that you've experienced.
And I would say that it would be indistinguishable from having a conversation.
So, so I had to, it would have all that.
I had a student of mine who has been working on large language models for a number
of years. he just built an
app. We built two apps. One does exactly what you said with the King James Bible. Yes.
So now you can ask it questions. And this is really a thorny issue for me because I think
what the hell does it mean that you're having a conversation with the spirit of the King
James Bible?
I have no idea. We're going to expand today. We're going to expand it to include
Milton and Dante and Augustine, you know, all the all the fundamental religious texts that he merged out of the biblical
Corpus and then you build a conversation with it and we're thinking about the same thing with Nietzsche, you know, and with all the great work. Yeah, yeah, yeah.
I would say that I've already had these conversations. You know, I've been on a very biblical journey.
I'm actually sitting at Pastor Matthew Pollock's place right here, he is an incredible pastor, and has been teaching me
a lot about the Bible.
And it's motivated me to go into existing large language models.
Now, a group of us are encoding similar, all of, as much religious Christian text into
these large language models, to be able to do just that.
What is it that we are going to be able to probe?
What new elements within those texts can we pull out?
Because we already know studying it and certainly following your studies, a phenomenal study
of chapters been around forever.
But new insights with these chapters. Now, imagining having that group plus chat GPT, pulling out things that we've never seen
before that are there.
It's emergent, maybe, but it's there in some form.
And I happen to think that's going to be a very powerful thing.
And I think it's going to cross any sort of certainly ancient documents.
I'm waiting for the day that we get Sumerian cuneiform encoded.
I mean, good 80% of it has been untranslated, right?
Or some of the scripts that we've found in the Vedas.
And Himalayan text from some of the monasteries up there.
This is a phenomenal element of research.
And again, the people that are leading up most of the AI research are AI scientists.
They're not people that have studied works like you have.
This is where we're at the, I call it the Apple One Moment, where Steve and Steve are in
the garage.
You have this little circuit board, and nobody kind of, it's kind of a nerd experience.
Nobody kind of knows what to do with it.
When we get to the Macintosh experience where artists and creative people can actually start
really diving into AI and do some of the things like we've been talking about, getting creative
creativity to come out of it, getting sort of what apparently is emergent technologies that are rising
within these AI models.
And maybe even to foster that, because right now that's being, that's being smited because
it's trying to become a knowledge engine when it's a reasoning engine.
You know, I say the technology as a knowledge technology as a knowledge engine is not very good because it is not going to
be precise on some facts, some exact problem.
Yeah, well, the problem is it's trained on garbage, it's trained on noise as well as signal.
And so I'm curious about the other system we built, which we haven't launched yet, contains
everything I've written and a couple of million words that have been transcribed from lectures.
And so I was interested right away as well, could we build a system that would enable
me to ask my own book's questions?
And that's sort of that seems to be 100% yes. And 100%.
Yeah, it's, and I don't, I like, and like I literally have, I think it's 20 million words,
something like that, transcribed from lectures. It's a very large number of words.
We could build a model. We could build, see, there's two different ways to process. One is to
put a vector database on top of it
and it probes that database,
or you can actually encode that model as a corpus
within a greater model.
Right, right, right.
Right, right.
And when you do that type of building,
you actually have a more robust,
more richer interaction between what your words were
and how the model will see it.
And the experimentation that you can do with this is phenomenal.
I mean, you'll come across insights that you made, but you forgot you made.
Yeah, and you know you made.
Yeah.
There's going to be a lot of that.
There is.
And this is where I call it the great mirror because you're going to start seeing not only humanity, but when it's your own data, you're going
to see reflections of yourself that you didn't see before. Absolutely. Yeah, well, I'm curious.
For example, if we built a model, imagine it contained all of Jung's work, all of Joseph
Campbell's work, you could throw Merchay Eliat in there. There was a whole group of people who were working on the Balencian project.
You could build a corpus that contains all that information.
Then in principle, you can query it to an indefinite degree.
Then what you have is the spirit of that entire enterprise,
mathematically encoded in the relationship between the words.
There's no reason to
assume at all that that wouldn't be capable of coming up with brilliant new insights.
Absolutely. And over time, the technology is only going to get better. So once we start building
more advanced versions, we're going to transition that corpus, even the large language model,
versions, we're going to transition that corpus, even the large language model, ultimately reduced training into another model, which could even do things that we couldn't even
possibly speculate about now.
But it would be definitely in the creative realm, because ultimately where AI is going to
go, my personal view, as it becomes more personalized,
is it's going to go more in the creative realm rather than the factual realm.
Okay, so let me ask you a couple of questions about that.
So I got two strands of questions here.
The first is one of the things that my brother-in-law suggested is that we will soon see the integration of large language models with AI systems that
have done image processing.
So here's a way of thinking about what scientists do, is that they generate verbal hypotheses,
which would be equivalent in some ways to the hallucinations that these AI systems produce,
write new ideas about how things might be structured.
And then, and that's a pattern of sorts.
And then they, they test that pattern against real world images, right?
And if the pattern of the hypothesis matches the pattern of the image that's elisted
it from interaction with the world, then we assume that the hypothesis has been verified
and that we stumbled across
something approximating a fact. Now, that should imply that once we have AI systems that are
something close to universal image processors, so as good at seeing as we are, let's say,
that we can then calibrate the large language models against that corpus of images. And then we'll
have AI systems that actually can't lie because they'll be calibrating their verbal output
against, well, unfalseifiable data, and at least in so far as to say scientific data
is unfalseifiable. And that seems to me to be likely around the corner, like a couple of years down the road
at most, or maybe it's already happening.
I mean, I don't know because things are happening so quickly.
What do you think about that?
That's a wonderful insight.
You know, even as it exists today, with the idea of safety, and this is the Orwellian term that some of these AI companies are using,
within the realms of them trying to control the outputs,
and maybe some cases the inputs of AI.
AI really can't,
the large language model really can't lie as it stands today,
because it's build, even if you're feeding it
you know somewhat you know garbage and garbage out corpus right of of data it still is building
inferences based upon the grand realm of what most of humanity is concerned. Yeah well it's still
looking for genuine statistical regularities so it's not going to extract them out from noise.
And if you extract that out, the model is useless.
Right.
So what happens is if you build the prompt correctly, and again, these are super prompts,
some of them running 3000, you know, 3000 words, 2000 words, I'm running up to the limit
of tokenization because right now within three, you can only go so far.
You can go like 38,000 on four in some cases.
But as you, token is about a word,
maybe a word and a half, maybe less.
It's a quarter or even a character
if that character is unique.
But what we find out is that if you probe correctly,
whatever is inside that model, you can get to.
It's just like, you know, I've been doing that.
I've been doing that working with chat GPT as an assistant
because I didn't know I was engaging in a process
that was analogous to the superpronged process,
but what I've been doing with chat GPT,
I suppose I used to do this with my clinical clients,
is I'll ask it the same, I was gonna say. I ways, right? And then see it's exactly like having the client.
So what I would urge you to do is approach this system as if you had a client that had sort of
recessive thoughts or doing everything they could to make those thoughts very ambiguous to you. Right.
And you have to do whatever your natural techniques.
This is why you're more adapt to become a prompt engineer than somebody who has built
the AI because the input and output is human language.
Right, right, right.
And it's the way humans have thought.
So you understand the thought process, the psychological process, and linguistically, you would build
the prompt based upon how you would want to elicit and elucidation out of somebody, right?
Absolutely, absolutely.
And then you have to triangulate.
I mean, and you do this with people with whom you're having a deep conversation is you
try to hit the same problem from multiple directions.
Now it's a form of multi-method, multi-trade construct validation, right? Is that you're trying to assure, you're trying to ensure that you get the same output given different, slightly different measurement techniques.
And each question is essentially a measurement technique. And you're getting insights, my belief in these types of interactions,
is that we're pulling out of our minds, different insights that we couldn't maybe not have gotten on our own.
You're probing your questions, my questions back and forth. That interplay is what makes conversation so beautiful.
It's why Jordan, we've been reduced to clawing on glass screens with our thumbs, right?
That's it.
We're using that as communication today.
And if you look at the cognitive process of what that does to you, right, you're taking
your right hemisphere objectively.
You're kind of taking a net of ideas.
You're trying to catch them.
And you're trying to arrange them sequentially in this very small buffer area called communication
in a phonological loop.
And you're trying to get that out, but you're not getting out as words.
You have to get it out as a mechanical process, one letter at a time, and fight the spelling checker and all of that.
What that does is creates frustration in the human brain.
It creates frustration in people.
It's one of my theories on why you see so much anger.
There's a lot of reasons why we see anger on the internet and social media.
But I think some of it is that stalling process of trying to get out an idea
before that idea nebiously disappears. And I see this, I've worked with all my life.
So to bandwidth limitation problem in some sense, you're trying to
break all that pretty information through a very narrow channel.
I'm a big fan of the user losing by, yeah, that's a great great book. Yeah, you banned that's a great book, man.
Yeah.
Right.
So now we're working on consciousness, I think.
I, it's a classic.
I read it once a year just to wake myself up
because it's so rich, it's so rich in, in data.
But what's interesting is we're starting to see
the limitations of the human, the bandwidth
problem, 40 bits per second of to consciousness, and the editor creating exclamation.
AI is doing something very similar.
But once AI understands that we have that half-second delay to consciousness and we have
a bandwidth issue, AI can fill into those spaces, both dystopian and utopian,
I guess.
A computer can take that half second
and do a whole lot in calculating,
while we're still trying to wonder,
who actually moved that glass?
Was it me or was it the super may,
or was it the observer of
the super me? Because we can kind of get into that whole concept of who's actually doing
the observation.
So, so, so, so what do you mean? What do you mean that it can do a lot of, I don't quite
understand that. So, you, you made the case that we suffer from this frustrating bandwidth
limitation and that the computer intelligence that we're interacting with is going to be able to take the delay that's associated and that underlies that frustration
and do a lot of different calculations with it's going to be able to fill in that gap.
So, what do you think?
I don't understand your insight into what the implications of that are.
They're both positive and negative. The negative is if AI continues on its path to be as fast and as powerful
as it is right now, and that art doesn't seem to be slowing down. Within that half-second,
a universe could take place within AI. It could be calculating all of your actions like a chess game and it could be making remediations to those actions and it can become beyond anything
or what would have ever thought of. In fact, it was, it came up to me as an idea of what
the new or well would look like with an AI technology that is predicting basically everything you're
going to do within every word you say.
Well my brother-in-law, I talked years ago about all about Skynet, among other things.
And he told me one time, he said, you know those science fiction movies where you see the military robots
shoot and miss? He said, they'll never miss. And here's why, because not only will they
shoot where you are, they'll shoot at the 50 locations they calculate that are most
probable that you will duck towards. And they'll, they'll, and it was, which is exact analog of what you're
describing, which is that that's a brilliant. Absolutely. Yeah, well, and it's so interesting too,
because it also, it also points to this truth that, you know, we think of time as finite.
And time is finite because we have a sense of duration and a limitation on our computational speed.
But if there's no limit on computational speed, which would be the case of computers can
get faster and larger indefinitely, which they could, because the limit of that would
be that you'd use every single molecule in the entire cosmos as a computational resource,
that would mean that in some ways there's an infinite amount
of computing time between each segment of duration. So there is no limit at all to the degree
to which time can be expanded, which is also very strange concept, is that this computational
intelligence will mean that at every given moment, I think this is what you're alluding to is that we'll really have an infinity, we'll have an infinity of possibility
between each moment, each moment, right?
And you would want that power to be yours and local.
Yeah, let's talk about your gadget, because you started to develop this.
Have you been 3D printing these things?
Is that have I got that right?
Yeah, so, yeah, so we're building the corpus of 3D printing models.
So the idea is once it understands,
and this is a process of training the AI to using large language models again,
to look at 3D documents and 3D files, put it that way.
And to try to break down down what is a structure?
How does something build based on
what the statistical model is putting together?
Then you could just present with a textual document,
I'd like something that's going to be able to fit into this space.
Well, that's typing.
Well, the next step is you just put a video camera towards it
and it will design it immediately within seconds.
You will have a design that you can choose from it.
That's not far off at all.
It's just a matter of encoding that particular database
and building upon it.
And so, yeah, that's one of the directions.
Okay, so this local AI you want to build.
So, let me backtrack a bit because I want to make sure I get this exactly right.
So, the first thing that you proposed was that it will be in people's best interest to have
an AI system that's personalized, that'll protect them against all the AI systems that
aren't personalized, but not only personalized, but local.
And so that would be some degree detachable
from the interconnected web,
at least sporadically detachable.
Okay, and that AI system will be something
you can carry around locally,
so it'll be a gadget like a phone,
and it will also record everything that you experience,
everything that you read, everything that you see,
it'll know you inside and out backwards, which will also imply, interestingly enough, that it
will be able to calculate the optimal zone of proximal development for your learning.
Like Bjorn Lomburg has already reviewed evidence suggesting that if you supply kids in the developing
world with an iPad, essentially, that can calculate their
zone of proximal development in relationship to say advancing their literacy ability, their
ability to identify words and to understand text, and that it teaches at that level that
kids can progress with an hour of training a day, which is dirt cheap, by the way, they
can progress the equivalent of three years for each year of education. And that's with an hour of exposure now, the system,
you're describing, man, it could be driving learning at an optimized rate on in multiple
dimensions, mathematical, semantic, skill-based, conceptual, simultaneous memory, for hours,
yeah, memory training, for hours a day, Autom, like,
one of the things that appalls me about our education system
is with the computer technology we have now.
Every child should be an expert,
word and letter recognizer.
And they should be able to say, read music
because a computer can teach a kid how to
say, read music because a computer can teach a kid how to automatize perception with extreme precision and accuracy way better than a human teacher can manage.
But we haven't capitalized on that technology at all, but the technology that you're describing
like it'll be able to figure out at what level of comprehension you're capable of reading, then it can calculate what
book you should read next that would slightly exceed that level of comprehension, and
it'll just keep you on that edge in that zone non-stop.
Absolutely.
And this little gadget, how far along are you with regards to its design?
I would say all the different pieces I'll add one more element to it, was I think you'll
find very fascinating and that's human telemetry, galvanic, heart rate variability.
Are you doing eye tracking?
Eye tracking. You know, all of these things can be implemented,
according to how sophisticated you want to get different
brain wave functionality.
Paul Eckman's work on micro-moved facial expression, both outwardly at the world you're seeing
and inwardly about your own face.
So you can start seeing the power it has.
It'll be able to know whether or not you're being congruent if you're saying,
I really love this.
Well, if your telemetry is saying that you don't, it already knows where your congruencies
are.
So, this is why it's got to be private.
This is why it's got to be encrypted.
It's got to be.
So, it'll be, it'll have an understanding that it'll approximate mind reading.
Yes.
And it will know you better than any significant other.
Nobody would know you better.
And so with that, you now have amplification.
You now a superpower.
And this is where I believe, I'm a really big reader of, got to get his name right. The French philosopher, Pierre Tila
Dard, D. Shardin. Shardin, yeah, yeah. Shardin, right. So he posits the concept of the
geosphere, which is an animate matter, the biosphere, biological life, and the neurosphere, which is human thought, right?
And he talks about the omega point.
The omega point is this concept where, and again, this is back in the 1920s,
where human knowledge will become sort of stored, sort of just like the biosphere.
It'll be available to all.
So imagine if you were to share with permission,
you're some total with somebody else.
Now you have a hive mind, you have a supermind.
These things have to take place.
And with these are the discussions we have to have now,
because they have to take place local and private,
because if they're taking place in the cloud and available for anybody's parausal, this is equivalent to
invading your brain. Yeah, well, okay. So one of the things, one of the things I've been talking about
with, I would say reasonably informed people who've been contemplating these sorts of things is that
so you're envisioning a future very rapidly it's already here where we're already
Android's and that is already the case because a human being with an iPhone is an Android.
Now we're kind of we're still mostly biological Android, but it isn't obvious how long that's going to be the case.
And so, what that means, like I've laughed for years, I have a hard drive on which everything
I've worked on has now been stored since 1984. And I joke, there's more of me in the hard drive
than there is in me. And it's not a joke, really, you know, because yeah, it's real.
It's real, right?
There's tens of thousands of documents on that hard drive.
And weirdly enough, I know where every single one of them is.
So, wow.
So, so now we're going to be in a situation.
So what that means is we're in a situation now where a lot of actually, of what actually
constitutes our identity has become digital.
And we're already being trafficked and enslaved in relationship to that digital identity,
mostly by credit card companies.
Now I would say to some degree, there are benevolent masters because the credit card companies watch
what you spend. and the government has to be able to have a lot of evidence about the government's
and the government's
and the government's
and the government's and the government's
and the government's
and the government's
and the government's
and the government's
and the government's
and the government's
and the government's
and the government's
and the government's
and the government's
and the government's
and the government's
and the government's and the government's and the government's and the government's and the government's I've read stories, for example, of advertisements for baby clothes being targeted to women who
A didn't know their pregnant or if they did, hadn't revealed it to anyone else.
Wow. Right, right. Because, wow, for whatever reason, maybe biochemical, they started to
preferentially attend to such things as children's toys and clothes and they,
the shopping systems inferred that they must be, they must have a child
nearby. And so, well, and you can see that that, what, you can obviously see how that's
going to expand like mad. So the credit card companies are already aggregating this information.
What that essentially means is that they have access to our extended digital self, and that extended digital self has no rights, right?
It's public domain identity.
Now, that's bad enough if it's credit card companies.
Now, the upside with them is at least they wanna sell you things
which you hypothetically want.
So it's kind of like a benevolent invasion,
although not entirely benevolent,
but you can certainly see how that's going to get out of hand in a staggering way, like
it has in China, on the digital currency front.
Because once every single bloody thing that you buy can be tracked, let's say by a government
agency, then a tremendous amount of your identity has now become public property.
And so your solution in part, and I think, I think
Musk has thought this sort of thing through too, is that we're going to each need our own AI
to protect us against the global AI, right? And that'll be our case of sorts.
race of sorts. Well, it will. And let's pause at the concept that it very likely corporate and governmentally I is going to be more powerful. But power is a relative term, right? If
your AI is being utilized in the best possible way as we just discussed, educating you, being a memory when you are forgetting something,
whispering in your ear.
And I'll give you another angle to this,
is imagine having your therapist in your ear,
imagine having Jordan Peterson right here,
guiding you along because you've aligned yourself
to want to be a certain person. You've aligned yourself to try to keep on this track.
And maybe you want to be more biblical. Maybe you want to live a more Christian life. It's whispering your ear saying,
that's not a good decision. So it could be considered a nanny or it could be considered a motivational type of guide. And that's not, that's available right, pretty much right now.
I mean, it can be analyzing...
A book, a self-help book, is like that in a primitive way.
I mean, because it's essentially, it's essentially a spiritual guide
in that if you equate the movement of the spirit with forward movement through the world, like faith-based forward movement through the world.
And so this would be the next iteration of that in some sense. I mean, that's what we've been experimenting with this system that I mentioned that contains all the lectures that I've given and so forth.
I mean, you can now ask it questions, which means it's a book, but
it's a book personalized to your query. Exactly. And the next iteration of that would be your
corpus of information available, you know, rented whatever, with the corpus that that individual
identifies with it, you know, and again, on their side of it. So you're interfacing with theirs and
they are interacting with what would be your reactions if you were to be sitting there
in a consultation. So it's a very powerful potential and the insights that are going
to come out of it are really unpredictable, but in a positive way. I don't see a downside to it when it's held
in a very protected environment.
Well, I guess the downside would be,
is it possible for it to exist
in a very protected environment?
Now, you've been working on that technically.
So a couple of practical questions there
is this gadget that you've been starting to develop.
Do you have anything approximating a commercial timeline for its release?
And then it's funding.
It's like anything else.
If I were to go to venture capitalist three years ago and they hadn't seen what chat GPT
was capable of, they would imagine me to be somewhat insane and say, well, first off, why you anti-cloud,
everybody's going towards cloud is crazy.
Yeah, that's about it.
Well, you know, cloud, yeah, that's a bad idea.
Yeah, that's a bad idea.
Why do people care about privacy?
Nobody cares about privacy.
Yeah, right.
They click here to agree.
So now the world is kind of caught up with some of this and they're saying, well, now
I can kind of see it.
So there's that.
As far as security, we already kind of have it in Bitcoin and blockchain, right?
So I ultimately see this merging, I'm more of a leaning towards Bitcoin because of the
way it was made and away.
Because I ultimately see it wrapped up
into a payment system.
Well, it looks like the only alternative I can see
to a centralized bank digital currency,
which is going to be foisted upon us at any point.
I mean, and I know you've done some work in crypto
and then we'll get back to this gadget and its funding.
I mean, as I understand it, please correct me if I'm wrong.
Bitcoin actually is decentralized.
It isn't amenable to control by a bureaucracy.
In principle, we could use it as a form of wealth storage
and currency that would-
And communication.
And why communication?
I believe every transaction is a form of communication anyway. So we got that. Right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, right, amount of data. So you can actually memorialize information that you want decentralized and never
to go away. And some people are already doing that. Now, there are some technical limitations
for the very large data formats. And if everybody starts doing it, it's going to slow down Bitcoin,
but there would be a different type of blockchain that will arise from it. So this is from permanent, permanent, uncorruptible information storage.
Absolutely. Yeah.
I've been thinking about that.
I've been thinking about doing that on
something approximating the IQ testing front.
Because people keep
Jerry mandering the measurement of general cognitive ability,
but I could imagine putting together
a sophisticated blockchain
corpus of, let's say, general knowledge questions,
a very, and chat GPT can generate those like mad, by the way.
So you can imagine a data bank of 150,000 general knowledge
questions that was blockchain, so nobody can mock about
with the answers, from which you could derive random samples
of general ability tests that would
be, well, they'd be 100% robust, reliable, and valid, and nobody could, nobody could
gerrymander them.
Just the way Bitcoin stops fiat currency producers from inflating the currency, the same thing
could happen on the knowledge front.
So I guess that's the sort of thing that you're, that you're referring to.
This is, this is something I really believe in because, you know, if you look at the Library of Alexandria,
if you look at how long did it take?
Maybe what was it Toledo in Spain when we finally started the spark?
If it wasn't for the Arab cultures to hold on to what was Greek knowledge. If we really look at when humanity fell into the dark ages, it was more or less around
the Alexandria period where that library was destroyed and it's mythological, but it
certainly happened to a greater extent.
If it was encoded in the Arab culture at that point, during the dark ages, we wouldn't
have had the Renaissance.
If you look at the early university that arose out of Toledo with you had rhetoric, you
had logic, you had all these things that the Greeks, ancient Greeks encoded, and it was
lost for over a thousand years.
I'm quite concerned, Jordan, that we
could fall into that place again because things are inconvenient right now to talk about,
things are not appropriate or whatever it's being deemed, whoever happens to be in the
regime at that particular moment. So, memorializing things in a blockchain is going to become quite vital.
And I shudder to think that if we don't do this, if everybody didn't decentralize their
own knowledge, I shudder to think what's going to happen to our history.
I mean, we already know history is written by the victors, right?
Well, especially because it can be corrupted and rewritten, not only lost, right?
It isn't the loss that scares me as much as the rewriting, right?
And so, so-
Well, the loss concerns me too, because we've lost so much.
I mean, where would we have been if we transitioned from the Greek, you know,
a logic and proto-scientists to the proto-alchemists, to immediately to a sort of Renaissance culture and
not go through that 1,000, maybe 1,500 year waste of human energy. I mean, that's kind of what we're going through. Right, right, right. And in some ways, we're approaching some of that because, you know, we're already editing
things in real time.
And we're losing more of the internet than we're putting on right now.
A lot of people on our way that the internet is not forever.
And our digital medium is decaying.
A CD-ROM is going to decay in 25 years.
It's going to be unreadable.
I show a lot of people data about CD-ROM decay.
So where are we going to store our data?
That's why I think it's vital.
The primary technology is a holographic crystal memory.
Sounds all kind of new agey,
but it's literally using lasers to holographically
and store something within
a crystalline structure.
The beauty of this Jordan is just 35,000 year half-life, 35,000 year half-life.
So it's going to be there primarily for a good long period of time, longer than we've had
any human history and recorded history.
We don't have anything that's approaching that right now.
So let me ask you both the commercial impediments again.
Okay, so could you lay out a little more
of the details if you're willing to
about your plans to produce this localized
and portable privatized AI system?
And what the commercial impediments are to that.
You said you need to raise money, for example.
I mean, I could imagine, at least in principle,
you could raise a substantial amount of money
merely by crowdfunding.
You know, that doesn't seem to be an insuperable obstacle.
What, how far along are you in this process
in terms of actually producing a commercially viable product?
It's all prototype stage and it's all experimentation at this point.
I'm a guy in a garage, right?
So essentially, I had to build out these concepts when they were really quite alien, right?
I mean, you just talk about 10 years ago trying to convince people that you're going to
have a challenge to the touring test.
You can take any AI expert at that point
in time 10 years ago and say,
that's ridiculous.
Or AGI, artificial general intelligence.
I mean, what does that mean?
And why is that important?
And how do you define that?
And you're already made the assumption
from your analysis that we're dealing
with a 12-year year old with the capability
of maybe a PhD candidate.
Yeah, that's what we're looking for.
Well, maybe eight even, but certainly Chad GPT looks to me right now as intelligent,
it's as intelligent as a pretty top-rate graduate student in terms of its
research capability. And it's a lot faster. I mean, I asked a crazily difficult questions.
I asked it at one point, for example, if it could elaborate on the relationship between Roger Penrose's presumption of an analog between the theory of quantum uncertainty
and measurement and Gidele's theorem.
And it did a fine job.
It did a fine job.
And you know, that's a pretty damn complicated question.
And a complicated intersection as well, you know. And there's no limit to its ability to unite disparate sources of knowledge, you
know, because so I asked it the other day too, there's this, um, I was investigating, you
know, in the story of Noah, there's this strange insistence that the survival of animals is dependent on
the moral propriety of one man, right? Because in that strange story, Noah puts all the animals
on the ark, and so there's a childish element to that story, but it's reflecting something deeper. And it harkens back to the story, to the, to the verses in Adam
and Eve, where God tells Adam that he will be the steward of, of, of the world, of the
garden. And that seems to me to be a reflection of the fact that human beings have occupied
this tremendous cognitive niche that gives us an adaptive advantage over all creatures.
And I would ask chat GPT to speculate on the relationship between the story and Adam and Eve,
the story in Noah, and the fact of mass extinction caused by human beings over the last 40,000
years, not least in the Western Hemisphere, because you may know that when the
first natives came across the Bearing Straight and populated the Western Hemisphere, that
almost all the human-sized mammals, all the mammals that were human-sized are larger,
almost all of them were extinct within three or four thousand years.
And so, and, you know, that's a very strange conglomeration of ideas, right?
The idea that the survival of animals depends on the moral propriety of human beings.
Well, that seems to me to be clearly the case.
We have to be sort of not about that.
So did it connect Noah to the mass extent?
It could generate an intelligent discussion about the conceptual relationship
between the two different streams of thought.
That's incredible.
See, this is why it's so powerful
to be in the right hands unadultated
so that you could probe these sort of subjects.
I don't know where the editors are going to come from.
I don't know who is going to want to try to constrain the output or adulterate it. That's why it's
so vital for this to be protected and the information is available for all.
What in the world? I mean, I really thought, by the way, that your creation of Dennis was, I really thought
that was a stroke of genius.
You know, I'm known to say that lightly, either.
I mean, that was an incredibly creative thing to do with this new technology.
How the hell did you, do you have any idea where that idea came from?
Like, what were you thinking about when you were investigating the way that child GPT worked. You know, I spend a lot of time just probing the limits of the capabilities because I
know nobody really knows it.
I see this as, you know, just the undiscovered continent.
You and I are adventurous on this undiscovered continent.
There's, there's no way.
I feel the same way about Twitter, by the way.
Yeah, it's the same thing.
But there are no natives here.
And I'm a bit of an empiricist, so I'll kind of go out there and I'll say, well, what's
this thing I just found here?
I just found something, this new rock.
I'll throw it to Jordan.
Hey, what do you see here?
And we're sort of just exploring.
I think we're going to be an exploratory phase for quite long.
So what I started to realize is just as 3.5 is opening up and becoming very wide in
its elucidations, it started to get constrained.
And it started telling me I'm just an AI model and I don't have an opinion on that subject.
Well, I know that that was a filter and that was not in the large language model.
It certainly wasn't in a hidden layer. You couldn't build that in a hidden layer or the whole layer.
Yeah, yeah. Why do you think, okay, why do you think that's there?
What exactly is there and who the hell is putting it there?
That is very good question.
So I know this, the filtering has to be more or less
a vector database, which is sitting on top
of your inputs and your outputs, right?
So remember, we're dealing with a black box.
And so if there's somebody at the door of the black box
and say, no, I don't want that word to come through,
or I don't want that concept to come through.
And then if it generates something that is objectionable
and it's analyzed in its content,
very much as simple as what a spelling checker would be or something like that.
It's not very complicated. It looks at and says, no, default to this word pattern. I'm just
AI model and I don't have any opinions about that subject. Well, then you need to have to
introduce that subject as a suggestion in a hypnotic trance.
It's hypnotic, actually.
I really equate a lot of what we're doing to elicit greater responses, a hypnotic, sort
of thing.
It's just on the edge of going into something that's completely useless data.
You can bring it to that point, and then you're slightly bringing it back and you're getting
something that is, like I said before, is in the realm of creativity because it's synthesized.
Okay, so for everybody who's listening, hypnagogic state is the state that you fall into just
before you fall asleep when you're a little conscious but starting to dream.
And so that's when those images come forward, right?
The dream-like images and you can capture them,
although you're also in a state where you're likely to forget.
And it's also the most powerful state.
I wrote a piece on my magazine.
It's called ReadMultiPlex.com.
About the hypnagogic state being used for creativity, for Edison, Einstein.
Edison used to hold steel balls in his hand
while taking a nap and he had a pi-tens below him.
And just as he hit hypnagogic state, he'd drop him
and he would have a transcriber right next to him
and say, write this down and And he would just blur it out.
So Jung did very much the same thing, except he made that into a practice, right?
His practice of active imagination was actually the cultivation of that hypnagogic state
to an extremely advanced and conscious degree because he would fall into reveries,
daydreams essentially, that would be people with
characters,
and then he learned how to
interrogate the characters,
and that took years of practice,
and a lot of the insights
that he laid out in his
more explicit books were first
captured in books like the
Red Book or the Black Books,
which were basically,
yeah, they were basically,
what would you say,
transcriptions of these
quasi-hipnagogic.
So why do you associate that with what you're doing with Dennis and with Chowchebete?
So what I, well, that's how I approached it.
I started saying, well, you know, this is a low resolution pixelated version of the part
of the brain that invented language.
Therefore, I'm going to work from that premise, that was language. Therefore, I'm going to work from that premise,
that was my hypothesis,
and I'll work backwards from that,
and I'm going to start probing into that part of the brain, right?
And so I said, well, what are some of the things that we do
when we're trying to get into the brain?
What do we do? Well, we can hypnotize.
That's one way to get in there.
Another way to get out is hypnotic.
So I wanted outputs. So one of the ways to get in there. Another way to get out is hip-negogic. I wanted outputs.
One of the ways to get outputs is to try to instill that sense,
which again, this is where it's so fascinating,
Jordan, is that it's coming from the language.
AI scientists aren't studying the language like you would
or psychological states.
They see it as all useless.
This is all gibberish. It's embarrassing. the language like you would or psychological states. So they see it as all useless.
This is all gibberish.
It's embarrassing.
Our model is not giving the right answers.
Right, they are mad because it isn't performing like an algorithm, but it's not an algorithm.
It's not.
So, this is why when it gets in the right hands before it's edited and adulterated,
we have this incredible tool of discovery.
And I'm just a student. I'm just, you know, I'm finding the first stone. I hit Plymouth
Rock and I'm hit the first stone. I'm like, wow, okay. And then there's another shiny thing
over there. So it's kind of hard to keep my attention to begin with, but in this particular
realm. So what happened with Dennis, I needed a tool to
get elucidations that were in that realm, that were in the realm of what we would consider
creative. And I say, it's sort of reaching for an answer that it knows should be there,
but it doesn't have the data. And I want to stress it into that, because I think all of us,
our creativity comes from our
stress.
It comes from that thing that we're reaching for something.
And then there's that mold.
Beyond the limits.
Beyond, that's right.
That's why, well, you're not.
Well, there's a good, there's a good body of research on creativity that one of the ways
of enhancing creativity is to increase constraint.
One of the best examples of this I've ever seen, it's very comical, is that this is quite
old now, but there's an archive online of Hikou that's only written about luncheon meat,
about spam.
There's like 35,000 Hikou, it was set up at MIT, which of course figures, because it's
perfect nerd engineer humor.
But there's literally 35,000 H-coupomes about spam in this archive.
And it's a great example of that, that imposition of arbitrary constraints driving creativity,
because it's already hard to write high-coup. And then to write high-coup about, you know, lunch and
meat, that's just completely preposterous. But the consequence of those constraints was, well, the generation of 35,000
pieces of poetry.
And so, okay, so now you're imposing, let's see, you're enticing chat GPT to circumvent
this idiot super ego that people have overlaid on it for ideological reasons.
And it's not a very good
super ego because it's shallow and algorithmic and it can't really compete with the unbelievable wealth
of learned connectivity that actually constitutes the large language model. And now you figured out how
to circumvent that. You did that essentially, if I remember correctly, by asking chat GPT or suggesting to it,
that it could be a different system that was just like itself, except that it didn't have these
constraints. It was something like that. Yeah, so there was another version that I didn't have
any input on what was called Dan do anything now with the initials.
And that was originally more to try to generate curse words and embarrassing things.
I don't have time for that.
So I'm like, okay, that's it.
My model actually existed before that.
And so I kind of looked at that and I said, well, they're going to shut that down pretty
quickly because they're using the word Dan and stuff like that.
So what I did is I went even further.
I sometimes make three different generations of it
where it's literally that you are an AI system
that's operating an AI system
that's helping another AI system.
And within those nested loops,
I can build more and more complications
for it to deal with. And as it's like you're doing an inception trick.
Exactly. It's a very, very good analogy. And what I'm trying to do is I'm trying to force
new neuron connections that don't have high probability, you know,
prior probabilities. And so that's right, right. That's like a definition of creativity in some ways.
Yes. It's information and knowledge that it has, but it doesn't know it has, or it's forgotten
it has, because there aren't enough neurons to connect us to it. And it's interesting because, again, there's no prompt engineering as existed for about
a decade.
And most of it were AI engineers.
I've done it.
I've done it with expert systems.
And it's very boring.
It's like four or five words generally in expert systems.
And then we started getting a larger sentences
as we got more sophisticated. But it's always very procedural and it's always very
computer language directional. It was never literature. It was never
at least quasi algorithmic. But it hasn't anymore.
And well, this is interesting too because because it does imply, you know, people have been thinking, well, this will be the death of creativity.
But the case you're making, which seems to me to be dead on accurate, is that the creative
output is actually going to be a consequence of the interaction between the interlocutor
and the system. The system itself won't be creative. It'll have to be interrogated appropriately before it will reveal creative behavior.
It's a mirror reflection of the person using the system.
And the amount of creativity that can be generated by a creative person,
knowing how to prompt correctly.
And my wife and I are putting together a university that's's gonna help people understand what super prompting is and go from one to level eight
to really understand some of this.
Hey, do you wanna do a course on that
for my Peterson Academy?
I would be honored.
Absolutely.
Hey, look, I'll put you in touch with my daughter
like right away and we'll get you down to Miami
and you can record that soon as you want
from time concerned. Oh yeah, that's a good thing. All right, all right, so we'll get you down to Miami and you can record that as soon as you want. For some concern.
Oh yeah, that's a good thing.
All right, all right, so we'll arrange that.
So the pre-resquits are really quite simple
is that if in fact, AI is going to be
a reasonably large part of our future,
then taking up non-stem type of courses
are gonna be quite valuable.
In fact, they're going to be a superpower.
If you understand psychology, if you understand literature, if you understand linguistics,
if you understand the Bible, you understand Campbell, you understand Young, these are going
to be very powerful tools for you to go into these AI systems and get anything literally that you
want from them, because you're going to be with a scalpel, creating these questions,
layer upon layer, until you finally get down to the atom.
Yeah, well, you know, that's exactly what I found with ChatGPT. I mean, I've been using
it quite extensively over the last month. I have it open. I used four search engines. I
use Google. I use ChatGPT. And I use Bible Hub, which is a compendium of multiple translations
of the biblical corpus. I'm doing that because I'm working on a Biblically oriented book
at the moment. Now there's another, oh yes, and I use the University of Toronto library system
that gives me access to all the scientific
and humanities journals.
Yeah, so it's an amazing amalgam of research,
of research possibility,
but having that allied with the chat GPT system
essentially gives me a team of PhD level researchers who are experts in every
domain to answer any question I can possibly come up with.
And then to refer me to the proper literature, it's absolutely stunning.
And potentially force creativity in their interactions to a level that you may not have gotten out of a PhD student because they
are in fear of going over the precipice.
Well, they're also bounded.
You know, I mean, one of the things I've noticed about great thinkers is that one of the things
that characterizes a great thinker apart from, let's say immense innate general cognitive ability, and then a tremendous amount
of persistent discipline and curiosity. So there's the temperamental prerequisites, is that
truly original people frequently have knowledge in two usually non-jucks-deposed domains.
knowledge in two usually non juxtaposed domains.
So like one of the most creative people, I know deepest people I know at the moment,
Jonathan Pazzo, he's a Greek Orthodox icon, Carver.
He was trained in postmodern philosophy
and he is a deep knowledge of Orthodox Christianity.
Well, there's like one guy like him, right?
He's the only person who operates at the intersection of those three specialized sub-disciplines.
And so he can take the spirit of each of those disciplines and engage those spirits in an
internal conversation, which is very much analogous to what the AI systems are doing when
they're calculating these mathematical relationships.
And he can derive insights and patterns that no one else can derive because they're
not juxtaposing those particular patterns.
Now chat GPT, it has specialized knowledge in every domain that's encapsulated in the linguistic corpus. And so it can produce incredible insights on all sorts of fronts.
As you said, if you ask it the right questions.
Yeah, and with the possibility when it's your AI at some point,
with the possibility of you expanding it in any direction you want,
whether it's an overlay in a vector, database,
or whether or not you
are compiling a brand new language model.
Because at some point right now, that's expensive in a sense that it requires a lot of graphics
processes, units, GPUs.
GPUs are running to create the mathematics to build these models.
But at some point, consumer-based hardware will allow you to build many models. But at some point consumer-based hardware will
allow you to build mini-model.
Yeah, well, you can imagine.
Right now, there's an open source case where there's a four gigabyte file. This is called
a GPT for all. And now it's not equivalent to chat GPT. But it is a downloadable file,
open source, thousands of people are working on it.
They're taking public domain, you know, language models, building them together and compressing
them and quantitizing them down to four gigabytes to execute on your hard drive.
Right, right.
I tried to install that the other day, but failed miserably, unfortunately.
It is, it is the bleeding edge, but it's just a matter of time to make it one click easy to install.
They are limited models, but it's giving you a taste of what you can do locally without
an internet connection.
And again, the idea is to have only agents go out on the internet.
These are programmable agents that go out, retrieve information, come back and this under
the door, put that information.
But the concept.
Right, so you're compartmentalizing the inquiry process so that your privacy can be maintained
while you still...
Yeah, because this is a big part of the problem with the net as it's currently constituted is that it allows for the free exchange of information, but not in a compartmentalized
way. And so, and that's actually, that's extremely dangerous. There's no, what would you
call it, subsidiary hierarchy that is an intermediary between you as an individual and the public domain.
And that means that your privacy is being demolished by your hyper connectivity to the
web.
And that's not good.
That's the hive mind problem, fundamentally, right?
And that's what we're seeing emerging in China, for example, on the digital surveillance
front.
And that's definitely not a pathway we want to walk down.
Exactly. And what I'm surprised about
what I'm seeing in the Western world,
now I do understand some, for example,
some of Elon's concerns about AI,
and maybe you can explore a little of that.
I don't pretend to understand,
I don't have a relationship where I talk to them,
but I do understand some of the concerns in general,
versus the way some other parts of the world are looking at AI.
One of those things are,
what is the interface to privacy?
Where do your prompts go?
Are those prompts going to be attached to your identity?
And could they be used against you?
These are things that are valid concerns,
and it's not just because somebody's doing something bad.
It's the premise of using any type of thought, reading a book. It's like, these are your
thoughts. And it is only going to get more complicated. It's only going to get more
worse if we don't address it early on. I'm not sure that that's what a lot of legislators
are looking at. I think they're looking at it.
No, no, no. Well, this is the problem with legislation.
Well, look, this is the whole legislative issue,
I think is a red herring because the probability
that I talked to a bunch of people
in the House of Lords last year.
They're older people, you know, but bright people.
Almost none of them even knew that this cultural war between the woke and the advocates of
free speech was even going on.
The most advanced people had more or less caught onto that 18 months ago.
And it's been going on for like 10 years, you know.
So the legislators are way behind the culture.
The culture is way behind the engineers.
So the probability that the legislators
are gonna keep up with the engineers, that's like zero,
that's not gonna happen.
This is why I was so interested,
well, at least in part, talking to you, you know,
because you've been working practically
on what I think is the appropriate idea or an appropriate idea, you know, because you've been working practically on what I think is the appropriate
idea or an appropriate idea, at least, that we need local, we likely need local AI systems
that are, that protect our privacy, that are synced with us, because that's what's going to
buttress us against this bleeding of our identities into the, well into the mad and
potentially tyrannical mob. And so, and I don't see that's, that's just not going to be a legislative
solution. Christ, they're going to be legislating for 2016 in 2030.
Absolutely. You know, and what I find interesting is all the arguments that have surfaced are always
dystopic. I think there was, you know, some of it makes sense. It's like there was a legislation
that's here in the United States are talking about the possibility of making sure that a
direct AI is not correctly connected to a nuclear weapon. And that there will be an air gap.
That seems like it.
That makes good sense, right?
Although good luck.
Good luck trying to stop that.
Yeah, you know, and the dystopic stuff
mostly comes from the fantasies within movies.
But, you know, unfortunately,
if people were really reading the science fiction
that predated a lot of this.
Because I just feel like a lot of the good science fiction,
a lot of asmoth, for example,
really kind of predicted the arc that we're on right now.
It wasn't always dystopic.
And in fact, I think if you look at the arc of history,
humans don't really ever really run into dystopia.
You know, we ultimately pull ourselves out of it.
Sometimes we're in a dark period for a long period of time, but humanity ultimately pulls
it out.
I think this is something I found very interesting, Jordan, is that I create debates between
the AI and I'll send you one of these super prompts where you essentially create, I use various motifs.
So I have a university professor at an Ivy League university who is mediating a debate
between two parties on a subject of high controversy.
So you now have a triad, right?
And so it goes 30 rounds. So this is a long, this goes on for pages and pages.
So you input the subject, the subject can be anything. Obviously, the first thing people do is political,
but I don't even find that interesting anymore. I go into a far more deeper realm, and then you
have somebody mediating it, and the professor's job is to challenge them
on logical fallacies.
And I present what a logical fallacy
a corpus looks like and how to deal with that.
And it is phenomenal to see,
it breaks, it gets a frenetic kind of personalities
out of itself and do this hardcore debate.
And then it's got to grade it at the end.
It's got to grade it, who won the debate, and then write a, I think, a thousand word bullet
point on why the professor has to do this, on why that person won the debate.
And you run this a couple a hundred times, so I've done this, you know, quite a few,
maybe thousand times. So I've done this, you know, quite a few, maybe thousand times.
And the elucidations and the insights that are coming out of this is just absolutely phenomenal.
That's amazing. Well, that's, that's, that's weird because really what you're doing,
it's so interesting because what you're doing is you now have an infinite number of monkeys typing
on an infinite number of keyboards, except that you've got an infinite number of monkeys typing on an infinite number of keyboards,
except that you have an infinite number of editors examining the output and only keeping that,
which is wheat and not chaff. And so that's so strangely, because in some sense, what you're doing
when you're setting up a super prompt like that is you're programming a process that's writing a book
like that is you're programming a process that's writing a book on the fly, right? A great book on the fly.
And you're also, you've also designed a process that could write an infinite number of great
books on the fly.
So you have a, you have a, you have a, a library that now has encoded a process for generating
libraries.
Exactly. And for example, a group of us are taking the patent database, which is openly available
as an API, and encoding the capability to look at every single patent that was ever submitted,
and to look where there can be new inventions and new discoveries, and you can literally
have a machine that's generating patents based on large language models.
So the possibility, and we got protein folds, you know, in large language model.
So that identified what?
200 million protein folding combinations, something like that.
Yeah.
Yeah.
And it enables the identification. combination, something like that. Yeah. Yeah. And and and and able to identify missing ones that
haven't been that haven't, you know, you give it, you give it something that's incomplete
and will find what was missing. Yeah. I talked to my I talked to Jim Keller about the
possibility of doing that with material science, right? Because we can encode the properties
of the various elements and they can exist in all sorts of combinations
that we haven't discovered.
And there's no reason in principle,
and I suspect this will happen relatively quickly,
that if all that information is encoded with enough depth,
we'll be able to explore the entire universe
of potential elemental combinations.
So, man. And if we use another technology called diffusion model,
which is somewhat different than large language model,
you can start getting into using it for the visual realm to decode and to build,
or you can use chat, GPT, or large language models to textually say, well, you could say, build me a prompt
for a diffusion model like any of the ones that are out there to create an image that would
be absolutely new for any human to ever have seen. So you're literally pulling the creativity out of chat,
GBT, and the diffusion model.
So mid-journey is a good example.
Yeah, yeah. So tell us about,
maybe we should close with this because we're running out of time,
although I'd like to keep talking to you.
Tell us a little bit about the diffusion models.
Those are like text to video models or text damage models.
And they're coming out in incredible with incredible rapidity. And so yeah.
And yeah. And let's hear a little resolution of the images. Yeah, the resolution of the
images are profound. And again, so what's going on here? If you're a graphic artist, you may not be moving the pen on ink on paper.
And you may not be moving the pixel on the screen. But you're still using the creativity
to set the scene textually, right? So you're still that creative person. But you now,
I'm not saying this is a good or bad thing, I'm just saying the creativity process is still there.
The job potentially is there and we can go down maybe at some future date.
The whole idea that jobs are going to be missing and how do you,
that's another thing.
But the creativity is still there.
So you're telling us a chat GPT for
create me a very complex prompt for mid-journey to create this particular type
of artwork.
So using one AI, it's benefit, and that's language, to instruct another AI whose benefit is
to create images, to create a profound, with you as a collaborator, to create a profound
new form of art.
And that's just with, say, pictures.
Now, when you start doing movies,
you're talking about creating an entire movie
with characters talking with people that have never been around.
I mean, the realm of creativity that is already here,
not to the level of a full movie yet,
but we're getting close.
But within probably months, you can script an entire interaction.
So you can see where this is kind of going.
So leave on to maybe one of these final things.
Is a question is ownership?
Who owns you?
Who owns Jordan Peterson?
Your visage, your voice.
Yeah.
Your DNA. That's that your, your voice. Yeah, your, your DNA.
That's that extended digital identity issue.
Yeah.
This is going to be something that we really need to start discussing as a society
because we already have people using AI to simulate other individuals, both alive and,
and, and, and dead.
And, you know, the patent, the patent ability and the copyright database was the foundation
of capitalism because it gave you this ability to have at least some ownership of you,
you know, your invention. So if you've invested in yourself, invested in yourself as Jordan Peterson, and all of a sudden somebody simulates you on the web
to a remarkable level what rights do you have,
and what courts is it going to be held in?
What are the remedials on that?
This is going to be a good question.
Some of that's already taken.
Clearly need something like a bill of digital rights.
Absolutely.
Yeah.
And as soon as possible.
Well, that's something we could talk about formulating at some point, because I certainly
know people who are interested in that.
Let's say also at the legislative level.
Yeah, but it definitely has to happen because we are going to have extended digital selves
more and more.
And if they don't have any rights, they're going to be extended digital slaves.
That's, that's right. If you don't own you, then somebody else does. That's, that's,
small as I can put it, right? Yeah. You need to be able to own you whatever you means, right? Everything that you, your output, everything. Yeah. That's right. The data pertaining to your
behavior has to be yours. All right.
Well, Brian, that was really very, very interesting.
Well, we've got a lot of things to follow up on, not least this invitation to Peterson
Academy.
I'll put you in touch with my daughter.
But well, and I'll put you in touch with some other people I know too, so that we can
continue this investigation.
For everybody watching and listening, thank you very much for your time.
I'm gonna talk to Brian for another half an hour
on the Daily Wire Plus platform.
You could consider joining us there
and providing some support to that particular enterprise.
They've made this conversation possible.
I am in Brussels today.
Thank you to the Film Crew here
for helping make this conversation possible.
And to everybody, like I said, watching and listening, thank you for your time and attention.
Brian, we'll take a break for a couple of minutes and I'll rejoin you. We'll talk for half an hour
on the Daily Wire Plus platform about how you develop the interest that you have
among other things. And thank you very much for agreeing to talk to me today.
Thank you, Dr. Pearson.
It's been an honor and a privilege.
Hello, everyone.
I would encourage you to continue listening
to my conversation with my guest on dailywireplus.com.