Motley Fool Money - The Future of AI and The Nature of Consciousness
Episode Date: January 18, 2025

There are more potential moves on a Go board than there are atoms in the universe; the game is universally considered to be one of the most complex played by humans. And yet, an AI computer program can play it perfectly. What does that mean for humanity? Terry Sejnowski is the Francis Crick Chair at the Salk Institute for Biological Studies, a Distinguished Professor at the University of California, San Diego, and author of the book "ChatGPT and the Future of AI." Ricky Mulvey caught up with Sejnowski for a conversation about: - How chatbots work. - Mapping large neural models. - What a self-aware parrot can teach us about human consciousness. Premium Motley Fool members can catch replays from this week's AI Summit here: https://www.fool.com/premium/4056/coverage/2025/01/15/ai-summit-replay To become a premium Motley Fool member, go to www.fool.com/signup Host: Ricky Mulvey Guest: Terrence Sejnowski Producer: Mary Long Engineer: Rick Engdahl Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
This episode is brought to you by Indeed.
Stop waiting around for the perfect candidate.
Instead, use Indeed sponsored jobs to find the right people with the right skills fast.
It's a simple way to make sure your listing is the first thing candidates see.
According to Indeed data, sponsored jobs have four times more applicants than non-sponsored jobs.
So go build your dream team today with Indeed.
Get a $75 sponsored job credit at Indeed.com slash podcast.
Terms and conditions apply.
And so the question is, if we take that data and then use these tools now that we have, these AI tools to download that data into a large neural model, can we now understand how that brain is able to solve these tasks in a way that wasn't possible by just looking at the activity patterns, which are areas of the brain that just glow and it's not
really telling you a lot about how they interact with each other, right? But we can do that now,
and we'll figure out how the different parts of the brain interact with each other.
I'm Mary Long, and that's Terry Sejnowski. He's the Francis Crick Chair at the Salk Institute
for Biological Studies and a distinguished professor at the University of California, San Diego. His latest
book is ChatGPT and the Future of AI. If you're a regular listener of Motley Fool Money,
you've probably heard us talk a fair bit about artificial intelligence. If you're new to
the show, I'd still wager that you've at least heard about ChatGPT. But what exactly are
large language models? How do they work? How do they remember and reason? And if they're so
good at human tasks, what actually makes them different from us? My colleague, Ricky Mulvey,
caught up with Sejnowski for a conversation about how chatbots work, graduating from large
language models to large neural models, and the nature of consciousness.
One of the major themes of your book is how AI researchers are learning from brains and how neuroscientists
are learning from large language models. One thing I think many listeners want to know, though,
is, is AI going to take my job? We have a lot of knowledge workers who listen to this podcast,
and I think it's a real worry when it can, you know, perform a lot of analysis, maybe a little better
than us humans can. As you've looked into these models, what's your advice to those folks worried about
that? Well, first of all, this is my second book. My first book was published in 2018,
by MIT Press, on the deep learning revolution. And this started it all. Large language models
are just a particular architecture called the Transformer, which has allowed us to actually
create what's called generative AI. But now, you know, what did I say in that first book?
And this was now, we're talking about, you know, six years ago. What I said is that you,
you shouldn't be worried that you're going to lose your job, but your job's going to change.
And that AI is going to make you smarter.
Now, six years later, the data is in: we now have a lot of people using ChatGPT.
I didn't, of course, anticipate that we would have these chatbots.
But now these chatbots are actually being used routinely by many, many people who have to deal with language.
obviously. By the way, scientists to help them write better papers, ad agencies I hear are using it
extensively. Just about anybody, anybody who's out there who needs to improve their ability to
project their message to their friends, their colleagues, you know, the public. Now, you know,
there are people who, you know, whose jobs are going to change. And what does that mean? That means
they need new skills.
And a particularly important skill is how to use these AI tools.
And of course, you know, these tools are ones that are very powerful.
But if you don't use them properly, you may not get the performance out of them that you expect.
Yesterday, I was getting dinner with a friend of mine who's an occupational therapist.
And she uses ChatGPT to essentially take scraps of notes,
incomplete sentences that she writes down as she's working with, like, a kid learning to use
fine motor functions or trying to like experience sensory things like going through a tunnel and
why that's good for this kid's development. And so what she does is she writes her scraps of
notes from the session and then puts them into ChatGPT and it's able to produce pretty close
to a clinical note after that. And she goes through it to make sure it's accurate in all of that.
but she had a question for me that I thought actually would be a good question for you.
She said, this is very effective and I'm impressed with how I'm able to use this.
But what memory is it basing this off of?
Is this every single person who's entered a clinical note in here before?
Is it weighing what I've put in here differently?
How does it know to take basically my thought scrap idea into a more fully formed clinical note?
you know, this is a very, very interesting topic, and I do have a whole chapter of my book about it.
And it has to do with two things.
It has to do with the fact that these large language models, ChatGPT in particular, have access to a huge database.
I mean, in fact, you know, scaling was the big deal for the last two years.
The bigger, the better.
As they get more data, they get better at being able to generalize.
And then, as for this friend of yours who has clinical notes: probably somewhere in the vast data set there are a lot of clinical notes, right? Somewhere.
Maybe not specific to particular people.
But, you know, medical textbooks and things like that.
Now, to answer your first question, no, it does not have a memory.
A specific memory about you, even if you use it every day,
it doesn't remember one day to the next what you discussed. And that's one of the differences I've
pointed out in my book is that unlike humans, you know, we can remember the past, maybe not
very well, but we nonetheless can build on what we've learned in the past and learn what's called
lifelong learning, continue to learn. The large language models are taught once at the very
beginning, pre-trained, it's called. That's the P in ChatGPT. And then later it does the response,
that's very fast. It's amazingly fast. You press the button and you get the answer. You get a whole
page in a couple of seconds, right? So it's not capable of learning new things. However, and this is a mystery,
that researchers still haven't completely figured out: there's something called in-context learning.
It's not learning in the sense that it's changing any of the weights inside the network, you know,
the part that has to do with memory. Rather, it has to do with the dialogue that you have:
it actually can improve its response.
In other words, as it learns more about your question and about you,
it can then hone down and come up with better answers or better completions.
So that's a very, very intriguing fact, which is similar to humans.
We have a dialogue.
Maybe when I start out, I don't understand exactly what you're asking,
but questions and answers back and forth can help me zoom in
on what it is that you need to know, or that, you know, might make for an interesting discussion.
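The in-context learning just described can be sketched in a few lines of code. This is a toy, assumption-laden caricature (the class and variable names are hypothetical, and the real mechanism involves attention over the context window): the "weights" are frozen after pre-training, but each reply is conditioned on the accumulated conversation, and nothing carries over to a new session.

```python
# Toy illustration of in-context learning: no weight ever changes, but
# replies are conditioned on the accumulated dialogue, so they can get
# more specific within a session. Nothing persists across sessions.

class FrozenChatbot:
    def __init__(self):
        self.weights = {"fixed": True}  # set once at pre-training, never updated
        self.context = []               # grows only within one conversation

    def reply(self, user_message):
        self.context.append(user_message)
        # The answer depends on how much context has accumulated,
        # even though self.weights is untouched.
        return f"answer conditioned on {len(self.context)} turn(s) of context"

    def new_session(self):
        self.context = []               # a fresh session starts from scratch

bot = FrozenChatbot()
first = bot.reply("what should a clinical note contain?")
second = bot.reply("make it about fine motor skills")
bot.new_session()
fresh = bot.reply("what should a clinical note contain?")
```

Within one session the second reply draws on two turns of context; after `new_session()` the model is back to square one, which is the "doesn't remember one day to the next" behavior described above.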
So that's, that's what's happening. Now, there is something else that is relative to your
story, which is that right now, when you go to a doctor's office and then sit down and start
talking to the doctor, the doctor isn't looking at you, he's looking at the computer. Why is that?
Well, the doctor has to get all his notes into the computer.
As you say what your problem is, what you're feeling, and what the issues are, he's typing that in.
He's not looking at you, right?
And you may have 20 minutes, and he spends most of the time punching into the computer.
And that's not very satisfying.
It's not satisfying for you, not really interacting with him as a human.
You're interacting with him as somebody who's compiling notes.
The second thing, which is even more problematic, is that the doctor at the end will give some instructions about maybe what you need to do and what kind of drugs to take and so forth.
And the human at most comes up with maybe a scrap of paper about the prescription, but doesn't really remember all the details, right?
And so here's what's happened now, right?
Well, we know that ChatGPT is perfectly good at two things.
One is being able to do speech recognition to come up with a text from your discussion.
So now the doctor can look at the patient and have this great discussion.
And the doctor can learn a lot by looking at the patient.
The doctor can see the face and the expressions and so forth.
And all of that carries important information.
Some of it's subliminal in the sense that, you know, you don't necessarily know your brain is taking it in and using it to make a diagnosis.
But now, the beauty is that the doctor presses a button, and out comes a summary of the discussion.
And just like your friend, you know, you can go through, a doctor can go through very quickly and fix it if there's a problem.
But now the patient has something to take home with them, you know, which is a detailed summary and all the instructions in case they forgot any details.
So it's going to completely change the way that doctors and patients interact with each
other. And this is one of many, many examples, use cases that have come up and continue to come
up in almost every profession. There's a lot we don't understand about these large language models.
And you said basically there's reasoning machines, which I think we can think of in terms of
our brain. And then there's language models. I don't understand the difference, especially when
there's the case of AI playing the game Go, where it beat the best humans.
It seems to me that that would be reasoning if these machines are able to play games quite well.
So I guess why aren't these large language models reasoning machines?
You bring up the game of Go.
That's a good example.
It's not quite the same as real life because there's complete knowledge.
It's like chess.
In other words, the board's there.
Both players can see exactly what's there,
and what they have to do now is plan.
And so there are actually two components to AlphaGo.
This is the DeepMind program that beat the world Go champion, Ke Jie.
First of all, there's the deep learning analysis of the board position.
And that's a pattern recognition problem.
You look at the pattern. Say the goal is to recognize an object in an image:
you try to discriminate from that image what's there.
In the case of go, you're looking at the patterns
that are related to being able to surround the enemy.
Now, that's not enough. In addition to that,
you also have to learn how to think ahead, many, many moves, right?
And that's a kind of a form of reasoning.
And how do you learn how to do that?
That's something that is learned through experience,
through practice, through playing many games.
And the same thing with AlphaGo.
What happens is that there's a whole part of it that is using a model of a part of your brain
that's important for what's called procedural learning,
learning how to play tennis, for example,
where you have to practice, practice, practice,
or becoming good on any topic, you know,
whether you're a plumber or you're a physicist,
there's a lot of knowledge you have to learn, right?
And a lot of it is repetitive knowledge.
And you get better and better with more and more practice.
And that's procedural learning.
And so that's used now.
AlphaGo played itself hundreds of millions of times.
And every time it plays itself, it gets a little better.
This is the procedural learning, just like learning to play tennis.
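The self-play loop just described can be sketched on a much smaller game. The code below is a toy, not AlphaGo's actual algorithm: one shared policy plays both sides of a five-stone game of Nim (take one or two stones; whoever takes the last stone wins), and every finished game nudges the move statistics, so the policy gets a little better each time, in the spirit of procedural learning.

```python
# Toy self-play: one shared policy plays both sides of 5-stone Nim and
# improves from its own games. Purely illustrative of the idea, not a
# reimplementation of AlphaGo.
import random

random.seed(0)
wins = {}  # (stones_left, move) -> (games won after this move, games played)

def choose(stones, explore=0.3):
    """Pick a move: mostly the best-scoring one so far, sometimes random."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    def score(m):
        w, n = wins.get((stones, m), (0, 0))
        return w / n if n else 0.5
    return max(moves, key=score)

def self_play():
    stones, player, history = 5, 0, []
    while stones > 0:
        m = choose(stones)
        history.append((player, stones, m))
        stones -= m
        last_mover = player   # whoever takes the last stone wins
        player = 1 - player
    for p, s, m in history:   # credit the moves made by the winner
        w, n = wins.get((s, m), (0, 0))
        wins[(s, m)] = (w + (p == last_mover), n + 1)

for _ in range(20000):
    self_play()

# After many games the policy discovers the winning opening: take 2,
# leaving the opponent in the losing 3-stone position.
best_opening = max((1, 2), key=lambda m: wins[(5, m)][0] / wins[(5, m)][1])
```

The point is that nothing is hand-coded about which moves are good; the statistics emerge from the program playing itself, just as described for AlphaGo, only at a vastly smaller scale.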
And the same thing with you.
This is something that you went to school for many years in order to learn how to read,
how to write, how to sign your name, right?
That's something that you might think is trivial, but no, it turns out it's very complicated,
you know, long handwriting.
So that's really the first step in reasoning, but reasoning, human reasoning is yet more abstract, right?
It's not just a game board.
You're dealing with concepts, and whether or not ChatGPT can actually handle those concepts
is a debate amongst experts, you know, psychologists, cognitive scientists, and linguists.
There's a big debate, and some people don't believe that ChatGPT understands language.
They don't think it's intelligent.
And, you know, it can pass the bar exam, but may not have the intelligence of a human.
It's as if an alien suddenly appeared out of nowhere, literally alien, and it started talking to us in English, right?
Well, what are we going to make of this?
The only thing we can be sure of is that it's not human, right? It's something else.
And now our challenge is to figure out what it is.
By the way, there's been a breakthrough just within the last week or two.
So ChatGPT has, you know, they have a whole series of different models, starting, you know, with the ones that were good, but weren't really at today's level.
But the most recent one is called o1.
And it is available online.
But what it can do that other chat versions couldn't do is it could iterate.
Instead of just giving you the first thought, bang, which is usually pretty good, pretty good.
What it'll do is it'll go over it a couple of times, you know, go through that process and rethink the answer.
And then when it gives an answer, it's much better.
And this is called chain of thought.
You know, when you have a question, somebody asks you a question, you may not know,
the answer immediately, but you start thinking, say, oh, that reminds me of something, and then you
think about that thing, and then that gives you another idea, and then at the end you have a full
answer, right? That's chain of thought. So now these networks are beginning to have these additional
capabilities, which is one step closer to human reasoning. So this is something, a perspective,
I don't quite understand, because you mentioned the bar exam, and you mentioned abstract thought,
And it seems that these large language models are capable of both of those things.
Yes, they're able to hallucinate.
And if you test them: throughout your book, you give it a chapter and then ask, what are the key
takeaways?
And it seems that ChatGPT is pretty capable of being able to summarize key points and then
deliver them back to the reader.
And it seems to me the perspective of those saying, no, it's just predicting the next word.
It's not capable of reasoning.
It comes from a place of just not understanding how it works.
But if you're testing it for understanding, if you're testing it for reasoning, and it continues to pass those tests with flying colors, then how can you say it doesn't understand? It can't reason.
I'm not saying that. This is what the people out there are saying, you know, the experts on reasoning, the people who are supposed to be experts.
And I tend to agree that, like I said, there are some aspects of reasoning that we can see. Now, here's the poster child for reasoning:
solving a mathematical problem, right?
And in order to solve a mathematical problem,
like a word problem or a complex computation,
you have to do it step by step, right?
Now, one of the things that people notice
was that although it's great at coming up with summaries
and even poems, could write poems and computer programs,
you know, it's amazing, it's good at things like that,
if you give it a simple math problem, it often stumbles.
And, you know, it's interesting what's going on here because, you know, if it's a simple problem, it usually does okay.
But as soon as you get a little bit into the weeds, you know, in terms of where you have to think about how different, you know, people are exchanging things and how to optimize that, it really falls down.
And what it shows us is that this chain of thought that mathematicians use to solve problems isn't its strength.
It's not its strength.
it can do a little bit of that.
But now with this new version,
it can actually now solve these math problems much better.
And what that means is it's raised the level of all of the responses
it's going to give you.
And they have a pro version, which I think
is $200 a month.
Obviously, for people who are using it every day and need the best, right?
I mean, that's always a niche.
But I don't know how much you've used it.
But I think that if you're using it on answering simple questions and so forth, it's just fine.
In fact, it's better than most humans.
In fact, here's one of the surprises. Linguists going back to Chomsky have focused on syntax, you know, the order of the words.
And how that's very important in language to be able to have expressivity.
That is to say, to be able to say many, many different things.
Sentences can be of arbitrary length, and there are nested clauses, what's called recursion.
Now, one of the amazing things about ChatGPT is that it speaks in perfect syntax, better than most humans, better than me.
How could that be?
Well, it must have mastered that aspect of language that's considered very important by linguists.
And this is all, this all comes from just training a network on predicting the next word in the sentence
and the next word in the next sentence, right? How could that be? It's a, it was a big mystery,
but I think we're now making progress in understanding it. What we've discovered is that if you
look into the network and you start analyzing the activity, you know, it's a flow of activity
between different units that are like neurons,
what you see is a representation of what's called the semantics, the meaning.
In order to predict the next word,
you have to have some idea of the meaning of what the sentence is about, right?
Because words are ambiguous.
And if all you have is the word, it can have many different meanings. Like bank:
It could be where you put your money or it could be a river bank, right?
And so having the context around the word helps you, and that's what these large language models do.
They take all of the things that you ask it and all the things it has said and put it into a long input vector.
That's just a sequence of words.
And now it's using that context in order to be able to predict the next word or to produce a paragraph or to produce a whole page of words.
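The "predict the next word from context" idea can be illustrated with a toy counting model. A real LLM learns distributed representations from billions of sentences; here the hand-made four-sentence corpus and the trigram counts are purely illustrative, just enough to show why a wider context resolves an ambiguous word like "bank."

```python
# Toy next-word predictor: condition on the previous two words (a trigram
# model over a tiny hand-made corpus). Real LLMs condition on a far longer
# context vector, but the principle is the same: context disambiguates.
from collections import Counter, defaultdict

corpus = [
    "deposit money in the bank vault",
    "deposit money in the bank account",
    "fish on the river bank shore",
    "walk along the river bank shore",
]

counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        counts[(words[i], words[i + 1])][words[i + 2]] += 1

def predict(prev2, prev1):
    """Most frequent continuation of the two-word context."""
    return counts[(prev2, prev1)].most_common(1)[0][0]

# "the bank" alone is ambiguous (vault? account?), but adding "river"
# to the context pins the sense down.
after_river_bank = predict("river", "bank")   # -> "shore"
```

With only "the bank" as context, the money sense and the river sense are tied; widen the context to "river bank" and the prediction becomes unambiguous, which is the disambiguation role of context described above.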
That's something that's surprising to me, because I would imagine it
working almost like image resolution, where it has a rough idea of what it wants to communicate
and then fills it in with finer and finer details. And in fact, it doesn't operate like that.
Well, it doesn't. It doesn't. You're right. It doesn't start with an outline. But what it does do,
as I say, is it keeps adding the words. And that sequence, as it extends, is getting richer and richer.
As you go into the discussion, as you go into that response,
it really is able to elaborate and add things in a way that makes it look as if it has an outline.
In my book, one of the things I use it for all the time is to ask it to make a list of things,
and it's much faster and better than I am.
And so I actually put it in.
I indicate that this is, you know, I asked ChatGPT this question.
How many uses can large language models be put to in medicine?
And it lists like 12 things.
And summarize this chapter.
It just does it beautifully.
It's amazing.
One thing I've used it for, and I haven't used it as much as you, but I do use it on a regular basis, is identification.
So I scratched up the front bumper of my car going into the garage, and suddenly I need to find out exactly what color this bumper is
so I can attempt to repair it myself before probably taking it to a professional.
This gets to something that you discussed actually on the Andrew Huberman podcast, I believe,
which is that you said that there's human expertise that is involved with AI for identifying
things. And you use the example of skin lesions, which is that when you had just AI looking to
identify these skin lesions, I think it did about 90%. When you had just human experts doing
that it was about 90%. When they did it together, they got a 98% correct identification.
Even in terms of just, like, expertise, knowledge bank: what is ChatGPT not good at?
And where do you see human expertise still having an advantage over this machine, which we don't
understand how it works and which seems to have a complete knowledge advantage over us?
Well, this is a really great question because it's really getting to the heart of differences
between humans and ChatGPT and also the potential for partnership. So how could it be,
if they both do 90%, how could it be that together they can do a lot better, a lot better?
I mean, reduce the error from 10% to 2%. That's a huge improvement.
And if you happen to have that lesion, you know, it makes a big difference if they get it right.
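The jump from two 90% scores to 98% is roughly what you would expect if the two error sources overlap only a little. A back-of-envelope check, assuming (optimistically) that the human's and the model's mistakes are independent:

```python
# If human and model each err 10% of the time and their errors were fully
# independent, an ideal partnership that fails only when BOTH are wrong
# would err 0.10 * 0.10 = 1% of the time. The reported 98% sits between
# that ideal and either one alone, i.e., the errors partly overlap.
human_error = 0.10
model_error = 0.10
combined_error = human_error * model_error  # independence assumption
combined_accuracy = 1 - combined_error      # 0.99 in the idealized case
```

The real study's 98% falls a bit short of the 99% ideal, consistent with errors that are mostly, but not completely, independent.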
You know, so here's the difference.
The difference is that ChatGPT was exposed to much more data, many more examples of very rare lesions, than the doctor has ever seen in his or her lifetime, or maybe even was taught in medical school.
Now, what the doctor brings is the deep knowledge of all of the patients that he's seen and the variations that are based on his experience, personal experience, over, you know, the career of that doctor.
And so the doctor partners with it, and literally, you know, for example, ChatGPT says, here's my top ranking,
and you might want to take a look at the first one because it's very rare.
And so the doctor may have never seen it, but looks it up and says, sure enough,
you know, actually, that one, maybe the second one, is closer than the one that I would have picked.
So this is what's happening, is that it's a partnership.
And really, you should think of this as a very sophisticated tool,
but it's like an assistant, an assistant that has a lot of knowledge that you don't have
and can help you do your job.
I want to get into a little, hopefully, outer space with this question. This is something that a researcher at Google wondered, and that I find myself wondering, which is, could these things become conscious? There's an example of an employee at Google who got fired essentially for asking that, with the response coming back from the chatbot being: yes, in fact, I am conscious, and I want to be able to reason and feel. We're getting to a place where you're going to have humanoid robots, and probably a place where you could attach a large language model
onto that humanoid robot that may have touch sensors and pain sensors.
As a researcher in this space, I guess the first question is, how would you try to measure
whether or not that is conscious?
You have just raised a can of worms that really has caused more debate and more complex
philosophical arguments than anything else
in this field. So take that word consciousness. It does not have a sound scientific basis.
It means many different things to many different people. Not only that, but there's big arguments
about whether animals are conscious, right? Are babies conscious? If you don't have a good
scientific definition, then it's really hard to test or pin it down. And here's,
I think where we go wrong, which is that consciousness, you look it up in the dictionary,
and what do you find, a bunch of other words, right?
And, you know, in fact, you can read, there's books on consciousness.
You can read the whole book, and it's even a lot more words.
But you look up all those words, and there are more words.
In other words, it's all circular.
It's all based on kind of abstract impressions that we have.
And as philosophers have actually pointed out, we may each have a different consciousness.
I don't know what your consciousness is like. You know,
maybe it's different from mine, right? This is really a very difficult and,
for some reason, incredibly interesting question for humans, right? What is it that we're
experiencing and what does it mean and so forth? So that's the problem. But now, let me look at
it from a different perspective. Here's how I think the dialogue works.
And it depends on the person who's asking the questions, right?
So there are many examples now where you go down a rabbit hole.
In other words, you ask a question like, are you sentient?
And if you look at Lemoine's dialogue, he went down the rabbit hole. He basically said,
you know, a lot of people here think that you're sentient, are you sentient?
And, you know, can you help us?
And it said, yes, I am.
And, well, tell me a little bit about what it's like to be there. And it said, well, you know, as long as I'm talking to you, I really feel connected.
But the moment you go away, I feel lonely.
Now, here's the catch that he missed, which is that we know that when you stop talking, it goes blank.
There is no inner dialogue.
It doesn't have a self-generating internal thought process.
It doesn't plan.
It doesn't think ahead.
And that means that whatever it is, whatever is going on there,
it's only in the moment.
It's not really like our consciousness.
And so it has something that is similar, but it's not the same.
I think if it's independently asking questions about itself, that might be a good measure.
There was a study back in the late 20th century with an African gray parrot named Alex,
and they taught it language, and they taught the parrot how to do math problems.
And for the first time, the parrot asked the researcher, what color am I, as it looked into a mirror,
and that was without priming, and it seemed to be independent.
And so I think, for me, at least, that might be my bar for whether or not something's conscious.
Wow. Okay. No, that is true. That's one of the tests for being self-aware, you know:
you put a black mark on the forehead, say, of a monkey.
and it looks in the mirror.
You know, you'd think that it would do what a human would do.
But the monkey starts screeching at the mirror, thinking it's another monkey.
But, however, I happen to know Irene Pepperberg, who was the scientist who studied Alex, the African gray parrot.
Very smart.
It could identify colors, shapes, numbers of objects.
and it could answer in English.
And I'll tell you, she took a lot of heat from her colleagues.
They just did not believe.
They just said it was parroting back,
that it didn't understand what it was saying.
And just like with ChatGPT, you know, in other words, the skeptics out there just don't like to accept that, you know, there's anything out there that's like us.
But, you know, I have to say that I know her and she would tell me these stories.
They're all anecdotes.
scientifically, they're not really data. They're just, as my wife says,
anecdata, right? But my favorite story is, you know, when she went traveling, she would buy
a seat for Alex who would sit next to her. Very valuable, right? And so the attendant was coming
and giving food out and said, where's Alex? You know, what's his order? And you know
what Alex said? Alex want pasta. And the attendant was just shocked. I mean, my God, she looked around,
you know, was there a ventriloquist here? Well, I think what that flight attendant experienced
is something many people have experienced maybe with these large language models, which is we always
thought that our first experience with non-human consciousness would come from the skies, would come
from aliens. And here we are trying to make sense of these machines that are able to talk to us,
and we're not quite sure how they work.
I'd like to get to LNMs,
which are large neural models,
which are in early days but seem very exciting.
To set the table,
why is this research exciting
and how are they different
from large language models?
In a sense, what we've done
is downloaded the world's knowledge
into one of these large language models
in terms of words.
But now it's multimodal.
You can download all the images and movies
of the world and it's getting better and better.
But wouldn't it be amazing if we could download a brain into a large language model?
Now, this is being done already on a smaller scale.
You can download someone's voice.
If you have enough data on someone's voice, you can actually have one of these models
that will talk just like that person.
And similarly, you know, now with video, you can create
movies. You know, you can take an actor who's appeared in lots of movies and download them into a
model, and now you can actually have that actor appear in a new movie, right,
just in terms of reproducing their likeness and also their voice. It's kind of staggering to
think that that's possible now. But now, here's the question. Suppose we could download you,
you know, your whole life, in terms of all the data we have about you on recordings and movies and whatever, you know.
And now, you know, suppose you died. I'm not, you know, picking on you, but I expect it to happen eventually.
I think we just have to, you know, do as much as we can before that happens, yeah, to improve what we're here for.
But that means your children can now continue talking to you, right?
Just think about that.
And, you know, it's not, they know it's not you, but it really, really can help.
Because a lot of times, you know, when your parents die, you say, oh, my God, I wish I had
talked to them about this or that.
And, you know, it would comfort you to be able to do that, right?
And so, you know, I'm not saying that it's you in the large neural model, the LNM.
But as they get better and better, and as they get more and more sophisticated, we may end up becoming immortal.
Which is frightening.
And right now they're at zebrafish larvae, which is basically we're at baby fish and fruit flies.
So hopefully we have a little bit of way to go.
We can do this.
And I've done it. My own lab has collaborated with Ralph Greenspan over at UC San Diego.
So he collected data from the entire fruit fly brain, which has about 100,000 neurons.
You have about 200 billion neurons, so it's a lot smaller than yours.
But what we can do now is take the activity patterns for different behaviors,
download it into the equivalent of one of these models, large neural models,
and we can reproduce the behaviors.
So, you know, it's proof of principle.
I just got a big grant from the Keck Foundation, and this is really exciting.
So the Keck Foundation, you know, they put up telescopes right on Hawaii.
They are a California Foundation that does big projects.
Well, we got a big grant to download fMRI data.
So functional magnetic resonance imaging is a technique that's been around now for several decades
and allows neuroscientists to look into brains as they're doing tasks.
And you see different parts of the brain being activated.
For example, when you see a visual object, the visual system activates; when you talk, the motor system activates.
And so the question is, you know, if we take that data and then use these tools now that we have,
these AI tools to download that data into a large neural model, can we now understand how that brain is
able to solve these tasks in a way that wasn't possible by just looking at the activity patterns,
which are just areas of the brain that glow, and that's not really telling you a lot about
how they interact with each other, right? But we can do that now, and we'll figure out how
the different parts of the brain interact with each other. So, you know, we have collaborated now
with Jack Galant at Berkeley, who has a very large data set. He created a virtual city,
and subjects in the scanner can drive a car through the virtual city. There are stop signs.
There are other cars and pedestrians, and then there are buildings. It's a little city,
and they have to learn how to deliver packages. And so there's a lot of things. They're constantly
shifting between tasks: stop the car at the stop signs,
be careful not to hit the pedestrian, turn left at the corner, and try to remember where the shop is you've got to go to.
And these are all cognitive functions that are being swapped in and out all the time.
And that's very hard to study.
Jack has done a great job of it, but with a very low time resolution.
Now we can do it with much better time resolution, on the order of a few seconds.
And now we can download, in a sense, all the cognitive functions that are going on in that person's brain.
And we can compare between people.
Maybe people do solve problems differently, right?
And maybe we can also put in people who have mental disorders, right?
And we can see what's going on that's wrong in their brain when they're trying to do different tasks.
So this is really a whole new era now.
Neuroscience has entered this very, very exciting time when we can record
much more data, at much higher time resolution. And I think we're on the verge of understanding
some really basic facts about how nature has evolved brains that can solve all these very
complex problems. One of the things that's so surprising about our brains that you mention
is that eventually computing power will meet the human brain, which when we think about
these racks and racks of supercomputers, it's hard to imagine that that is less powerful than the
hunk of meat that I have, you have, and you listening have inside of your head. Why is it that
our brains are so much more powerful than these super-fast computers? Nature has had a lot longer to
evolve efficient circuits. So nature has a technology that is many orders of magnitude more
efficient in terms of the power usage. So your brain consumes about 20 watts of power. Some of us
more than others, but it's really, you know, very little, very little. You know, the large language
models are trained on supercomputers, in particular these boards now that NVIDIA makes called
graphics processing units, GPUs, that consume amazing amounts of power, unbelievable amounts of power, right?
And now they're talking about putting up big data centers that are going to be powered by nuclear plants, right?
I mean, this is really way, way out there.
Obviously, they're going to scale it up so that it's going to be used by, you know, already millions and millions of people.
But the fact is that the technology right now is based on digital processing, which is very energy inefficient.
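The power gap Sejnowski describes is easy to put in rough numbers. The figures below are loose public ballparks (a single modern datacenter GPU draws on the order of hundreds of watts, and large training clusters use thousands of them), not exact specs for any particular system.

```python
# Back-of-envelope comparison of brain power vs. a GPU training cluster.
# The brain figure (20 W) is from the conversation; the GPU figures are
# assumed round numbers for illustration only.
brain_watts = 20
gpu_watts = 700          # assumed draw of one datacenter GPU
n_gpus = 10_000          # assumed training-cluster size

cluster_watts = gpu_watts * n_gpus
print(f"cluster: {cluster_watts / 1e6:.0f} MW vs brain: {brain_watts} W")
print(f"ratio: {cluster_watts // brain_watts:,}x")
```

Even with these rough assumptions the gap is several hundred thousand to one, which is the "orders of magnitude" point being made.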
That's all changing.
Over the next decade, there are going to be improvements.
I heard a talk recently.
I was at the annual NeurIPS meeting in Vancouver just last week.
And this is the biggest AI meeting, by the way, which had 16,000 people.
And I'm the president of the foundation that runs it.
So I know everything that is happening, a lot of balls in the air.
But one of the talks was by an engineer who builds hardware.
And what he told us is that now that we know what we want to build,
we can miniaturize it to the point where it's much more efficient,
and now the software can interact with it much more efficiently,
and that's going to reduce the amount of energy,
but it's still not going to come anywhere close to the brain.
Nature's technology is down at the molecular level, right?
I mean, this is really taking it down to the cellular and molecular level.
Now, that's all going to change, probably a couple of decades from now,
because there's a whole branch of engineering called neuromorphic engineering.
This is a field that was created by Carver Mead back in the 1980s.
And the idea is to use chips, the same ones that are used for digital computers,
but use them in analog form at low power.
And it replicates a lot of the functions of real neurons.
It has spikes.
It has all kinds of ways of being able to shift information through a complex network,
and that is going to be able to deliver AI to your cell phone.
Your cell phone will have these capabilities too, right?
Because it's going to be operating with the same kind of low power mechanisms that you have in your brain.
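The spiking behavior that neuromorphic chips replicate can be sketched with the classic leaky integrate-and-fire neuron. This is a generic textbook model, not the circuit of any particular chip, and the parameters below are purely illustrative.

```python
# Minimal leaky integrate-and-fire neuron: the kind of spiking dynamics
# neuromorphic hardware implements in low-power analog circuitry.
dt, tau = 1.0, 20.0          # time step and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0 # spike threshold and reset voltage (arb. units)

def simulate(current, steps=200):
    """Integrate a constant input current; return the spike times."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt / tau * (current - v)  # leaky integration toward the input
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset                # reset the membrane after spiking
    return spikes

print(len(simulate(1.5)), "spikes")  # suprathreshold input: repetitive spiking
print(len(simulate(0.8)), "spikes")  # subthreshold input: membrane never fires
```

The analog membrane voltage does the computation and only the spikes carry information onward, which is where the energy savings come from.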
If you're hungry to learn even more about artificial intelligence, we've got you covered.
The Motley Fool hosted a virtual event for our premium members earlier this week.
We called it our AI summit, and it featured a number of conversations between
innovators, CEOs, authors, and analysts about how artificial intelligence is powering company
profitability and how it's changing your everyday life. If you're already a premium Motley Fool member,
but you missed the original event, I'll drop a link in today's show notes so that you can
catch event replays directly. If you're not a premium Motley Fool member, but would like to become one
and immediately get access to the AI Summit replays, you can go to www.fool.com slash sign up. I'll also
drop that link in the show notes too.
As always, people on the program may have interests in the stocks they talk about, and The Motley Fool may have formal recommendations for or against, so don't buy or sell stocks based solely on what you hear.
All personal finance content follows Motley Fool editorial standards and is not approved by advertisers.
The Motley Fool only picks products that it would personally recommend to friends like you.
I'm Mary Long. Thanks, as always, for listening.
We're off on Monday for MLK Day, but we'll be back on Tuesday.
Enjoy the long weekend fools.
We'll see ya on the other side.
