Motley Fool Money - The State of the AI Arms Race
Episode Date: August 31, 2024
When ChatGPT launched in late 2022, it was the first – and only – exposure most of the world had to AI. Not yet two years later, there's already a lot more competition. Jeremy Kahn is the AI editor at Fortune Magazine and the author of the new book, "Mastering AI: A Survival Guide to Our Superpowered Future." Alex Friedman caught up with Kahn to talk about the current AI landscape. They also discuss: Bill Gates' initial hesitancy to invest in OpenAI. Where LLMs go from here. Developments in biotech.
Host: Alex Friedman
Guest: Jeremy Kahn
Producer: Mary Long
Engineer: Dez Jones
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
This episode is brought to you by Indeed.
Stop waiting around for the perfect candidate.
Instead, use Indeed sponsored jobs to find the right people with the right skills fast.
It's a simple way to make sure your listing is the first one candidates see.
According to Indeed data, sponsored jobs have four times more applicants than non-sponsored jobs.
So go build your dream team today with Indeed.
Get a $75 sponsor job credit at Indeed.com slash podcast.
Terms and conditions apply.
You know, what is it the human does best and what is it
the machine can do best? And, you know, let's let each be sort of preeminent in its own realm
and pair the two together. I think if we think about it more like that, then we are able to kind of
master AI and we will be able to kind of reap the rewards of the technology while minimizing
a lot of the downside risks. I'm Mary Long and that's Jeremy Kahn. He's the AI editor at Fortune
Magazine and the author of the new book, Mastering AI, a Survival Guide to Our Superpowered Future. My colleague,
Alex Friedman caught up with Kahn earlier this week to discuss the current state of the AI
arms race and to take a look to the future. They also talk about what convinced Bill Gates
to move forward with Microsoft's initial OpenAI investment, how LLMs are being used to shorten
clinical trials, and the changing relationship between man and machine. So you are the Fortune
magazine AI writer and editor, and you were a tech reporter before this. At what point did you first hear the
term artificial intelligence, and when did you really start taking it seriously?
I guess I first heard the term probably sometime in 2015.
Even before I had become a tech reporter at Bloomberg, I was doing some finance coverage
and working for a magazine Bloomberg had, when I was doing a story about London's little
tech hub, kind of an emerging tech hub.
And at the time, people said the most successful exit, but in some ways the most disappointing
exit from the London tech scene was this company called DeepMind, which I knew very little about,
but it had just been acquired a couple years before by Google for $650 million, which was the
best exit that the London Tech Hub had at the time. But people were upset because they thought
that this could actually potentially be a huge future company, and they thought maybe it sold
out too early. I didn't know anything about DeepMind, but I started to look into it, and that's when
I first sort of heard about artificial intelligence. And then a few months after
writing that story, I got a chance to move over to the tech reporting team at Bloomberg,
and then I actually started covering AI at that point. And that was basically the beginning of 2016.
So you've now been covering AI for years. So I'm curious, after ChatGPT was released,
were you surprised by the reaction and the adoption of the technology? Or was this something
you've been waiting for for a long time?
Well, yeah, I think all of us who'd been kind of following this for a while were wondering when this kind of thing would break through into the general public consciousness.
But I was surprised that it was ChatGPT that was the thing that did it.
And I was surprised by the reaction to ChatGPT.
I think in retrospect, I probably shouldn't have been.
But yeah, because I'd been following it for so long, and it seemed like, you know, the technology was making fairly constant progress.
But OpenAI, which I'd been following as well for years, had previously,
and this is months prior to ChatGPT being released,
created a model called GPT-3 Instruct,
which was a version of their GPT-3 large language model,
which itself had been out even earlier than that.
But it was one that was much easier to control.
And one of the things you could do with the Instruct model
was sort of have it function as a chatbot, have it engage in dialogue. But Open AI had not
sort of released this as a kind of consumer-facing product. Instead, they'd made it available to
developers in this little thing they had called an AI playground, this kind of sandbox they had
that developers could use their technology in. And they let some reporters play around with it,
and I had played around with it a little bit and thought, that was kind of interesting, but, you know,
I didn't really think it was going to be a huge thing. And then when ChatGPT initially came
out, it kind of looked like the same thing. I thought, oh, this is just an updated version of
this GPT-3 Instruct model. But actually, I think the simplicity of the interface and the fact that
they made it available freely for anyone to play around with, you know, just made the thing go viral.
And it was the first time people realized that they could actually interact with this AI model
and that you could do almost anything with it. And I think the fact that it was designed to be in
this dialogue through this very simple interface that looked
like a Google search bar, you know, made all the difference. When the GPT-3 Instruct model was out,
it was actually much harder to use. It had all these dials that you could control the output,
which were great things for developers, but actually made it much more confusing for the average
person to use. You tell a great story in Mastering AI about Bill Gates's skepticism about Microsoft's
huge investment in OpenAI. Why was he so skeptical, and how did Satya Nadella get Gates to change
his mind? Yeah, so Gates had been a big skeptic of these large language models. He thought they were
never going to work, that they were not the path forward to super powerful AI.
They seemed too fragile. They didn't get things right. He had played around with some earlier
versions of OpenAI's technology. OpenAI created a system called GPT-2, which was the first system
that could kind of write a bit like a person. But if you asked it to write
more than a few sentences, it kind of went off in strange directions and stopped making sense.
And he played around with GPT-3 and thought it was slightly better, but it still had some
of the same problems. In particular, Gates thought the real test of a system
would be if it could solve hard questions from the AP Advanced Placement Biology test.
And he had played around with GPT-3 on this, and it had failed on those AP Biology test questions.
And as a result, he just really didn't think it was going to go anywhere.
But Satya Nadella, you know, he knew this, and he let the OpenAI guys know that this was the
case, that Gates was skeptical and that Gates in particular had this interest in AP Biology. And then
one of the things that OpenAI had done when it created this even more powerful model called GPT-4,
which is now out and is the most powerful model currently out, but before it was released,
one of the things that OpenAI did is it had gone to Khan Academy, which is, you know,
this online tutoring organization that is a nonprofit.
And they had asked if they could partner with Khan Academy.
And it turned out one of the reasons they wanted to do this is that Khan Academy had
really good data on AP biology test questions, like how, you know, it had lots of examples
of those questions and lots of examples walking you through how to solve those questions
successfully and answer successfully.
And they made sure that GPD4 was trained on those questions and answers from Khan Academy.
And as a result, GPD4 was able to totally ace the AP biology questions.
And so when they brought that system back in to try it out with Bill Gates and he tried
his AP Biology questions on GPT-4, you know, it completely aced them.
And Gates was blown away.
And that's what really convinced Gates that large language models maybe were a
path towards super powerful artificial intelligence.
You know, since then, Gates has rowed back from that a little bit.
He said he thinks, you know, that this is a big step in that direction, but probably won't
take us all the way to systems that can really reason as well as humans can across a whole
range of tasks.
But it definitely impressed him and kind of convinced him to allow Satya Nadella to continue
to invest in OpenAI.
How do you think Microsoft's $1 billion initial investment in OpenAI impacted the development
of generative AI and, I guess, the overall AI business landscape?
Yeah, I mean, it was hugely important
because it allowed OpenAI to go ahead and train first GPT-3
and then later GPT-4.
And it was really those models that helped kind of create
the landscape of generative AI systems
that have come out from competitors and from researchers.
Without that investment, it's not clear what would have happened.
There were other people working on large
language models, but the progress was much slower. There was no one that had devoted as much
emphasis to them as OpenAI. And I think without that billion-dollar investment from Microsoft,
it would have been difficult for that to happen as quickly as it did. We're recording this
interview at the end of August 2024. I'd love to hear your current analysis of the big tech AI
arms race that's been taking place over the last decade and kind of where you think it's headed.
Yeah, I mean, it's fascinating. There's definitely a race.
and it's not over yet.
And it's unclear, you know, who's going to win.
But it does seem like the competitors are familiar ones
in that they're mostly these really big tech companies
that have been around for the last two decades
and kind of dominated the Internet and mobile era.
So it's, you know, for the most part, it's Microsoft,
it's Google, it's Meta.
And those three in particular.
And then maybe kind of trying to catch up
is Apple and Amazon.
And those companies really are the ones that are at the forefront of this.
And then you have this one new entrant, which is OpenAI,
but even OpenAI is very closely partnered with Microsoft.
So that's basically kind of the constellation you have.
And you have all these companies that are racing towards ever more powerful AI models,
basically around the same kind of architecture,
which is based on something called a neural network,
which is again a kind of software loosely based on how the human brain works.
And within neural networks,
they are all using something called Transformers,
which was a system that Google actually invented in 2017
and had started to implement kind of behind the scenes in Google search.
It helped basically clarify what users' intent was when they were searching for things
because it could understand natural language much better.
But Google did not scale up the systems as much as OpenAI did, at least initially, and did not try to create systems that could generate content and write the way OpenAI did.
But of course, once ChatGPT came out, Google very quickly was under all this pressure to catch up.
And I think at this point, they've shown that they can catch up and have caught up.
And Gemini, which is Google's most powerful model, is very close to, if not completely competitive with, OpenAI's GPT-4.
On some metrics, it may even be ahead.
Then, you know, there's some other kind of players in this race.
There's a company called Anthropic, which is smaller, that was founded by people who broke
away from OpenAI.
That's kind of closely aligned with Amazon at this point, and it's very much part of Amazon's
efforts to try to catch up in this race.
They have a model called Claude that's very competitive and powerful.
Meta has jumped into this with both feet, and it's taking the approach
that it wants these models to be open source
and it wants everyone kind of building on its technology.
And it thought the best way to do that was to kind of offer the models for free.
It doesn't have a big cloud computing business that it's trying to support
by offering proprietary models.
Instead, it thinks it's going to benefit the most by open sourcing these models.
And it's created a model called Llama, you know, that's very powerful and equally competitive.
And it's just interesting to see where this is going to go.
models keep getting larger. They're multi-modal now, meaning that they can take in audio and video
and output audio and video and still images as well. They can reason about, you know, what they're
seeing in imagery and in videos. They can engage in very natural conversation over a mobile phone
or, you know, through audio. So the models are very interesting. But it's not clear that they're
going to overcome some of these fundamental limitations.
Like you may have heard about something called hallucinations where models, you know,
make up information that seems plausible but is not accurate.
It turns out as the models have gotten more powerful, they haven't necessarily been
hallucinating that much less.
And some people think that's a fundamental problem that we're going to need some other
technique to solve before we actually get to this kind of holy grail of the AI field called
artificial general intelligence.
Again, that's kind of AI that could think and reason like a person across almost any
cognitive task. It's not clear how close we are to that, but we're clearly, you know, a lot closer
than we were before ChatGPT came out in late 2022. In your book, you talk about how Apple was
slower than Microsoft or Google in rolling out AI. And since you sent Mastering AI to print,
Apple has released their version of AI, creatively called Apple Intelligence. And that's been in large
part driven by a partnership between Apple and OpenAI. So I'm curious, what do you think about
Apple's rollout of their own AI platform?
Yeah. So Apple was behind, and I think they needed to catch up.
And I think Apple's instinct is to just always try to do everything in-house.
And they had been trying for years to work on advanced AI models of their own.
They were not as successful in part because I don't think they ever devoted quite the
computing resources to it.
And then, I think, they also had a problem with hiring some of the best talent,
even though Apple has a very good reputation. In particular, among the AI
researchers they really needed to get ahead in this game, they were not seen
as at the cutting edge, and then it became a kind of self-reinforcing problem. So they ultimately
decided to partner with OpenAI, which I think in some ways was an admission that they were behind.
That has allowed them to kind of get back in the game, though. I think they have so many devices
out there. They have a huge distribution channel, and distribution channels do matter.
and that's an advantage that they know they have
and they're trying to leverage it.
We'll see what happens.
I think there's a chance that people will want to use
whatever Apple's offering just because they like Apple products
and they already are kind of embedded in the Apple ecosystem.
So it's a pain, as everyone knows,
to switch your phone or switch to a different operating system for your laptop.
So I think most people don't want to do that.
And if they can have a product that's pretty good or very close to sort of top of market without having to switch devices, that's what they're going to go for.
And Apple's been smart by partnering with OpenAI, which does have the leading models in the market.
Apple's also taking an approach that is very much in keeping with their own strategic position around user privacy and data privacy, which is that they're going to try to keep as much as possible of any data that you're feeding to an AI chatbot or AI system
on your device, and not have it transmitted over your Wi-Fi or
over your phone network to the cloud, because that introduces all kinds of security concerns
and data privacy concerns.
So they've said they're only going to hand off the hardest queries
to OpenAI's technology.
And ultimately, they may try to have something that runs completely on device.
The way AI is developing, the most powerful models tend to be very large and have to be run
in a data center, so you have to use them over the cloud. But people are very quickly figuring out,
within six months or so, how to shrink those models down considerably. And in some cases,
be able to mimic some of the capabilities of the largest models with models that are small
enough to fit on your phone. And I think Apple's kind of betting that that trend's going to continue
and that for what most users are going to want to use a digital assistant for, what they can put
on the phone is going to be sufficient. What do you think about the partnership between Apple and OpenAI,
and what this means for the space,
especially considering the large stake
that Microsoft has in OpenAI?
Yeah, I mean, I don't know how stable a partnership it is.
I can't imagine Microsoft's thrilled about it,
given its rivalry with Apple.
But, you know, it's a funny world in Silicon Valley.
There's a lot of frenemy relationships.
There's already quite a lot of tension
in the Microsoft-OpenAI relationship,
because OpenAI sells services directly to some of the same corporate customers
that Microsoft is also trying to sell to.
And Microsoft wants those people to use
OpenAI services, but on its own Azure cloud; it doesn't want them necessarily buying those
services directly from OpenAI. So you already had that tension. And then the Apple relationship
just sort of adds to that tension. But it's also not clear how long-lasting that Apple-OpenAI
relationship will be. I don't think Apple necessarily wants to be in a position where it's
dependent on OpenAI for what is going to be maybe the most important piece of software that's
on your device. And while Apple's primarily a device company, it's always known that
software helps sell those devices and helps cement people to those devices. And I think if that glue
or that cement is being provided by a third party, that's going to be problematic for Apple strategically
in the longer run. And so Apple is still trying very hard to develop its own models that will be,
you know, competitive in the marketplace. It just hasn't managed to do so yet. And that's why I think
it had to partner with OpenAI. But how long
lasting that partnership will be, we'll see.
Most people know OpenAI and ChatGPT.
What comes next after ChatGPT?
Where are we headed?
Well, I think the next thing we're going to see in the very near term is what they call
AI agents.
So this will be, you know, probably be an interface that looks a lot like ChatGPT,
but instead of just producing content for you, you can prompt the system to go out and
take action for you.
And it can take action for you using other software or
sort of across the internet. And it will become sort of the main interface, I think, for most people
with kind of the digital world. Right now, you can ask ChatGPT to, like, you know, suggest an
itinerary for a vacation, but you still have to go and book the vacation yourself. What these
new systems will do is suggest the itinerary, and then you can say, okay,
that sounds great. Go out and make all those bookings, and it will go out and do that for you.
It may go out and research things for you and then take actions that you want it to
take. It might go out and negotiate on your behalf. There are already some systems out there that
are doing insurance negotiations on behalf of doctors to get pre-approvals for patients. And I think
that's kind of an example of where this is all heading. And then within corporations, you're going to
have these systems that will perform lots of tasks for you across different software, tasks that right now
have to be performed manually by people, often cutting and pasting things between different
pieces of software and, you know, doing something with a thing you create. And that's all going to be
streamlined by essentially these new AI agents. Other than these agents, what are some of the other
trends in AI that get you the most excited? Yeah, so agents are interesting. I think in order to have
agents that are really going to be effective, we're going to have to have AI that is more reliable
and has, you know, better reasoning abilities. And there's certainly some hints that that is coming.
You hear sort of tantalizing rumors and stories that suggest that we're getting closer to agents that really will be able to reason much better than today's large language models have been able to.
We'll see where that goes.
I mean, there are some AI researchers who really doubt that this will be possible with the current types of architectures and algorithms we have, and who think we're going to really need new algorithms to achieve
that kind of reasoning ability. We'll see. But I think that's really interesting. I'm very excited
about what AI in general is going to do for certain big fields of human endeavor. One is
sort of science and medicine. I'm very excited about AI being used to discover
new drugs, essentially, to treat conditions. I think we're going to make tremendous progress
in curing diseases and treating diseases through AI in the next couple of years.
There are already systems today that are a bit like large language models, which you can prompt in natural language to give you the recipe for a protein that will do a particular thing:
it will bind to a particular site, it will have a certain toxicity profile.
And that's going to tremendously speed up drug discovery.
And then across the sciences, you see people using AI to make new discoveries.
I think there's potential to discover new chemical compounds, which may have big implications for sustainability and our fight against climate change.
I think, you know, we're going to see big breakthroughs in science. And then in medicine,
more generally, I think also coupling AI with more wearable devices will give us lots more
opportunities for personalized medicine. So that's one of the areas I'm most excited about. And the other
one I'm really excited about is actually the use of AI in education, where I think, despite the panic
among a lot of teachers when ChatGPT came out that everyone was just going to use it to cheat,
I think really if we go a few years ahead and look back, we're going to see this tremendous
transformation of education where now every student has a kind of personal tutor that can walk
them through how to solve problems and if designed the right way, not give away the answer,
but kind of use a Socratic method to lead the student to the answer and to really teach the
student.
You mentioned biotech companies really being able to do some cutting edge research to develop new
treatments. Are there any biotech companies in your mind right now that are leading the way?
Yeah, I mean, some of the ones I really like are small, kind of private ones.
I talked about a company called Profluent in the book.
There's another company called LabGenius.
That's very good.
But those are kind of smaller companies.
I think if you look at the bigger ones that are publicly traded, you know, BioNTech, which is famous for its work on the COVID vaccine, has also invested very heavily in these AI models and has done some really amazing stuff.
I've been very impressed.
I heard one of their lead scientists give a talk at a conference just a couple of months ago,
and it's very impressive what they're doing using kind of these same kind of large language model-based systems to discover new drugs.
So I definitely, you know, I think they're one to watch.
But the whole industry is kind of moving in this direction.
So, you know, Recursion Pharmaceuticals is one that's out there, and they're doing lots of interesting stuff.
They're also publicly traded.
So I would just, you know, watch the whole space in general.
What is it in particular that those companies are doing that you find so interesting in terms of how they use AI?
Well, I think it's just that they are using these large language model-based approaches to discover new compounds and to accelerate all the pre-clinical work needed to bring a drug to the clinical trial stage.
You can't really shorten the clinical trial stage that much.
There are places in clinical trials where AI can help as well.
It can help, you know, select the best sites for clinical trials,
and it can help potentially run the clinical trials slightly more efficiently.
But you can't really shortcut the clinical trial process
because it's absolutely necessary for human safety
and to make sure things work.
But there's a lot that happens before a compound
can even make it to clinical trial.
And most of that can be kind of accelerated or shortcut
through the use of these new generative AI models.
So I think looking at companies that have really invested heavily
in those approaches is interesting.
And the pharmaceutical industry has actually been very slow to adopt AI.
If you look at big pharma companies, they've been very slow.
A lot of their data is very siloed.
They've been very wedded to traditional drug discovery techniques, which are kind of more
human and intuition-led.
So they're now kind of, I think, playing catch-up mostly through partnerships with these
smaller venture-backed private companies.
Switching gears, what aspects of AI keep you up at night?
So, I mean, there's lots of risks that I'm worried about.
And they're probably not the ones that get the most attention.
When I go on podcasts like this, I almost always get asked about sort of mass unemployment,
which is a risk I'm not really worried about.
I think we are not going to see mass unemployment from AI.
There's going to be disruption.
I think some people may lose their jobs, but I think on a net basis,
we will see, as we have seen with every other technology,
that there will be more jobs created in the long term than are lost.
The other one I get asked about a lot is, of course, the existential risk of AI
that's going to somehow become sentient and kill us all.
And I think that's a very remote possibility, not within the capacity of systems that we're going to see in the next five years.
And I think we're starting to take some sensible steps to kind of take that risk off the table.
And at least I hope we take those steps.
So those are not the ones I'm most worried about.
I really worry about our overuse of this technology in our daily lives and how that may strip us of some of our most important human cognitive abilities,
and that would include critical thinking.
I think it's just too easy, when you get a kind of very pat capsule answer from a chatbot
or generative AI search engine, you know, where it gives you a whole summarized answer,
to just accept that answer as the truth and not think too hard about the source of the information,
even more so than a Google search, where you still have links and you still have this idea
that the information has some kind of provenance.
And you have to think a little bit about where is this information coming from.
When you get these capsule summary answers from an AI chatbot,
I think the tendency is not to think too hard about it.
And I worry about us losing some critical thinking skills there.
I worry about the loss of writing ability,
because I think one of the dangerous things about generative AI is it sort of creates a world
where it's easy to imagine that writing is somehow separable from thinking.
And I don't think the two are separable at all.
I think it's through writing that we actually refine our thinking and refine our arguments.
And if there's a world where people don't write anymore, they just
jot off some bullet points to give to the chatbot and have it write, you know, the document
for us, then I think our arguments will get weaker and, you know, we're going to lose a lot of our writing and thinking ability.
I also worry about people using AI chatbots as kind of social companions.
There's already a significant subpopulation of people who do this and become very reliant on kind of AI companion bots.
And I worry about that because I think, again, it's not a real relationship with a real person,
although these chatbots are pretty good at simulating a real conversation.
They actually have no real wants or desires or needs.
And they're trained generally to be very pleasing to people and not to challenge us too much.
And I think that's very unlike a relationship with a real person, who does have needs and desires
and isn't always pleasant and sometimes is in a bad mood and certainly isn't always trying to please us.
And I think some people are going to say, oh, well, why should I bother with real human relationships?
They're so much messier and more complicated and harder than a relationship with the chatbot.
And the chatbot gives me everything I need in terms of being able to offload my feelings to it.
And, you know, it gives me affirmation.
And that's what I want.
And I worry that we're going to have a generation of people who increasingly do not seek out human contact.
And I think we're going to have to guard against that danger.
And I think we should actually have time limits on how long you can use
an AI system as a companion chatbot, particularly for children and teenagers.
So I worry about those risks.
I worry about the consolidation of power to some extent in the hands of just very few companies.
I do think that's a concern.
In general, I think there's a tendency with this technology to create kind of winner-take-all
economics.
And for the most part, that means the biggest firms and biggest companies out there
right now, the ones who have the most data, which they can use to refine AI systems
and create systems that are therefore more capable than others, will accrue more and more
power.
And I think we need to be worried about that a bit.
Those are some of the risks I worry about most.
One last question.
Given all of those challenges, what does it mean to truly master AI?
Yeah.
So I think mastering AI is all about kind of putting the human at the center of this and thinking
very hard about what we want humans to do in our organizations and in our society:
what processes should really be reserved exclusively for humans because they require human empathy.
I mean, I talk a lot in the book about how one of the challenges here with AI is that we will
put AI into places where it really doesn't belong, because the decisions are so dependent on human
empathy, like in a judicial system, where you want to be able to appeal to a human judge.
You do not want the judge simply blindly following some algorithm.
And I worry that, you know, increasingly we're going to be in a world where we put AI systems
in places where they're acting as judges and arbiters on human things where empathy is required.
And these systems don't have any empathy.
I also worry that we're going to look at these systems as a direct substitute for humans in lots
of places within businesses and companies, when actually we get the most from them when we use
them as complements to human labor.
So when they're assistants, and when we ask, you know, what is it the human does best and what is it the machine can do best?
And, you know, let's let each be sort of preeminent in its own realm and pair the two together.
I think if we think about it more like that, then we are able to kind of master AI and we will be able to kind of reap the rewards of the technology while minimizing a lot of the downside risks.
As always, people on the program may have interests in the stocks they talk about,
and The Motley Fool may have formal recommendations for or against,
so don't buy or sell stocks based solely on what you hear.
I'm Mary Long.
Thanks for listening.
We'll see you tomorrow.
