Moonshots with Peter Diamandis - Top Minds in AI Explain What’s Coming After GPT-4o | EP #130
Episode Date: November 12, 2024. In this episode, Peter is joined by leaders of the "Beyond GPT Models — What Is the Decade Ahead?" panel at the 8th FII Summit to discuss how AI will impact industries beyond large language models. Panelists: Dr. Kai-Fu Lee, Chairman & CEO, Sinovation Ventures, and CEO, 01.AI; Richard Socher, CEO & Founder, You.com, and Co-Founder & Managing Director, AIX Ventures; Prem Akkaraju, CEO, Stability AI. Recorded on Oct 30th, 2024. Views are my own thoughts; not Financial, Medical, or Legal Advice. Learn more about the Future Investment Initiative Institute (FII): https://fii-institute.org/
Transcript
Welcome to another conversation on AI. I don't think we get enough of this conversation going.
I do want to thank Richard Attias and the FII team for really increasing the conversation this
year on AI because I think there is no greater topic of import on the financial side, on the
leadership side, education side, medical side.
It's transforming everything.
We have three incredible CEOs here who are representing a variety of different parts
of the AI emergence.
I'm going to start by asking each of them to take just one minute, introduce themselves
and what they're doing.
And then we're going to jump into where is this going?
How fast is it going?
How big is it going to get?
You know, we'll ask the question, what is after ChatGPT?
Prem, just begin with yourself.
Awesome.
Thank you.
Thank you, Peter.
Um, I'm Prem Akkaraju.
I'm the CEO of Stability AI.
We build some of the leading open-source image, video and 3D models in the world.
And past GPT, a little picture is worth a thousand words, and we're making quite a few of them. In
fact, 80% of all the images that were generated by AI last year, in 2023, were driven by our model,
Stable Diffusion.
Amazing.
Richard.
Hi, everyone.
Really excited to be here.
My name is Richard Socher.
I'm the CEO and founder of You.com, Y-O-U dot com.
It's a productivity engine, which is the next generation
after a search and an answer engine.
So we really make people more productive
across a whole host of different kinds of organizations,
from hedge funds to universities to companies, insurance companies and so on.
Publishers, news agencies and almost everyone else in between who has sales, service, marketing,
research, analysis and so on.
I also run a venture fund called AIX Ventures that invests in early stage pre-seed seed
AI companies and startups.
I've been very fortunate that when I was a professor
at Stanford, I had two students who created
this cute company called Hugging Face.
And we invested at less than a five million dollar valuation;
it's worth one and a half billion now.
So, fund's doing good.
That's bragging.
That's just bragging.
I wish I could brag like that.
Dr. Kai-Fu Lee.
Hi, I've been working on AI for about 43 years.
I was 20 at the time, in college, when I started in AI.
And I think that may have started before my colleagues
were born.
But I actually worked on machine learning.
And I have a PhD from Carnegie Mellon.
And I have worked at Apple, Microsoft, Google.
Some of you may know me from my books, AI Superpowers and AI 2041.
My part-time job is I run Sinovation Ventures, which invests globally.
And then my full-time job is I run 01.AI.
It's a generative AI company.
We build a large language model.
We're currently ranked as the third company
with the highest performance, only next to the best models
from OpenAI and Google.
And you can find it online.
We're also building consumer and enterprise products.
We're based in China, but our products
are accessible globally.
And also, we extensively
do open source as well.
So incredible, and first of all, Kai-Fu is a legend and one of the greatest leaders globally
in this field, so very honored to have him on here.
Prem, I want to start with you.
You very famously were able to recruit James Cameron onto your
board and since Stability is creating video and is creating sort of the future
of Hollywood, I am curious about two things. One, did Jim get it right with The Terminator?
And secondly, there's been a lot of conversation
about the disruption of Hollywood
that we're gonna have AIs creating the future
of all movies, all content and so forth.
So you said beyond GPT models,
images worth a thousand words, talk to us about what this future is.
What is going to happen in the visualization world of TV and Hollywood?
Love it. So, did Jim get it right with Terminator? Let's hope not, I guess.
But what a great movie it was. And I love when he actually jokes about it.
He says, I told you guys, like, you know, this is coming.
And now it absolutely is here.
And why would someone like him get involved in Stability?
Yeah, great question.
So I had the great fortune of working on Avatar 2 with him
when I was the CEO of Weta Digital
before I joined as CEO of Stability.
And that movie took over four years to make.
And that's because it was fully rendered.
And I think if you fast forward to five to 10 years from now,
the vast majority of film and television and visual media
as we know it today is not going to be rendered.
It's going to be generated.
And in fact, in Avatar, there were certain shots
that took 6,000, 7,000 hours of compute time to render one single frame.
Thousands of hours that literally can be reduced down to minutes now.
So I think Jim just wants a whole lot of life back.
And when you think about like the creative process, we all watch films,
we watch movies, we love them from the time we're born to our last memory. It's
a commodity we never get sick of. We never not want to watch it.
And so there's this insatiable appetite out there in the world to consume
stories and to create stories. And I think that we should just accelerate
that. The problem with the film production process is time and
money, so what he really wanted to do is rip those things out so we can move from
a rendered to a generated model.
Are we gonna see a situation where we're ever
gonna have AIs generating entire movies because it knows my preferences, what I
love, and it's like the perfect movie for me?
You know, personally, I kind of hope not. I think the creative process needs to start with a human. And that human needs to direct these tools
and separate agents to actually make that story. And so I'm hoping that you'll probably want to
hear stories that other people
want to tell you.
Alright, well, let's take a different direction then.
Sure.
Am I going to see Marilyn Monroe and, you know, all stars of the past coming back? Is
there a need for human actors if you can generate absolutely lifelike actors and actresses perfectly?
I mean, I can't see a situation where they're still around.
Yeah, I think that when you're talking about
the film production process,
it's actually easier and faster to just shoot plates on an actor,
just shoot real photography and get their performance.
I think that's the visible layer of production.
People gravitate toward it a lot.
I think that AI will enhance those performances.
I think the physicality of a director with a camera
and an actor in front of it is a very important part
of the creative process.
And I don't think that that's gonna go away too soon.
And in fact, I think about the things
that aren't gonna change just as much
as I think is gonna change.
But I do think after they take one take,
the director's gonna say, I got it.
Because they're gonna be able to do what you're talking about, which is manipulate
that performance.
May I ask one more question to you before I move on?
What is the most dramatic change we're going to see in film and TV 10 years from now as
we see digital super intelligence?
What's the craziest vision of what we're going to see in entertainment?
I think we're going to see on the magnitude of five to 10
to 20X more content being created.
I think we're gonna see a variation of runtimes,
where it might be a two-minute piece.
Like you said, you may wanna have 20 minutes
before you go to bed when you wanna see a movie.
You'll have different types of time signatures.
And I think that you're gonna have an explosion
of content creation, an explosion of number of artists
in the world.
I'm going to come back in 10 years and see if you're right about that.
OK.
Richard, a lot of your work was instrumental in the early days of bringing neural nets to natural language processing.
So what do you see as the next frontier beyond NLP?
So just explain if you would what NLP is and where is it going next?
Yeah, natural language processing, NLP used to be a sub-area of AI and it has, I think,
influenced pretty much every other area of AI and there are lots of different algorithms
you could train. In 2010, I had this crazy idea to train a single neural network for all of NLP.
In 2018, we finally really built the first model that invented prompt engineering where
you can just ask one model all the different questions you have.
And over time, of course, you can ask questions not just over text but also over images.
And so I think, next, one of the answers to the panel's main topic of what's after ChatGPT
is that we have many more multimodal models.
You'll be able to have conversations over images.
You have seamless inputs and outputs in not just
the modality of text, but also programming,
which is a huge unlock.
Visual, videos, images, voice, sound.
But one really interesting modality
that not many people have quite realized yet is that
of proteins.
Proteins are essentially the basic Lego blocks of all of biology.
Everything in our body is governed by proteins.
And you can create a protein just like you can ask a large language model to write a
sonnet for you or a poem for your wife.
You can ask an LLM to create a specific kind of protein that will only bind to SARS-CoV-2
or only bind to a specific type of cancer in your brain.
And what that means is that will unlock a lot of different aspects in medicine.
So I'm extremely excited about the future of LLMs going into different modalities.
And we're seeing that with DeepMind's products like AlphaProteo and such.
So we had a conversation backstage, but I didn't hear the answer.
And the question is basically, is there an upper limit to intelligence?
And we've talked about, and we just did a conclave on digital superintelligence
and how fast we're going to get there and what does it mean.
As we think about AI becoming more and more intelligent, yes, I was speaking to Elon,
he said, okay, 2029, 2030, equal in intelligence to the entire human race.
Is it just, you know, a million times more, and then a billion times more, and then a
trillion times more? Is there an upper limit to intelligence?
Yeah, so really interesting question. So just to talk about AlphaFold at Google for a second,
as you mentioned it, like that was really interesting to understand how proteins fold
because that will help you understand how they are likely to function and interact in
your body. What we did in 2020 is actually create the first LLM
to generate a completely new kind of protein.
And it was 40% different from any naturally occurring protein.
And it actually, we synthesized that in the wet lab,
this was at Salesforce Research.
What did it do?
There were two scientists there.
And it was a lysozyme type of protein
that basically has antibacterial properties.
And just to put that into perspective.
2020 was really close to COVID-19, so make sure you weren't...
Got to be careful what you say online sometimes.
But what was interesting is that multiple startups have now started from this line of
research and I think it's hard for people to fathom how much that can change medicine.
In terms of upper bounds of intelligence, it's a really interesting question. Can it just keep going and going and going? I think you
have to basically look at the different dimensions of intelligence. There's language intelligence,
visual perception intelligence, reasoning, knowledge extraction, and a few others. Physical
manipulation. And I'll show you just one example.
So I don't want to talk about this for hours.
But visual intelligence.
For a long time, people have looked
at just the electromagnetic frequency
spectrum of human vision.
And there, classifying every object on the planet
is actually not that hard.
And the upper limit is classifying
all the objects on the planet.
And we're probably going to reach that, and we're not too far away from it.
But that's just human vision.
AI could eventually see all the way down to gamma frequencies and see and try to perceive
atoms, right?
And there you actually start to hit limits of physics, like quantum limits of what can
actually be observable, and you can go all the way into seeing
massively larger scale things at the universe level
and how many different sensors do you have
and you can process all of that information.
And AI could have billions of sensors that go out
and then you get into really interesting limits,
like the speed of light, the light cone of what's observable.
So I could talk about this for hours. It's a really tough subject, but in some cases we are astronomically far away from those upper bounds
and in some cases we already get pretty close.
Fascinating. You talk about work productivity as You.com's objective.
What does that mean? And I guess the question is the same.
Is there any limitation on work productivity
that we're going to see, given the fact
that I can command AI agents and robots to just do
anything and everything and just self-improve along the way?
It seems like we're going to hit sort of an infinite GDP
at some point.
Yeah, there are some areas of AI where
AI can actually
get into a self-training loop if there's
a simulation of something.
And anything that can be simulated,
AI can solve everything in that area.
For instance, chess, the game of Go,
you can perfectly simulate it.
Hence, the AI can train and play with itself
billions and billions of times, create
almost infinite amounts of training data, and hence solve every problem in that domain.
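The self-play loop Richard describes can be sketched concretely. Tic-tac-toe stands in here as a toy perfectly-simulatable game; the game choice, the random-move policy, and the 1,000-game batch size are all illustrative assumptions, not anything from the panel:

```python
import random

# Minimal self-play data generation for tic-tac-toe, a perfectly
# simulatable domain: the environment is cheap to run, so an agent
# can generate as many labeled games as it likes by playing itself.

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game(rng):
    """Play one random-vs-random game; return (move history, outcome)."""
    board, history, player = ["."] * 9, [], "X"
    while winner(board) is None and "." in board:
        move = rng.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = player
        history.append((player, move))
        player = "O" if player == "X" else "X"
    return history, winner(board) or "draw"

# Generate a batch of self-play games. Each (player, move) pair, paired
# with the final outcome, could serve as training data for a policy or
# value model — the loop can run billions of times at no labeling cost.
rng = random.Random(0)
games = [self_play_game(rng) for _ in range(1000)]
outcomes = [g[1] for g in games]
print(outcomes.count("X"), outcomes.count("O"), outcomes.count("draw"))
```

A real system would replace the random policy with the model being trained, which is what makes the loop self-improving.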
Another domain that we can perfectly simulate is programming.
Programming languages can be run, and then you can simulate the outputs, obviously
in the computer, and then the AI can get better and better and eventually get
superhuman in terms of programming.
But where AI can't simulate things infinitely many times is in customer service.
You can't have billions and billions of customers ask about all the different things that can
go wrong with the product that you're sending.
And so in those kinds of areas, the limits are going to be on data collection.
Can you actually fully digitize a process?
I often joke plumbers are probably the safest
from AI disruption because no one's even collecting data
on how to do plumbing, right?
And you like crawl somewhere, get different pipes,
no one's got a GoPro and 3D sensors and robotic arms
and so on collecting data for that.
So that will take much, much longer.
I think in terms of work productivity,
a lot of us are going to become managers.
A lot of current employees that are individual contributors are going to
have to learn to manage an AI to do the kinds of work that they do.
It turns out managing is also a skill.
Not everyone is a good manager from day one.
You have to really explain to the AI how you do a certain job.
What we've seen with, for instance,
a really large cybersecurity company called Mimecast
is they've had 200 seat licenses using our product
and then we did a workshop with them
and actually explained to all the different groups,
like this is what you can do.
And some of the marketing can say,
well, I usually get this long product description
and then I have to describe it
for these different industries in the email campaign
and I have to write three tweets and three LinkedIn messages, all this stuff.
And we're like, well, just say that to this agent and then the AI agent does it for them.
They're like, wow, now it's like six to 20 hours of work every other week just got automated
by describing this workflow that I used to do manually to an AI agent.
And I think that will change pretty much all work and pretty much every industry.
Thank you, Richard.
Kai-Fu, I can go in a thousand different directions here.
First of all, your venture fund, Sinovation, which is how many billions of capital AUM?
We manage about $3 billion.
About $3 billion.
And you've been one of the most prolific AI investors.
I've had the pleasure to visit you multiple times in China and thank you for your amazing hospitality. You've now become an entrepreneur
And you're running both a company in China and a company in the United States
Why did you do that?
Well, because this time is for real. Imagine, this was my dream.
This was practice before?
Well, this was my dream when I went to college,
when AI was nothing.
No one knew what it was,
but I felt this was the thing I needed to do.
Then we went through multiple winters of AI,
where there was disillusionment and I had to do other things.
About seven, eight years ago,
we saw with deep learning, it became clear.
It would create a lot of value.
But at the time, I didn't really see it becoming AGI.
So I was an investor; we actually created 12 AI unicorns
at Sinovation Ventures.
But this time with generative AI,
the speed at which it's growing is just
phenomenal.
So you couldn't help yourself.
Yeah, I felt if I just invested, I'd be missing out.
I would be in the back seat.
I want to be in the driver's seat.
By the way, everybody, I hope you feel the same, right?
I'm very clear about saying there are two kinds of companies at the end of this decade,
companies that are fully utilizing AI and everyone else is out of business.
And I fundamentally believe that is true.
You've written a number of books; AI Superpowers I commend to all of you.
So since that was published, what are the biggest changes in the global AI race?
And there is an AI arms race going on.
Well, it is and isn't, because the companies in China
are largely competing against each other
for the China market, and they're generally not.
I don't mean nation to nation,
but it is between companies around the world.
Yeah, so you mean Chinese companies.
What are their characteristics?
So in my book, AI Superpowers, I described
the American companies are generally speaking
more breakthrough innovative.
They come up with new things and then the Chinese companies are better at engineering,
execution, attention to detail, doing the grunt work.
User interfaces.
User interfaces, building apps.
So in the case of mobile or deep learning, we saw that Americans
invented pretty much everything, but China created a lot of value, arguably more, given
technologies that were largely invented in the U.S. So now we're in this generative AI era,
again, invented by Americans, and we're in a unique position where the technology is disrupting
itself very quickly in the US and elsewhere.
So it arguably is still the age of discovery, and the US ought to win.
But then the Chinese companies are able to watch the innovations, make some themselves,
and then do better engineering and deliver solutions.
So the company I'm building, 01.AI, is doing exactly that.
We don't claim to have invented everything or even most things.
We learned a lot from the giants in Silicon Valley, OpenAI,
and others.
But we think we build more solidly, faster, execute
better.
So an example: I talked about how 01.AI now is the third best modeling company
in the world, ranking number six in models as measured by LMSYS at UC Berkeley. But the
most amazing thing, I think, the thing that shocks my friends in Silicon Valley, is not
just our performance, but that we trained the model with only three million
dollars, while GPT-4 was trained with 80 to 100 million, and GPT-5 is rumored to be
trained with about a billion dollars. We believe in scaling
laws, but when you do excellent, detailed engineering, it is not the case that you have
to spend a billion dollars to train a great model.
This is really important for the audience here, because there are a lot
of parts of the world that don't have access to, you know, hundred-thousand-
H100 clusters, right? And the question is, oh my god, can I really
build a business or a product in pick-your-country with a small number of GPUs?
And I think the constraint on GPUs forced you to innovate.
Can you speak to that?
I think it's really important.
We talked about that on our last podcast together.
Yeah, I think as a company in China,
first we have limited access to GPUs
due to the US regulations.
And secondly, the Chinese companies are not valued
what the American companies are.
I mean, we're valued at a fraction of the equivalent
American company.
So when we have less money and difficulty to get GPUs,
I truly believe that necessity is the mother of innovation.
So when we only have 2,000 GPUs, well,
the team has to figure out how to use them.
I, as the CEO, have to figure out how to prioritize them.
And then not only do we have to make training fast,
we have to make inference fast.
So our inference is designed by figuring out
the bottlenecks in the entire process:
by trying to turn a computational problem
into a memory problem, by building a multi-layer cache, by building a specific
inference engine, and so on. But the bottom line is our inference cost is
ten cents per million tokens, and that's 1/30th of what a typical comparable
model charges.
And where's it going? Where's the 10 cents going? Yeah.
Well, the 10 cents would lead to building apps
for much lower cost.
So if you wanted to build a You.com or Perplexity
or some other app, you can either pay OpenAI $4.40
per million tokens, or if you have our model,
it costs you just 10 cents.
And if you buy our API, it just costs you 14 cents.
We're very transparent with our pricing.
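The price gap Kai-Fu quotes is easy to put in concrete terms. A rough sketch, where the per-million-token prices are the figures as stated on stage (real vendor pricing varies) and the 500M-token monthly workload is a made-up illustration:

```python
def cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Cost in USD to process `tokens` tokens at a $/1M-token rate."""
    return tokens / 1_000_000 * price_per_million_usd

# Hypothetical app workload, purely for illustration.
monthly_tokens = 500_000_000

# Prices per million tokens, as quoted in the conversation.
print(f"At $4.40/1M tokens: ${cost_usd(monthly_tokens, 4.40):,.2f}/month")
print(f"At $0.14/1M tokens: ${cost_usd(monthly_tokens, 0.14):,.2f}/month")
print(f"At $0.10/1M tokens: ${cost_usd(monthly_tokens, 0.10):,.2f}/month")
```

At this scale, the same workload costs thousands of dollars a month at the higher rate versus tens of dollars at the lower one, which is the point of Richard's Jevons-paradox remark: cheap inference invites far larger workloads.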
Yes, Richard.
There's a really interesting paradox called Jevons Paradox
from the previous Industrial Revolution.
A lot of smart people back then were working on making
more efficient steam engines that use less coal.
They thought, oh, if we make the steam engines more efficient,
we're going to need less coal.
But instead, we needed more steam engines everywhere. And I think
that's exactly what's going to happen. We're currently in the Jevons paradox of intelligence.
We're just going to use intelligence in many more places. Everyone is going to have their
own assistant, their own medical team that understands everything about them instead
of being restricted by intelligence being very, very expensive.
Yeah, I totally agree. I want to clarify. I'm not saying there's a fixed workload. We're
making it cheaper. Right, right. I'm saying we're enabling a workload much, much larger.
Correct. I want to ask one closing question to all of you. We have people here who have
daughters and sons or nephews or brothers and sisters. What's your advice to someone who is
20 years old listening to this, someone at the beginning of their
academic and professional career, given what you know is going on in AI right now? Prem?
I think it's don't waste your time learning how to code. Because I think the new language is gonna be English.
And I think, absolutely, learn as fast
as you humanly possibly can on all AI,
in all AI modalities.
And then once you find your passion,
I think you're gonna find a very narrow AI
to empower you to do what you're really set out to do.
Thank you, Prem. Richard?
I'll disagree. I think you should still learn how to program. I think that is how you get to really
understand how this technology works at the foundational level and how it becomes less
magic and more something that you can yourself modify and construct with. But you need to combine
computer science and programming
with another passion that you can actually apply all of that intelligence to.
And ideally, the younger you are, the more you learn the foundations, math, physics,
the sciences.
I think-
I'm going to cut you off because I'm being yanked.
I want to have Kai-Fu's final word here.
Okay.
I actually, in this case, agree with both of you.
I think people should follow their hearts, right?
If you dream of becoming a fantastic programmer
and you can do it, you should do what Richard says.
If you think programming is the way
to make the most money, no,
then you should follow what Prem says.
Ladies and gentlemen, please give it up
to these three amazing CEOs.
Thank you.
Thank you.
Thank you Peter.
Thank you.