a16z Podcast - Bringing AI to the Masses with Adam D’Angelo
Episode Date: March 20, 2024

Generative AI has initiated a transformative shift, reshaping our world in unprecedented ways. In a16z's AI Revolution series, we engage some of the most impactful builders in the field of AI, discussing and debating where we are, where we're going, and the big open questions in AI. In this episode, General Partner David George chats with Adam D'Angelo, the CEO and founder of Quora. The two wade into this fast-moving AI landscape, and specifically touch on how building infrastructure for creators can democratize AI. Adam, who is now building AI aggregator Poe and is on the board of OpenAI, has long been paying attention to this AI wave. He recounts this evolving fascination, and together Adam and David explore the dynamic synergy between humans and AI, highlighting the critical role of experimentation for founders in the AI realm. As a reminder, this conversation comes from our AI Revolution series, which you can dive into more deeply at a16z.com/ai.

Resources:
Watch the full interview: www.a16z.com/AIRevolution
Find Adam on Twitter: https://twitter.com/adamdangelo
Find David George on Twitter: https://twitter.com/DavidGeorge83

Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
Continuing on the current paradigm, if you just play this forward,
there's so much further that it can go.
We're building a network for people and AI to all share knowledge together,
and sometimes the people will be getting knowledge from the AI,
and sometimes the AI will need to get knowledge from humans.
There's just such a huge space of needs people have,
and such a huge space of different inputs you can combine,
try to address those needs.
Humans are always going to play some role.
There's knowledge that people have in their heads
that is not on the internet and is not in any book.
And so no LLM is going to have that knowledge.
Hello, everyone, this is Steph.
If you didn't recognize the voice before mine,
that was Adam D'Angelo.
He's the co-founder of Quora,
now building AI aggregator Poe,
and so much more,
including being on the board of OpenAI.
We do have a very timely episode for you today,
so I won't keep you waiting any longer.
Here's a16z Growth Fund General Partner, Sarah Wang,
with a proper introduction of what's on deck.
You'll also hear her reference our AI Revolution series,
which you can dive into more deeply at A16Z.com slash AI.
As a reminder, the content here is for informational purposes only,
should not be taken as legal, business, tax, or investment
advice or be used to evaluate any investment or security, and is not directed at any
investors or potential investors in any a16z fund. Please note that a16z and its affiliates
may also maintain investments in the companies discussed in this podcast. For more details,
including a link to our investments, please see a16z.com slash disclosures.
Generative AI has kicked off a paradigm shift that is already transforming our world.
In our AI Revolution series, we talk to the people who are actually building the technology
to understand where we are, where we're going, and the big open questions in the field.
Our guest this episode is Adam D'Angelo.
Adam has built and grown companies that have connected billions of people across the globe.
He was the CTO of Facebook until 2008 before founding Quora in 2009.
After being interested in AI for decades, Adam joined the board of OpenAI in 2018.
And now, he's using his experience.
scaling some of the biggest consumer companies in the world to build Poe, a platform that brings
AI to the masses. In this conversation, Adam speaks with a16z general partner and my colleague,
David George, about building AI infrastructure for creators, the multi-model, multimodal future of AI,
how AI will shape knowledge sharing on the internet, and much more. There's a lot to dig into here,
so let's start at the beginning of Adam's journey in AI. He's going to take us back to his college
years in 2005.
I was very excited about AI early on in my career.
I remember trying to build some AI products in college, actually.
And it was just very difficult.
The technology just wasn't there.
It wasn't at the point where you're going to be able to make something that was
ready for consumers.
And meanwhile, I just watched social networking start to boom.
And you can actually look at a lot of social networking technology as almost an alternative
to AI.
So instead of trying to get the computer to do everything,
you could just connect people with other people over the Internet
who could do those things in the same way that globalization can be a substitute for automation.
Social networking, you could think of it as letting people access everyone else in the world.
For entertainment, for fun, for communication, for whatever you want to do,
I think it was an incredibly powerful technology.
And given that AI wasn't quite there yet,
that was the main place to apply all the technology. So I first got
interested in social networking and then through my experience at Quora, we started out with a
product that was entirely human driven. So people would come and ask questions and they would
put topics on them and other people would sign up to answer questions and they would tell us about
what they knew about by tagging themselves with these topics and we would try to route the
questions to the people who knew about the particular topic. And it was all manual. But we knew
that at some point we're going to get to the point where software would be able to generate
answers. We ran some experiments using GPT-3 to generate answers and compare them to the answers
that humans had written on Quora. And a lot of the time, GPT-3 could not write as good an
answer as the best human answer that had been written. But it could write an answer
instantly to any question.
And the constraint for Quora had always been
the amount of time
that high-quality answer writers had
to answer questions.
And so the thing that was really new
about LLMs was the ability
to, at extremely low-cost,
generate an answer instantly to any question.
Through that experience,
we realized that a chat kind of experience
where you can write a question
and get an answer instantly from AI
was more likely to be
the best paradigm for interacting with AI
as opposed to the kind of publication paradigm that Quora had.
Yeah, of course.
And so based on all that, we landed on building Poe as a new chat-oriented AI product.
I think many people will be familiar with Poe.
But explain just for us how does the product work,
how do you find it in the first place, how do you interact with it?
In the same way that Quora aggregates knowledge from many different people
who have knowledge and want to share their knowledge,
we want Poe to be a way for people to access AI from
many different companies and many different people who are building on top of AI.
And so you can come to Poe and use it to talk to a very wide variety of models that are available today.
And then we have all these other products that people have built on top of these models.
And we've got an open API where anyone can hook in.
So anyone who's training their own model, so like any of these research teams, anyone who's doing fine-tuning,
they can take their model and put it on Poe.
And what we allow is for them to reach a big audience quickly.
So we thought about, as a company, Quora, what is the role that we're going to play in this new world with AI, and what are the strengths that we have?
And what have we learned over the past 10 years building and operating Quora?
And there's actually a lot of this kind of like consumer internet know-how that's important in getting a product to mass market.
So this is things like building applications across iOS and Android
and Windows and Mac, localization of the interface,
AB testing, subscriptions,
all these other kinds of small optimizations
that you need to make a good consumer product.
We want Poe to be a way for anyone who's creating AI,
whether it's one of the big labs or an independent researcher.
We want it to be a way for them to get that model
and make it available to mainstream users all around the world.
There's a lot that you just said that I would love to go deeper on.
So one of the things that you said, you sort of listed off all the models that you make available.
There's one theory which is one model, one company is going to provide everybody the solution that they need for everything.
There's another theory which is there's going to be tons of different models for different use cases.
The world's going to be multi-model and multimodal.
The theory behind Poe is that the future is going to be multi-model and multimodal.
Why do you think that's the case?
I think nobody knows how the future is going to unfold, but
we think that there's going to be a lot of diversity
in the kind of products that people build on top of these models
and in the models themselves.
I think there are a lot of trade-offs involved in making one of these models.
You have to decide what data are you going to train on it,
what kind of fine-tuning are you going to do,
what kind of instructions is the model going to expect you to give as a user?
What kind of expectations do you want to set with your users
about what to use the model for?
And I think in the same way that the early Internet had this huge explosion of different applications, I think we're going to see the same thing from AI.
So early on in the Internet, the web browser came along and made it so that anyone who was making an Internet product, they didn't need to build a special client and get distribution to people around the world.
They could just build a website, and this one web browser could visit any website.
Sure. And in the same way, we want Poe to be a single interface that people can use to talk to any model.
We're betting on diversity just because there are so many talented people around the world who are going to be capable of tuning these models.
You can tune the open-source models today. There are also products from OpenAI and Anthropic.
And I think Google's close to having something where you'll be able to fine tune all these models.
And everyone has their own data sets. Everyone has their own special technology that they can
add to the models. And I think through the combination of all of this, we're going to see a
very wide diversity of things you can do with AI. There's two things that I'd like to maybe go deeper on
there. So one is the idea of what constitutes the product itself. What is it today? And then what is it
going to have to become? And then secondly, the idea of the long tail, right? Bet on the long tail,
incentivize them, give them a platform, abstract away a bunch of the infrastructure that they don't
know how to build and harness really what they're great at, right? So on the first:
what is the product? Today, the AI model, many people will probably say, is largely the product.
What are the advances that you anticipate seeing that are going to change the way people interact with these,
enable new kinds of products being built? One way of thinking about that is, are the model providers themselves going to be the ones that build all the products?
If you're a large model creator and you have tens of employees that you can allocate to building a consumer product and you have the culture to do that, you can go direct to consumer
and you can build a good product.
I think most of the people
who are training these models
are not in that position.
If you want to take your model
and bring it to consumers all around the world,
you've got to think about
you need an iOS app,
you need an Android app,
you need desktop apps,
you need a web interface,
you need to do billing
in all these different countries,
you've got to think about taxes,
and there's just a lot of work.
Say you raise some venture funding.
You could either spend some of that funding
on hiring out
a whole team and developing all those competencies,
or you can spend that on making your model even better.
And I think different startups will choose different paths here,
but I think for a lot of them,
the right path is going to be to just set up an API
or plug into the Poe API and use that to get to a lot of consumers very, very quickly.
Yeah, talk about the role that the sort of long tail of creators then plays,
and like how do you want to engage with them,
and what's the incentive for them to want to build on top of Poe
as opposed to other places?
Yeah, so we have a revenue sharing program
that allows people to get paid
as a result of people using their bots on Poe.
It costs a huge amount of money
to provide inference for these models.
And so almost no other platforms
provide this kind of revenue share today.
So if you have a model that requires a lot of GPUs
to do inference on,
then this is really your best place to come
and you can have a real business.
You can cover your inference costs
and make more.
And we think a ton of innovation is going to come from these companies.
There's other companies that are building things on top of some of the big models,
so say from OpenAI, and in that case, they have to pay the OpenAI inference cost,
which is another sort of need for money.
And so the Poe revenue share model works in the same way, where it'll let you cover the costs
that you're then paying on to any other inference provider.
Yeah, absolutely.
What are some of the really fun and interesting things that creators have already built on top of Poe?
A lot of people right now are excited about image models.
We have Stable Diffusion, SDXL, and then we let users go and do some prompting to customize it to provide art of a particular style.
So there's these anime-style SDXL bots on Poe. Those are popular.
There's this company called Playground. They're making a product for people to edit images, but in the process, they've created a pretty powerful model, and they have that model available on Poe.
and that's gotten pretty popular recently.
Yeah, it's so cool that you can have a long tail of these creators
make their own sort of opinionated style of these base models.
But I think there's something to that where you provide this sort of infrastructure and support
and then let the users or creators do what they do best.
Yeah, and it's super early days right now,
but I think what we're going to see over just the next year or two
is going to be incredible.
This will go from being sort of useful
to some people right now to being something that's just critical to many different tasks that
anyone is going to try to accomplish. Yeah, there's a really good analogous company that you and I
both know very well, which is Roblox, right? Early days, creators were on there building games,
and they were pretty basic in the early days, and it was a lot of kids learning how to build games,
and then it sort of graduated eventually to people who were able to earn a living. So I think
the ideal for you would be you build enough scale that they can build large enough audiences to
actually be sort of professionals at what they're doing.
Yeah, and we're spending millions of dollars already on inference.
It's mostly going to the large model providers right now,
but we want to let as much of that as possible go off to these independent creators.
Cool.
I want to shift topics and get maybe a little bit more conceptual about AI.
You were CTO at Facebook at the time where social was emerging,
and then right when the platform shift to mobile was taking place, right?
So I'd love your thoughts on what the similarities are
to the shift to mobile in this AI wave,
and what are some of the big differences?
Yeah, you know, I think it's very hard to say.
I think with Quora, we were a little bit slow to adopt mobile.
Mobile was one of the things on our list of many priorities,
and it needed to be the number one priority,
and we needed to make tougher trade-offs to prioritize it.
We needed to do things like hire a set of different people
who were going to focus on it
and really have a period where we released no new features
and we were just simplifying things
because the mobile UI called for a different experience.
When you have such a critical change
in the platform structure,
you need to rethink so much
that it's only going to happen
if you have this very strong kind of top-down leadership.
And so you've done it differently this time around.
Yeah, yeah, yeah.
So, yeah, talk about some of the organizational changes
and what you've done to actually refocus yourselves
on the big thing that's right in front of us here.
Yeah, so I think the first thing is identifying this trend
and then starting off doing some experimentation early on just to learn.
And that didn't require any kind of strong, decisive leadership
as much as it just required paying attention to the market.
But then from that experimentation, that got us enough conviction
that in our case we said, hey, too much of the Quora product
has been built up around this publication model
that is sort of fundamentally premised on the idea
that expert time is going to be scarce.
and the AI, the LLM time, is not scarce in the same way.
And so we need to rethink that. This was in, I think, August of 2022.
We got to this conclusion that chat is the right paradigm for this,
and we need a new product.
We didn't want to just try to retrofit everything into Quora,
as we thought we were going to move too slowly.
So we had a small team start working on Poe based on that.
Talk about the relationship between Quora and Poe,
and how do you actually envision
that changing in the future. And then maybe there's even an extrapolation of, okay,
Quora and Poe, and like human experts and AI experts answering questions. Do they do it in the
same place? Is it a different way of interacting? Yeah, yeah. We'd love to have all of this
as integrated as possible. I think if you think about maybe the relationship between Facebook
and Facebook Messenger, these are two products built by the same company, but they share a lot.
I think that Poe and Quora might evolve to a similar kind of relationship.
We'd love to get more of the human aspects of Quora into Poe.
We'd also love to get the whole Quora data set into the Poe bots.
And we're also working, we've launched some of this already,
to get some of the Poe AI to generate answers that are available on Quora.
As these models continue to scale up,
the quality is going to go higher and higher to the point where it actually will be as good as human
quality in a lot of cases.
And so the Quora paradigm actually, I think,
becomes more appropriate for AI as the cost of inference
gets higher.
Gets lower, yeah, and model quality gets better.
Yeah, yeah.
So we'll see what the exact right relationship is,
but we think of this as we're building a network
for people and AI to all share knowledge together.
And sometimes the people will be getting knowledge
from the AI, and sometimes the AI will need to get knowledge
from humans.
And we'd love to be as much of a conduit
for that as possible.
Yeah, and Quora or Poe, depending on how they interact,
is a place you get answers.
And sometimes your answer is going to come from an expert
and sometimes it's going to come from AI.
Yeah, right?
What do you think about just the internet?
Like, you extrapolate that out.
Are people going to be engaging with this collection of bots
that have different personalities and different expertise?
And will those be interspersed with real humans?
Will, like, real humans be interspersing in the AI?
What do you think actually happens?
Personally, I think that humans are always going to play some role.
There's knowledge that people have in their heads that is not on the Internet and is not in any book.
And so no LLM is going to have that knowledge.
Yeah, like Andrej Karpathy called the LLMs a lossy compression algorithm of the Internet.
Yeah, and it's just a compression of the Internet.
There's experts that know a lot of stuff that's not in there, right?
Right, right.
So I think there's a lot of potential in the kind of interplay between humans.
and the LLMs going forward.
LLMs have this problem with hallucinations right now.
And I think the rate is going to go down as the models get better,
but it's never going to get to the point where it's 100% perfect.
And so I think there will be a lot of value placed on knowing
the source of your information, which human said it
or which publication originally printed it.
And I expect that that is going to lead to some kind of product
or some kind of user experience
where the LLM is helping you sort through your sources
and quoting exact experts or exact sources
as opposed to just synthesizing it all
and giving you something where you can't exactly trust
where it came from.
Yeah.
And is that a new technology that gets built outside of the models themselves
or do you think that that's incorporated inside of the model?
I could see it going either way.
I mean, if you just look at a model,
the raw model doesn't have access to these
other databases where it can get
exact quotes. And so
it'll have to be some
augmentation of the model, but how
tightly integrated into the model
it'll be, I think we don't know yet.
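The augmentation being described here is usually some form of retrieval: pair the model with a search step over an external corpus, so every answer can carry a pointer back to where it came from. A toy sketch of the idea follows — the keyword-overlap scoring and the corpus entries are purely illustrative assumptions; a real system would use embeddings and a proper index:

```python
# Toy illustration of pairing a model with a source database so answers
# can cite exactly where they came from. The scoring is naive keyword
# overlap, chosen only to keep the sketch self-contained.

def score(query: str, text: str) -> int:
    """Count shared lowercase words between query and passage."""
    return len(set(query.lower().split()) & set(text.lower().split()))


def retrieve_with_source(query: str, corpus: dict[str, str]) -> tuple[str, str]:
    """Return (best-matching passage, its source id)."""
    best_source = max(corpus, key=lambda src: score(query, corpus[src]))
    return corpus[best_source], best_source


# Hypothetical corpus: source id -> passage.
corpus = {
    "quora.com/some-answer": "LLMs can hallucinate facts that are not real.",
    "example.com/gardening": "Tomatoes grow best in full sun.",
}

passage, source = retrieve_with_source("why do LLMs hallucinate", corpus)
print(f"{passage}  [source: {source}]")
```

An LLM sitting on top of this would synthesize from the retrieved passage while quoting the source id, rather than answering purely from its compressed training data.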
Yeah, I agree. I think that's going to be critical.
We've started out with these use cases
of companionship and creativity,
and hallucinations are a feature of that, right?
That makes it more fun and exciting.
But when you get into business use cases
or more utility-type stuff,
it's obviously needed. What are the other big advances
that you're excited about just broadly in the AI space for language models?
I'm personally the most excited just about scale.
Just continuing on the current paradigm,
if you just play this forward, there's so much further that it can go.
And you think the scaling laws will hold, or are holding?
So far they have held.
My prediction would be there are some issues that need to be overcome.
But there's just this incredible industry,
so many talented people right now
who are trying to make this technology advance,
and there's so much money behind it,
the force that's there to help overcome any road bumps that we hit
is so massive.
So I expect that it's just going to continue.
I think there will be road bumps,
and there are issues that need to be worked around,
and there will be breakthroughs that probably need incredible creativity.
But we have many of the smartest people in the world,
the most determined people in the world,
the most talented people in the world,
all focused on this problem
and I think
we're going to just continue to see
the kind of exponential progress
that we've had so far.
I think it'll go on for many years.
We talked about the last shift, right?
Like the mobile shift that you lived through
and some of the lessons that you had from it.
What do you think ultimate market structure
looks like in the gen AI space?
In order to train these frontier models,
you need billions of dollars of capital
and you need many years
of investment in infrastructure.
There's a very small set of people who can do that.
And so that's leading to this world
where there's only a small number of players
that can be on the frontier.
And so right now it's OpenAI, Google,
maybe Anthropic, maybe Meta can be there.
For those who can get there,
I think it's going to be a good business.
You'll be able to make a lot of money.
You can have good profit margins.
You'll have to work very hard to stay on the frontier
to keep up, but it's not a commodity.
I think when you go six months behind the frontier, definitely one year, it's brutal.
There's just way too many people that are able to get the capital and the resources to train those models.
And so it's going to be either fully open source or there will be too many different competitors for anyone to make a good business at that point on the pure technology.
I do think there will be very good businesses at that level where you're not using a frontier
model but are combining some other kind of unique thing with the models. So it might be that you're
providing some tool that the model can use or you have some unique data that you're using
for fine tuning or there might be some unique product you build around the model. And then that
ends up being the source of competitive strength. So I think there's going to be this kind of choice
where you're either competing on scale by being on the frontier or you're competing on some
kind of like feature differentiation, and in that case, you don't need a frontier model.
And in some cases, you'll have both.
So, you know, you might be able to use the OpenAI API and combine that with some
unique tool that you're providing, and that could be a good business as well.
Yeah, once you get beyond the foundation models, you get to more traditional forms of
business differentiation, competitive differentiation, like competitive advantage, sources of
moats and things like that, which I think totally makes sense.
Yeah, and I think what's interesting about this is that it's evolving.
So things are moving so quickly.
And so every six months, the frontier moves forward.
And so the frontier players, they have to invest more capital,
but then they have much more powerful models that open up even bigger markets.
But then the open-source, one-year-back frontier, that's also moving forward.
Yeah, the markets that that can address are getting bigger and bigger.
And so I think every year that goes by, we're going to have this much larger market
that can be addressed by the technology and all the products that are built on top of it.
So, yeah, that sort of brings me to another topic which is related to market structure, which is incumbents versus startups.
In the seat that we're in, we hope the startups always win.
But in the last cycle, and maybe just from a B2B lens here, like in the last cycle, SaaS and cloud, there were a bunch of things that made it really difficult for the incumbents to actually innovate.
There was a business model innovation and new talent and technology required, which opened the door pretty widely to startups.
There's a take out there now on AI, which is, this time is different, and the incumbents are the real winners, right?
Because the technology is available by simple API, you can plug it right in, and they have distribution, so they should be the winners.
And if you just sum up Microsoft and Google's business apps and all these things, it's probably somewhere between $10 and $20 billion of revenue over the next one to two years.
I'm curious if you have a take on that, if that's consistent with how you see it or if you see it differently.
Yeah, I think it's going to vary.
So definitely the incumbents, they're going to have access
to the technology, and they're going to have distribution.
And so that's a big advantage that they have.
I think the opportunities for new players in this wave
are more in the cases where the kind of product
you want to build around this technology
is somehow fundamentally different than what was built before.
And so, as an example, the hallucination problem,
that's in some ways a good thing for startups
because a lot of the existing products out there
have zero tolerance for anything that's going to have a risk
of producing something wrong.
And so you can see this with, I think,
with Perplexity getting share from Google right now,
Google can't just go and put something on all their search results
where it has a few percent chance of being wrong.
That would be a huge problem for them.
Perplexity, that can just be the expectation
when you're using that product,
that it's almost always right, even though there's a small chance that it's wrong.
I think that same thing is actually going to play out in a lot of other cases where the products
you build around this, they need some kind of fault tolerance, and there needs to be a user
expectation that everything is not perfect.
And the cost advantage can be so great for this, right?
If you take a highly paid person like a lawyer and you run the work through an LLM, which costs
cents versus a thousand bucks an hour, maybe you just should have a really high fault tolerance
and you just have to double-check a lot of the work,
and that's just a different workflow.
It's a way of engaging, right?
Yeah, and so you have these entrenched companies
that maybe have a very strong brand
of never making a mistake or never messing up
or always being reliable.
And a startup can just come in and say,
okay, well, this is going to cost a tenth or a hundredth the price,
but it's going to have a small chance of getting things wrong.
And a lot of people would prefer that,
but it's a real problem for the incumbent
because they can't compromise their brand.
Yeah, that's a great point.
I guess just to close it out, I'm sure a big part of the audience here is founders who are building at probably an earlier stage than you.
What advice do you have for people building in AI?
I think what I would do if I was starting a new company right now is just spend a ton of time playing with the models and playing with integrating them with different things.
There's so many different inputs you can give to the models.
You can make scrapers that ingest data from anywhere.
You can get data from the user's local screen.
You can get data from voice.
And there's just such a huge space of needs people have
and such a huge space of different inputs you can combine
to try to address those needs.
I think it's very hard to just think top down
about where there's demand in the market.
I think experimentation is really the way to go to generate ideas
and to set up a startup that's going to be able to build something really valuable.
Yeah, have a place in the world,
for sure. Awesome. Well, thanks for being here.
Yeah. I appreciate it. This was fun.
And thanks for sharing the time.
Yeah.
If you like this episode, if you made it this far, help us grow the show.
Share it with a friend, or if you're feeling really ambitious, you can leave us a review at
ratethispodcast.com slash a16z. You know, candidly, producing a podcast can sometimes feel
like you're just talking into a void. And so if you did like this episode, if you liked any of our
episodes, please let us know. We'll see you next time.