a16z Podcast - Where Does Consumer AI Stand at the End of 2025?
Episode Date: December 29, 2025

As 2025 comes to a close, consumer AI is entering a new phase. A small number of products now dominate everyday use, multimodal models have unlocked entirely new creative workflows, and the big labs have pushed aggressively into consumer experiences. At the same time, it is becoming clearer which ideas actually changed user behavior and which ones did not.

In this episode, a16z consumer investors Anish Acharya, Olivia Moore, Justine Moore, and Bryan Kim look back at the biggest product and model shifts of 2025 and then look ahead to what 2026 may bring. They discuss why consumer AI appears to be trending toward winner-take-most, how subtle product design choices can matter more than raw model quality, and why templates, multimodality, and distribution are shaping the next wave of consumer products.

Where do startups still have room to win? How will the role of the big labs continue to change? And what will it actually take for consumer AI apps to break out at scale in 2026?

Resources:
Follow Anish: https://x.com/illscience
Follow Olivia: https://x.com/omooretweets
Follow Justine: https://x.com/venturetwins
Follow Bryan: https://x.com/kirbyman01

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed.
For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
For most of the year, fewer than 10% of ChatGPT users even visited another one of the big LLM providers.
When you open Gemini, it has a pop-up that says, we got Nano Banana.
Would you like to do something with it?
A little pane where you have to type something.
I don't know what to do.
These are product nuances that I think make people actually take the first step.
The models have gotten to the level of quality that you can build a real scalable app on top of them.
And so the hope is 2026 will be a huge year for consumer builders.
As 2025 comes to a close, consumer AI is starting to look very different than it did at the beginning of the year.
A small number of products now dominate everyday usage.
New multimodal models have gone viral and the big labs have pushed harder than ever into consumer experiences.
To take stock of the year, the a16z team, Anish Acharya, Olivia Moore, Justine Moore, and Bryan Kim
break down what actually worked in 2025 and what didn't.
They discuss which model launches and interfaces changed user behavior,
why small product details matter more than raw model quality
and whether the consumer AI market is trending toward winner-take-most.
The conversation also looks ahead to 2026, where there is still room for startups,
how templates and multi-modality are reshaping creation,
and why this may finally be the moment when scalable consumer AI apps break out.
Today we're talking about who won consumer AI in 2025.
This was arguably the year that we saw the big model providers,
OpenAI and Google, most out of everyone, make a major push of their own into consumer,
both in terms of new models they release, but also in terms of new products, features,
and interfaces that target the mainstream user.
You might wonder, why does it matter who is in the lead here?
There are some early signs that the general LLM assistant space might be trending towards
winner take all, or at least winner take most.
So only 9% of consumers are paying for more than one out of the group of ChatGPT,
Gemini, Claude, and Cursor. And for most of the year, fewer than 10% of ChatGPT users even visited
another one of the big LLM providers like Gemini. If we had to call it now, ChatGPT is currently
in the lead by far at 800 to 900 million weekly active users. Gemini's at an estimated
35% of their scale on web and about 40% on mobile. And everyone else significantly trails this.
So Claude, Grok, Perplexity are all at about 8 to 10% of
the usage. But, especially in the last three to six months, things are changing very quickly.
With the launch of new viral models like Nano Banana, Gemini is now growing desktop users
155% year-over-year, which is actually accelerating even as they reach more scale, which is pretty
crazy to see. And ChatGPT is only growing 23% year-over-year. And we're starting to see players
like Anthropic almost specialize within consumer, owning different verticals like the hyper-technical
user. So today we've brought together the a16z consumer team to recap what we saw this year
from the big model companies in consumer and also to predict what might be ahead of us in
2026. Cool. Well, thank you, Olivia. It's been a super fun year. If we kind of wind the timeline
back to last January, maybe we should start with what we saw: launches, products, what worked,
what didn't. So Justine, tell us what you saw this year. OpenAI and Google, what are you paying
attention to? What have you changed your mind on? Yeah, those two in particular had a ton of consumer
launches, like Olivia mentioned. From a model perspective, I would argue their most viral models this
year, at least among consumers, were in image and video. So for OpenAI, it was the GPT-4o
image model, the Ghibli moment, which is crazy that that was this year.
It feels like it was years ago. And then Sora, obviously, Sora 2. And then for Google, it's Veo 3
and Veo 3.1. And then Nano Banana and Nano Banana Pro in image models, which went
insanely viral, probably comparable to, if not beyond, the Ghibli moment for OpenAI.
I think in terms of the product layer, what we saw was OpenAI tended to keep more things
in the ChatGPT interface, so like Pulse, group chats, shopping, research, tasks, all of these
features launched inside ChatGPT as the core. The exception there is obviously Sora as a
standalone video app, whereas Google tended to launch more things as standalone products. So they did
ship a lot through like Google AI Studio and Google Labs and Gemini and the plethora of Google
surfaces there are to launch a product. But they would also ship things as standalone websites
that you could go to and visit, which basically allowed for a more custom interface for each
type of product, not just the kind of chat entry, chat exit or image video exit.
So Justine, I have a question for you on that. It felt like 18 months ago we were talking
about Midjourney, and most of the multimodal models were defined by aesthetics and realism.
Is that still true? What changed this year?
Yeah, I think there's definitely different styles still.
And I think Midjourney, when you talk to people really deep in image and video,
it still kind of stands apart for this, like, aesthetic sensibility that a lot of the models
don't have if you don't know how to prompt for it.
But I would say this year in particular, we made a lot more strides on realism
and also on reasoning within both image and video, like all of the little details
that make an image or a video actually seem real.
For example, if you have a person walking and talking, the people in the cars in the background, if they're on a street, should be moving in the correct direction, like they shouldn't be morphing and looking strange.
In image, we were able to have multiple input images and text and sort of reason across all of those uploads to create like a cohesive design or something like that, which was not something we saw happening last year, for sure.
Yeah, I remember when we were excited about having a letter show up correctly in images.
and now we have insane infographics.
Yes.
You can just put in an amazing YouTube video
and say, give me an image that explains it.
That's incredibly different.
Nano Banana Pro can even generate, like, market maps.
Like, I had it make a market map of the space.
And it either has, or will go do, the web research
within the image model, which is crazy,
to get the correct list of companies
and then pull their web logos, which is insane.
I know.
There's one benchmark left that the reasoning image models have not cracked.
I tested GPT Image 1.5 yesterday.
They sometimes struggle with both reasoning and multi-step reasoning.
So what I've been testing is you upload a picture of a monopoly board and you say,
remove the names of all the properties and replace them with names of AI labs and startups.
And GPT Image 1.5 is actually the closest, but it's very hard for them to do all of those steps.
Remove it, come up with the new names, put all of the new names in the correct places,
make sure there are no overlaps, or one thing mentioned three times and another
big player you never mentioned. So there's still some room to go on the image evals.
It's interesting that, especially with the image model from ChatGPT, you can actually
see persistence, like it carries a character over into multiple image generations, the same
style. Yeah. And I thought that was like, oh, this is actually very interesting for
storyboarding. Totally. Makes you want to generate more. Yeah. You know, for me, it felt like the most
underhyped aspect of nanobanana was the integration with search because it feels like there's realism
which is physics and sort of other things that feel like uncanny valley,
there is reasoning, which is apply modifications that are adherent to what the user asked for.
But then there's also sort of accuracy.
And for me, a good example of this is product photography.
If you say, hey, generate a photo of this album cover or a historically accurate photo of this moment in time,
you have to actually have the search integration.
And that was sort of non-intuitive, but it is actually very useful.
Totally.
Yeah.
It's kind of like the Veo 3 moment, when I don't think it was intuitive
to people that video would be cracked necessarily by bringing audio together with video in the same place.
And that ended up being the thing that made AI video go viral.
Like since Veo 3, and now Sora maybe dominates, but like since Veo 3, my social feeds have been like full of really realistic.
I counted.
About one-fifth of my feeds are AI-generated.
Amazing, yeah.
What do you guys do?
There's so many launches this year, and many of them went well, like VEO and Nano.
What do you think is underhyped or products that you think didn't get enough attention?
Brian?
It's a good question.
I think Pulse of the world
is probably still underhyped,
and we're talking about OpenAI, Google,
which to me fall under productivity category.
So if you actually think about,
if you go to App Store today,
five out of top 10,
productivity apps are all Google.
It's insane.
And ChatGPT is number one.
So we're talking about a productivity category
where it helps you do things.
And I feel like a lot of people
are trying this from a different angle,
like how do I actually ingest your data
or your schedule, your email
to make it more helpful and give more proactive
notifications to you?
I think a lot of people are working on it.
Given the frequency of people using ChatGPT,
which I think is what 25 times a week,
pretty good, pretty good,
three to four times a day.
It feels like it's a really good position
to actually give you proactive nudges
and summary and help your life in general.
So I feel like the Everything app
was always this myth in the Western world.
I think Open AI is trying to move in that direction
where it's ingesting enough.
People are going there enough
to start giving really useful, proactive nudges.
And I think that's a space that I'm excited about.
It's interesting.
But are you a DAU?
I am not a DAU.
Of Pulse?
Well, not of Pulse.
Similarly, I tried Pulse for a while
and have kind of largely turned off of it.
But I would agree with you that I feel like Pulse
and a couple other examples
that Open AI launched this year
are kind of new primitives or ideas that feel underhyped,
but the execution is a little off.
I think it's the execution.
The usage is off.
Another example that I would give,
which is similarly like personal contacts
would be their connectors.
So now you can, and you can do this on Claude as well,
you can connect your calendar, your email, your documents.
And so hypothetically, you could say to ChatGPT,
you know, read all of my memos over the past six months
and, like, summarize what's most interesting,
least interesting.
I think when that works, it's really exciting.
I have found it to be a little bit unreliable so far,
but I think as the models get better,
they have a real chance to kind of own the pro-sumer workspace
if they get that right.
Prosumer is the perfect category because we talk about it sometimes,
but 99% of people don't run their lives on a calendar.
Yeah.
We do.
Right.
So that's when I'm thinking about the actual average frequency
of using ChatGPT.
And look, if it's 24 times a week,
that's a pretty good place to start.
Yeah.
Olivia, I feel like you're the ultimate power user.
What are you still using? What's your stack?
It's a great question.
From all of the larger model companies, actually, I would have to say the thing that I'm still using the most
and was maybe the most impressed by this year was the Perplexity Comet Browser.
And I wasn't, and am not, using Perplexity as my core general LLM assistant.
I use ChatGPT and Claude much more.
But I think they really executed on it in a first class way in terms of both the agentic model within the browser,
but also perhaps more importantly, all of the workflows
that you can set up that allow you to basically run the same task over and over,
either at a preset time or when you trigger it on a certain webpage. So that to me was a
really exciting launch. And if you look at the data, like the spike at launch and the sustained
traffic for Comet was actually much higher than for ChatGPT's own browser launch, Atlas,
which is kind of crazy given how much more distribution ChatGPT has than Perplexity.
But I think they also launched an email assistant this year. Perplexity did. And they made a couple
acquisitions of really strong agentic startups.
And so what I would love to see from them next year is like more of these
dedicated prosumer interfaces.
I feel like that would be an awesome direction for them to kind of double down in.
They do feel like the startup that has the biggest breadth of ambition.
Yeah.
You know, alongside the labs and sort of big tech.
Like it's very, very impressive just the number of things they've shipped this year.
Yes, definitely.
One thing I wanted to ask you, Justine, is Gemini feels like
it's having a real moment because of all the image and video models.
Do you think it can overtake ChatGPT?
Is there truly that much demand for these types of models?
I think, yeah.
So what I've seen basically is there is always nearly infinite demand
for like the best in class image or video model
because then you have a mix of tons of different people
seeing it and wanting to use it.
You have, like if you're using it professionally,
if you're marketing or an entertainment or storyboarding or whatever,
you always want to be using what's at the forefront of the field.
and so you're totally fine to go somewhere other than ChatGPT and Sora to get access to Veo.
Even if you're an everyday consumer, so many new viral trends are created around new capabilities of the best in class image and video models.
And so that ends up driving users into different products that they may have never tried before.
Like you might be downloading the Gemini app or accidentally ending up on Google AI Studio, which I know they're trying to make more for developers, to use Nano Banana Pro, which a
lot of users, I think, experienced in the past couple of months. Yeah. The interesting thing about
Gemini, to me, is, like, hypothetically, they benefit from the massive Google distribution
advantage. Like, if you look at Android, Gemini is at, like, 50% of ChatGPT scale on mobile,
whereas on iOS, it's, like, 17%. So, like, clearly something is working there. They launched
a little Gemini widget within Chrome recently that encourages you to use it. They're launching it
within Google Docs and Gmail and other things.
Yeah.
But I think that most, the average person is still just using one AI product.
And ChatGPT is like the Kleenex of AI.
Like it is the brand that AI has become.
Exactly.
Yes, yes.
And so I think Gemini still has a pretty big hurdle to overcome just in terms of that.
Yeah.
But if they keep doing what they're doing on these amazing viral consumer creative tool launches
and model launches, like they could get there next year.
I'm thinking about this, it's really interesting when you look at Gemini, which is everywhere.
Yeah.
Yeah.
But yet nowhere to some extent, right?
You don't like, you know, when you look at the actual usage, people still think of the Kleenex.
Yep.
And they go to chat GPT.
But the interesting thing also is on the product sensibility.
So this morning I had, like, two panes open, OpenAI's image model and Google's Gemini, and basically used the image functionality.
When you open Gemini, it's a blank screen.
It has a pop-up that says, we got Nano Banana.
Would you like to do something with it?
And a little pane where you have to type something.
Yeah.
I don't know what to do.
Yeah.
ChatGPT, you go in, and it has a very TikTok-like style of, like, here are trending themes
that you might want to generate, and you click on, I want a sketch pen or whatever.
And then it just, like, uses one other picture, and it creates something amazing.
And then it says, would you like a holiday card?
Would you like a blah, blah, blah, blah, blah.
These are product nuances that I think make people actually take the first step to generate it.
And then once you have it, you have character consistency.
So you keep going.
Right.
So that's interesting in that I think the OpenAI and ChatGPT team has proven that there is deeper product sensibility.
Yeah.
But then this is a funny thing.
Maybe a little non-kosher thing to say.
But, you know, I worked at Snap.
So when you look at Meta versus Snap, famously, Evan Spiegel was chief product
officer of Meta.
Yeah.
Yeah.
I wonder if there's a world
where the ChatGPT team
that innovates on the product front
again and again,
Google with distribution,
looks at them like,
that's cool.
Let's just,
let's just take it and integrate it
and keep going
and actually play that game.
The interesting thing there is
that the Images pane
just launched yesterday,
when we're filming this.
In ChatGPT.
In ChatGPT.
Brand new.
And it took them,
like they had image models
for years and it took them
that long to come up
with a separate,
relatively basic interface for generating images. I would almost argue the application-layer
companies, like the Kreas, the Hedras, the Higgsfields of the world, popularized that
template format and did it first and did it better.
And then there are ChatGPT's product people, and then maybe Gemini copies ChatGPT.
So it's a supply chain of product ideas.
Exactly.
Always.
Well, maybe going in a slightly different direction, BK, I'm very curious for your take on
OpenAI's social features, because it does feel like that's something that you
really have to get product execution right on, but also network design. You know, there's
some efforts around Sora 2. We should talk about that. There's also group chats within
ChatGPT. You're our sort of social guy, or have been historically. Bullish, bearish? Where's your
head at? Bearish for now. Okay. And the reason to me is twofold. Historically, we look at sort of,
it's funny. I look at products based on what I call inception theory. You go, like, three to four
layers down to figure out what the one liner is, which is like, I want my dad to love me.
And so, you know, when they think about products,
is that for you or for the world?
That's for me as well as for a lot of people.
Okay, yes, yes.
And so I look at some of the, you know, products like ChatGPT,
ultimately when you peel the onion five times,
I think essentially it's, help me be better.
Like, help me get that information.
Help me be more productive.
Help me be more efficient.
And then when I think about social features,
meta, Instagram, what have you,
or even TikTok, the two layers of information
or the emotion that it is trying to address, to me,
is, for TikTok, entertain me.
I want my clown, entertain me.
And then the other layer is,
I'm lonely, I want to be seen,
I want to connect with people.
And to me,
these are two pretty different parallels
in the product direction.
And OpenAI's product is incredible.
It's magic.
It's amazing.
But it's ultimately a see-me or help me category,
which essentially is why
it's the number one in productivity category.
Yeah.
Now we're trying to take this and shove it into people's lives and say, guys, connect, connect better, and, like, actually feel like you're being seen.
And even the group chat function, which I love, it'll be so good to plan a trip and, like, actually have that common pane.
But I think it still stops at probably an N count of two to three people planning something in a help-me way.
Yeah.
Versus, oh, I feel like I understand Anish so much better
because I've sort of done that.
So largely over time, I think that's the reason for that division.
But that is not to say you can build a separate product
that completely sort of addresses that.
I think Sora, so we talked about group chat,
Sora 2 was the other big social push this year
from all the consumer AI giants.
Which was basically like a TikTok feed,
but all AI generated video and you can make cameos at your friends.
The cameos was a very good bet.
Yeah.
It was a strong bet.
Yeah.
But I think what we've seen is like in the retention data and how we're seeing it used is it was massively successful as a creator tool.
Like now my feed is probably two-thirds AI slop, if not more.
And over 50% of it is now Sora, whereas before it was, like, all Veo and some Kling.
But it has not been as successful as like a social app.
Consumption.
Yeah.
People are like a small number of creators are creating a ton of content.
and then bringing it out to like TikTok, Instagram, X, Reddit,
where it's going massively viral.
But it doesn't seem like there's as much consumption happening in the app.
Yeah.
As much remixing, as much commenting, especially as there was initially.
You know, in a funny way, the way I think about it is like Sora's competition or analogy isn't actually TikTok.
Mm-hmm.
It's actually a funny way.
Is it almost like a creative tool?
Yes.
Interesting.
Yes.
Olivia, what's your read?
Well, I was going to say, like, I think it goes back to your earlier point, which is, like, the kind of emotion that drives social apps is both these, like, positive and negative feelings of, like, oh, I'm publishing this thing of myself that's kind of sensitive, or that I want people to think is this or that or this other thing.
And so that's kind of what drives participation on the app.
Yeah.
The status game.
Yeah.
It's exactly the status game.
And when it's AI generated content and people know it's not real, like,
a real representation of you as a human being,
the status game is lost a little bit.
Absolutely lost.
Yeah.
I think the status game comes then with,
can you prompt something very cool?
Yeah.
But that's a different type of product.
And that's why I think it goes viral on like Twitter
and all these other existing platforms.
I mean,
my sort of counterpoint or bull case for Sora 2
is actually think the status game was about humor,
more than anything else.
And humor is the intersection of knowing how to prompt
and sort of being culturally aware.
Yeah.
So I think that if they iterated on that,
that's like a direction that nobody has captured before.
Yeah.
Yes, but if you can export those videos,
isn't it true that, like, TikTok with Sora videos on it
is strictly better than Sora?
We talked about it so much where like the ultimate social product
is where consumption and creation both live together
and that the output of it is not native to other platforms
like TikTok, like YouTube shorts.
So what do folks think of the challengers?
You know, we're talking about, sort of,
I mean, Meta, it's crazy to talk about Meta as a challenger.
I guess in this context, they are.
But I think Claude, Perplexity, Grok are the more obvious names for challengers, Olivia.
What's your take?
I love Claude.
I talk to Claude all the time.
Claude has somewhat replaced ChatGPT for me as my general LLM.
I think Claude is opinionated in an interesting way.
I also love Claude because I'm willing to invest time into building out AI workflows.
I think Claude actually launched a lot of really powerful things this year around, like, artifacts and skills,
where you can essentially set up tasks or workflows to run over time.
I do think the reason it hasn't hit the mainstream yet
is even the way they built those things
is geared towards a technical user or an engineer.
I think they tried to make skills as easy as they could to create,
and it still was not anywhere near easy enough for the mainstream consumer.
Another example would be they were actually the first of the big players
to kind of launch file creation, slide deck
creation, editing, and they branded it as, like, file generation and analysis or something.
And it was like a toggle feature within a setting bar of a setting bar or something.
So like very few people used it.
And yet to me, it's still the best product across all of them and doing that kind of complex
work.
So I love Claude, but I think if they want to be a true mainstream consumer product, they need
to dumb it down even more in terms of accessibility.
There was that survey you found recently of U.S. teens.
Yes. Yeah, I think it was three times more U.S. teens have ever used Character AI than have used Claude.
Yeah.
So I think that shows that like the P.O.S. It's a pretty broad thing.
Yeah.
Claude is beloved amongst tech people, but outside of tech people, I think they are maybe struggling to pick up relevance.
It is interesting, though.
Like if you look at the sort of aesthetics, the product design, the craft, like three things that Anthropic did were MCP, skills, and
command line interface,
Claude Code.
Like those are three
surprising bets,
especially Claude code.
I would have said,
a command line interface, really?
Like, is this the way
that people want to interact with?
I thought you were going to talk
about taking over air mail
and the thinking cap.
Yeah, that too.
They're a consumer.
So three you think of like,
where is a thinking cap?
Yeah.
But it's sort of very high-minded design.
Yeah.
It's sort of like versus mass market
or maybe that's apologetic
on their behalf.
But I think it is that it's opinionated
and it's great.
Yeah.
Yeah. I do need to hear Justine's take on both Meta and Grok, as I feel like they both had fascinating years in different ways.
Yes.
So Meta hired all those researchers. I think their strongest models are actually not consumer-facing models. It's their SAM 3 series.
So, like, the Segment Anything models for video, for image, and for audio. And basically, for video, for example, you can upload a video and you can describe in natural language, like, find the kid in the red T-shirt,
and it will find and track that person across the entire video,
even if they're coming in and out of the frame,
it will let you apply effects like blurring them out
or removing them or whatever.
And you can imagine a similar thing with audio with different stems
and then with image with different objects in an image.
I think we're going to see next year,
hopefully some incredible consumer products built on top of those models,
but today they're more of a playground for developers
than they are consumer-facing products.
Given just like the DNA of the company.
Yeah.
So the one good consumer feature I think they've launched this year with AI is the Instagram AI translations, where when you're uploading a reel now, you can opt in to enable translations, and it will clone your voice, translate it into five different languages, apply the translation with your voice, and then re-dub with the lip sync.
Wow.
And so it basically makes it seem like you're a native speaker in whatever language.
So I would love to see more of that stuff come to the Meta products.
Grok, I think, has had, so Grok had a crazy year with, like, the companions,
with all of the LLM progress and the coding progress.
I think their image and video progress is probably the steepest slope I've seen of any of the companies.
Like, it was probably like six months ago they didn't even like have image and video models.
And they're shipping so fast to launch new features.
Like it was initially just image to video.
They added text to video.
They added audio.
Then they added lip sync with speech.
Then they added 15 second videos.
Like they're just not slowing down the speed of progress.
And Elon has made a bunch of statements about, like, wanting more interactive video-game-type content out of Grok and wanting movies out of Grok by the end of next year.
So let's hope it continues to go at that pace.
Do you feel like it's a pincer movement where, like, on one hand there's, like, a very infrastructural model layer of,
like, let's top the LMArena charts.
And then the other one is, like, let's go, Ani.
It's like a little bit of like a bifurcated move.
Right.
Like the entertainment and, like, the smarts.
Absolutely.
But entertainment in a way that like we're talking about, you know,
Anthropic and ChatGPT as the general population.
But you just said Character AI is way more popular.
Yes.
So then like how do we think about that?
And I think, you know, it's a very interesting strategy in my mind.
And Grok, like, in the Imagine
video app. Since pretty early on, they've had templates of popular things. Like you're standing
somewhere and suddenly like a thing drops, a rope drops from the ceiling and you grab onto it
and it like swings you out of the scene. Like some really good ones that go viral regularly on
TikTok and other places. Yeah. Really, really interesting. Well, so maybe switching gears from
'25 to '26. What are some of your predictions for next year? What do you think we'll see in
hardware, models, commerce we haven't spoken about yet,
or how do we think it'll play out?
I think, I know this is, we're talking about consumer,
but one of the things that's been really maybe underrated for me
about ChatGPT that we might see more of next year
is they've really made a push into the enterprise,
both with the traditional enterprise licenses
and then working with specific companies
to even like train models for them.
And I think about the fact that most consumers only use one general LLM product. ChatGPT enterprise usage, they published a big study, is up something like 8 or 9x year over year.
Yeah.
And so if we're entering a world now where people have to use ChatGPT for their company or as part of their work, that could really translate into consumer usage.
Yeah.
Or maybe they become the workspace, with the connectors and some of the other things they're investing in, and someone else owns the consumer use cases.
Yeah.
I think to that end, we have to talk about their push into apps. And I think whether or not that works is going to be kind of the defining question for them next year. Yeah. And I think we've all discussed the importance of the apps SDK and the apps directory, as they're calling it. And it's going to be a huge new channel for consumers.
I think what's less discussed is that it's hyper-relevant to enterprise. Where ChatGPT shines is where it's able to operate across a number of tools for one workflow. And if you think about the number of things you do in your business day-to-day that operate across many tools, it's most of those things.
Yeah.
So I think that will have very interesting implications for the SaaS ecosystem, and it's a part
of the app store we're not talking about as much.
Yeah.
Yeah.
Maybe less of a prediction, but thinking through 2025, we talked about all the big moves from the big labs. From the startup point of view, I think one of the biggest trends we've seen is app generation. And I think there is a real world where we see the big labs, with their distribution and the frequency of usage of people coming in, start saying, look, maybe there is a common type of product and app that we could actually help you generate within the confines of the big labs' products.
Yeah, yeah.
I think that's one of the interesting things, which, you know, again goes back to the supply chain of ideas and research. Maybe that's one thing. And again, nothing groundbreaking, but as we know, Ghibli broke the internet. My cousin, who knows nothing about tech, sent me a Ghibli photo.
Well, let's not send this to your cousin then.
And I think that goes to show that templates matter.
Yeah.
That style matters.
Yeah.
And you think of a video like that, and it's pretty freaking good.
Yeah.
And it's possible that we're already at a point where it's not necessarily just about the capability of the big labs' models, but about the stylistic things, the templates. Think of TikTok. The core capability is largely still the same: music, trend, dance, go. Except the trend and format keep changing, which keeps it extremely fresh. So I feel like there's a real world where product teams, or what have you, can start thinking about ways to really build video-first products on top of these lab models. And I think the cost will come down enough for people to try it out. And I'm excited to see that.
Yeah, I think what I'm most excited about is sort of along those lines: basically everything becoming multimodal. I call it anything in, anything out. Initially, especially with these image and video models, you put in a text prompt and you got an image out or a video out. You couldn't really do much with it. Now we've started to see this with the image edit models, with Nano Banana and Flux and the new OpenAI model, where you can put an image in and get another image out. You can put an image in with a text prompt giving a direction, or put in an image with a template and another reference image, and get another image out. What happens when you can put a video in and get images out that are related to it, or the next iteration of the video? Or you can put a video in, with a text prompt about what you want to edit, and get the edited video out.
From my conversations with the labs, a lot of them are trying to combine all these largely separate efforts they've had across text reasoning and intelligence, the LLM space, and image and video. Like, what if we can merge those all into a mega model that can take in a lot of different forms of content and produce much more?
I think it's also going to have huge implications for, like, design.
Yeah.
Because if you think about it, a lot of design is combining images with text, with video,
with different elements in kind of interesting ways.
Yeah.
I guess if I think about a macro-level prediction, I think it's actually going to be more of the same, in that when we talk about what all of the labs have launched in consumer, they've done a great job with models, and they've done a great job with incremental things that improve the core experience of using a ChatGPT or Gemini.
Yeah.
In my opinion, we've gone through dozens of things that they've launched or tried as new consumer products or new consumer interfaces: group chats, Pulse, Atlas, Sora. Google has had a long tail: Stitch, Gems, Opal, Doppl, tons.
Yeah.
None of those are really working.
And I think it's because it's not the core competency of these companies anymore to build opinionated standalone consumer UI.
Out of all of those, I think the product that's working the most is NotebookLM. And that's one of maybe 20 things that Google has tried or experimented with. So I think it's actually very positive for consumer startups, in that the models will keep getting better, which the startups can use. And, you know, they'll keep making ChatGPT better and better. But I don't necessarily think that ChatGPT verticalizes into all of these other amazing use cases or products. And there's still room for startups to be building there.
I have a yes-and to that, where, absolutely. However, when the input and the output is text, which is where the ChatGPTs and Geminis of the world shine the most, no matter how deep you go, no matter how specific you think your text output is going to be, given the frequency of usage of the main big-lab products, I think it's going to be really hard to stitch that usage away if your product is mainly text in and text out. Yeah. So I do think you have to be creative around what the angle is that lets you steal people away.
You know, I love that you used the word opinionated, because I think for labs, certainly for big tech, and perhaps increasingly for labs, the priorities always get set in their promo committee. And if you're a PM, and it's always the sort of mid-career PMs, and I've been one of these, the incentives are always to get promoted, and the way to get promoted is to build something safe that extends a core metric and a core feature. So building opinionated products is a very risky way to manage your career, because they're probably not going to work, they're probably going to have a bunch of implications for legal and compliance, and the CEO might yell at you.
So I just think they are so structured to do incremental things.
The more founders do opinionated things, the more advantaged they are.
I think, honestly, the big thing we haven't discussed here is compute. The labs have this inherent tension: there's a limited amount of compute, and they either spend it on training models or they spend it on inference. And even within inference, there's a split between the entertainment Ghibli use cases and the coding intelligence use cases. I think xAI is probably the only model company that is not bottlenecked on compute, from my understanding, whereas the others have to make really serious and significant calls, like, if we release Nano Banana and it goes super viral, it may slow down the next big LLM we're trying to push forward. Whereas startups who focus on the app layer don't have that problem, because there's no tension there.
Absolutely. Yeah.
We've talked about this before.
I also think there are categories in which being multi-model just allows you to deliver a better proposition to the customer.
And the labs in big tech are always going to be sort of definitionally first-party model only.
So I think as all the models get better, perhaps 80% of what you need can come from a single model. But for the power users, and so much of AI is a power user story. You know, people always said that power users are just power users, and I think that was true in a pre-AI world. But now the depth of value and the depth of monetization is so much higher that maybe all of AI is actually a power user story. And everyone else is just traffic.
Yes.
Yeah.
Which is why we're also seeing, for the first time ever, consumer products with more than 100% revenue retention.
Yes.
And that's separating the good from the great, from the exceptional in consumer AI.
And to be clear, how that happens is they charge for usage, often in addition to a subscription. So you can use beyond whatever your quota is for the month, given your subscription, and pay more. It's either upgrading the tier or actually buying tokens or more usage.
Yeah.
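To make the mechanics concrete, here is a toy net revenue retention calculation under usage-based pricing. All of the numbers, prices, and the cohort itself are invented for illustration; none of them come from the episode:

```python
# Toy net revenue retention (NRR) calculation for a subscription product
# with usage-based overages. All numbers are hypothetical.

def net_revenue_retention(start_revenue, end_revenue):
    """NRR = revenue from the same cohort a year later / starting revenue."""
    return end_revenue / start_revenue

# A cohort of 100 subscribers at $20/month = $2,000/month at the start.
start = 100 * 20

# A year later: 20 users churned, but 30 of the remaining 80 upgraded
# tiers or bought extra usage beyond their monthly quota.
remaining = 80 * 20          # base subscriptions: $1,600
overages = 30 * 25           # usage beyond quota: $750
end = remaining + overages   # $2,350

nrr = net_revenue_retention(start, end)
print(f"NRR: {nrr:.1%}")     # above 100% despite losing a fifth of the cohort
```

The point is that overage revenue from power users can more than offset churn, which is how a consumer cohort ends the year generating more revenue than it started with.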
That's what differentiates it. Like, you know, if you told me pre-AI that we'd see a consumer company with 100-plus percent revenue retention, that doesn't make any sense. That doesn't compute.
Yeah, no pun intended. Exactly, exactly.
Well, guys, okay, maybe let's talk about specific recommendations. Like, after this pod, what are the products people should download, or the features, or the models? What should folks be using today?
I guess on the multimodal point, I think one
really under-hyped product that people should check out, not because they'll use it every day,
but because it shows sort of what is possible when you combine like an agent with image with text
is Pomelli. So this is the Google Labs product where you put in the URL of your business, and it has an agent go to the website, pull all of the product and brand photos, summarize
what it thinks your brand's aesthetic is, what it stands for, what kind of customers it's
targeting, and then it will generate three different ad campaigns for you. And it will generate
not only the text, but it will generate like the Instagram posts. It will generate the flyer.
It will generate the photo of your product in whatever setting it thinks it should be, based on your customer. Very cool product. It would be hard for it to become a giant standalone product within Google, I think, but it shows sort of the future of what happens if we
combine agents with generation models that have sort of really deep understanding of context
that an image model or a video model normally wouldn't have.
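As a back-of-the-napkin sketch of that kind of scrape-summarize-generate pipeline, here is a stubbed-out version in plain Python. This is not Pomelli's implementation; the site data, function names, and campaign format are all invented, and the web crawl and model calls are replaced with stubs:

```python
# Toy sketch of an "agent reads your site, then generates campaigns" flow.
# A real product would crawl the URL and call image/text models; here the
# site content is a hardcoded dict and generation is plain string-building.

from collections import Counter

def summarize_brand(site):
    """Guess the brand aesthetic from the most common style tag on its photos."""
    tags = Counter(tag for photo in site["photos"] for tag in photo["tags"])
    aesthetic = tags.most_common(1)[0][0]
    return {"name": site["name"], "aesthetic": aesthetic,
            "products": [p["title"] for p in site["products"]]}

def generate_campaigns(brand, n=3):
    """Produce n campaign stubs: ad copy plus the asset format to render."""
    formats = ["instagram_post", "flyer", "product_photo"]
    return [{"format": formats[i % len(formats)],
             "copy": f"{brand['name']}: {brand['aesthetic']} looks, "
                     f"featuring {brand['products'][i % len(brand['products'])]}"}
            for i in range(n)]

# Stub for what the agent would scrape from the business URL.
site = {
    "name": "Driftwood Coffee",
    "photos": [{"tags": ["rustic", "warm"]}, {"tags": ["rustic"]}],
    "products": [{"title": "house blend"}, {"title": "cold brew"}],
}

brand = summarize_brand(site)
campaigns = generate_campaigns(brand)
print(brand["aesthetic"], len(campaigns))
```

A real version would swap the stubs for a crawler plus image and text model calls, but the shape of the flow, scrape, then summarize, then generate per-format assets, stays the same.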
Startup products, though.
Do you have a favorite startup product?
Oh, so startup.
In creative tool?
Yes.
In creative tool?
Yes.
I mean, we're investors in Krea, so this is biased. But I think they've really done an exceptional job of being the best place to use every model, or every quality model, across every modality, and also building more of the interface on top of these models.
Like, I now prefer to use Nano Banana Pro in Krea, because Krea allows you to save elements, which are essentially characters or styles or objects, that you can @-tag to re-prompt, versus having to drag the same image reference into Nano Banana over and over again.
That's a good one.
I suppose it falls under the startup category, again shilling companies. But, you know, the one that I use the most is actually ElevenLabs Reader. And the reason is, we've seen an explosion in podcasts, and there's, I think, a reason for that, right? People are a lot more on the go. Our reading capacity, I think, is going down over time. So, you know, let's not fight that reality. Let's embrace it. Okay, so then let's actually take written material, translate it into listening, and do that. I used to be a power user of tools like Pocket. You know, I didn't have time to read everything that I wanted to read. And it's a saving behavior, right? You're going around saving all the things you eventually want to consume. What I do now is similar: I go get all the things I want to read, and I either PDF it or put it in ElevenLabs Reader. And once in a while, when I'm going for a walk and have three or four minutes, I'll do 1.5x or 2x speed and just listen to one of these and get the gist of it. So I think that's been a good way to use a little bit of time as a sort of semi-normal person.
Yeah. Well, first of all, I love this question, because I am strongly opinionated that by far the best way to get up to speed on AI is just to try a ton of products. You get opinionated really quickly. Yeah. Justine and I, actually, for the whole month of December, are publishing on Twitter one new consumer product a day for people to check out. So that's one way. I'll name three others that I think are super relevant or interesting, that people can plug into their workflows. One would be Gamma, for slide deck generation. You can go text prompt to slide deck. You can go document to slide deck. I use it for everything. Also, the slides are flexible sizes, so you're no longer editing every little pixel in your Google Slides to get it to fit onto one, which is great.
Granola, for note-taking. You might not have any meetings over the holidays, but in the new year. And it just gets better and better the more meetings you have on it, because it has the context of what you talked about before. And then lastly, I'm still going to plug: try the Comet browser. If you want to try kind of an AI-native workspace, I think that's one of the most accessible ones to start with.
I mean, for me, I've spent my whole year obsessed with coding and AI coding tools. It's just been so tremendously fun. By the way, Bryan, I would take the other side of your argument that the big labs or big tech will win app generation. I think they just lack the focus. Products like Opal have been, you know, released with a whimper, and they're one model only.
So I didn't say they will win it.
I think we will see them doing it.
Yes, yes, I think that's true.
But I think for the pure consumer side, of course, Wabi is really fun and really capable. And I think they are creating the right sort of constraints on app generation, so that you can get a really satisfying, functional result. And I think so far there's been a lot of overpromising in app generation, which has discouraged the early users.
I also think if you haven't tried, you know, GPT-5.2 in Codex or in Cursor, it's worth trying. Even for non-technical people, it's just amazing.
I think being technical is almost a constraint, because you have a pre-existing idea of what these models can do, and they can do a lot more. And I'm hearing increasingly about people doing knowledge work, and writing essays in Cursor instead of just writing code.
Wow.
Just one thing I'm going to do at the year end. It's to plug a popular trend I've seen on TikTok, where people ask, what is the most unhinged thing I said this year?
Okay.
And it actually does a review of all the things that you said. But I think, similarly, it'll be a good one to do at the year end: tell me how to live a better life next year.
Yeah.
Give me actual, unvarnished opinions and some directions.
And I think it'll be helpful.
I love that idea
I'm going for a worse life next year
Fantastic
Let's go full degen, guys.
Any closing thoughts?
That was a lot of fun.
I mean,
The obvious one is we are very actively
investing in consumer companies
And I genuinely, and I think a lot of people say this, but I genuinely believe that the models have gotten to the level of quality where you can build a real, scalable app on top of them. Wabi is a great example of this. And so the hope is 2026 will be a huge year for consumer builders, not just for consumers being consumers of a product.
Yes.
Well, thank you all for a super fun year in consumer and AI.
We'll be back with more next year.
And Merry Christmas.
This is a wrap.
Happy holidays.
Happy holidays.
Thanks for listening to this episode of the a16z Podcast.
If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X at a16z and subscribe to our Substack at a16z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund.
Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
Thank you.
