a16z Podcast - Apple’s Big Reveals, OpenAI’s Multi-Step Models, and Firefly Does Video
Episode Date: September 14, 2024

This week in consumer tech: Apple's big reveals, OpenAI's multi-step reasoning, and Adobe Firefly's video model. Olivia Moore and Justine Moore, Partners on the a16z Consumer team, break down the latest announcements and how these product launches will shape the tech ecosystem, transform AI-powered experiences, and impact startups competing for attention in a fast-moving market.

Resources:
Find Justine on Twitter: https://x.com/venturetwins
Find Olivia on Twitter: https://x.com/omooretweets

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
It's not what a human curator probably would do, but the weirdness is almost a feature instead of a bug.
Apple feels like the only company that could actually make that happen since they have kind of permissioning and access across all the apps.
Is our model that's trained on this very limited data, like truly the best model for making that image or video?
Putting my investor hat on, this is not a case where I'm necessarily worried for startups.
This was a big week in technology.
For one, Apple held its annual flagship event in Cupertino,
releasing new hardware models of the iPhone, AirPods, and watch,
plus a big step forward in Apple Intelligence.
We also had new models drop,
including OpenAI's o1 models, focused on multi-step reasoning,
and Adobe's sneak peek of their new Firefly video model.
So in today's episode, we break down all this and more,
including new AI features from Spotify, getting 70% week-over-week feature retention,
IQ versus EQ benchmarks, and of course what all of this signals about what's to come.
This episode was recorded on Thursday, September 12th, in our A16Z San Francisco office
with deal partners Justine and Olivia Moore.
Now, if you do like this kind of episode, where we break down the latest and greatest in technology,
let us know by sharing the episode and tagging a16z.
Let's get started.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com slash disclosures.
Earlier this week on Monday, Apple unveiled a bunch of new products, iPhone 16, AirPods 4,
the Watch Series 10, but also Apple Intelligence, right? So a bunch of hardware, but it seems like
they're upgrading the software stack. Olivia, you got early access to this? Is this right?
Yes. It was actually open to anyone, but you had to go through a bunch of steps. So with some of
these new operating systems, they offer early access to what they call developers. But if you
snoop around on your phone a little bit, anyone is able to download it and get access a few
weeks ahead of the rest of the crowd. I think access to iOS 18 should be launching September 16th
for anyone with an iPhone 15 or 16. Okay. So you've been playing around with this for two weeks or so,
Apple Intelligence. Yeah. What did you find? What are the new features? Maybe just break them down
first. And then where did you find the really inspiring, wow, this might change the world kind of stuff?
And then where maybe do you think things are falling short? So Apple Intelligence,
is a set of new AI-powered features that are native to the iOS operating system.
So they're already built in to the Apple apps and to the phone itself.
And we've heard that they might charge for it down the line, but at least right now,
it's free to everyone who has iOS 18, which is really exciting.
A lot of the features, honestly, were things that have been available via standalone apps
that you had to download and maybe that you had to pay for for a couple years.
One of the classic examples is what they call visual intelligence, which is
actually just uploading a photo of a dog and getting a report on what dog breed it possibly is
or uploading a photo of a plant, which is nice to not have to have a separate app, but is it really
game-changing?
Probably not.
Similarly, they have a new photo editor where you can intelligently identify a person in the
background of your nice selfie and remove them with one click.
Is it helpful?
Yes.
Is it that much better than all of the tools available via Adobe and other products to do
it in a more intense way, I would say probably not. I think we both felt the probably most game-changing
and exciting features were actually around media search because everyone has hundreds, if not,
in our case, thousands or tens of thousands of photos and videos saved on our phone. And I think iOS has
slowly been getting smarter about trying to auto-label people or locations. But this is a really
big step forward. So now in natural language, you can search either by a person's name, by a description,
by a location, by an object, and it searches not only photos, but also within videos, which is awesome.
Our mom texted me earlier this week asking, do you have that video from a couple years ago in Maine
of the seal popping out as we were kayaking? And I was able to use the new Apple Intelligence
to search and find it. Yes, it was like 30 seconds into a two-minute video.
Exactly. And it found it. Yeah, and you would never see it in the thumbnail of the video because
it's just like the ocean. Right. But I probably would have just
ignored her text before because I would have known I could never find it. And this time I could search
like kayak Maine seal and it pulled up the video right away. That's amazing. I mean, I like to joke
how many gigabytes, terabytes, petabytes of just food photos exist somewhere on all of our
phones that we'll never see again. So it sounds like that was maybe the most useful feature that
you found. Yeah. And it also lets you create things with that media in a new way. Everyone remembers
those sometimes creepy, sometimes charming memories videos where it tries to remind you of what
you were doing, you know, one day, two years ago or some trip that you took. Now you can actually
in natural language input like put together a movie of all my fun times at the beach and it does
that. So I think that's something that Apple is uniquely positioned to do since they're the one
that kind of owns all of that media. That was pretty exciting. The one maybe disappointment for
me or the thing that feels yet to come is like a true AI native upgrade to Siri.
It feels like, especially since the launch of, for example, ChatGPT voice, Siri feels so
antiquated as kind of a voice companion.
And they made some improvements to Siri, like she's better able to maybe understand if you
interrupt yourself in the middle of a question.
But it still is not action-oriented.
I would love to be able to say like Siri, call an Uber to this address.
and have her do that.
And Apple feels like the only company that could actually make that happen
since they have kind of permissioning and access across all the apps.
I mean, I feel like Siri, it almost only comes up when you don't want Siri to show up.
But there were a few other updates, right?
Notification summaries, maybe the kind of upgrades that you would only see on your phone
because that's where you have that route access.
Do you guys have any thoughts around you're talking about this maybe like next evolution of AI native software on these devices?
like when will we get that from Apple and maybe any thoughts around what that might look like?
It does feel like this release was a big step forward in a couple ways.
They could have done things like the object removal or the better search and photos for a long time and they had just not done it.
And I think a lot of people felt like they're just choosing not to do it.
They're just choosing to let the third party app ecosystem do these sorts of things.
But I think these releases show that they are serious about doing some of these things internally and having them be native to the
iOS ecosystem. I personally will be really curious to see if they do more around the original
purpose of the phone, which is calls. Historically, there's very little you can do around a call,
right? Like you make a call, maybe you can merge calls, maybe you can add someone, but like they
have not wanted to touch it much. And I think the new call recordings and transcripts feature
is pretty crazy because historically they've made it impossible to do a voice recording or
anything like that when you're making a call. And now they're actually enabling, hey, we're
going to have this AI co-pilot that makes a little noise at the beginning that sits and listens to your
calls. And eventually you could see them saying, hey, there was a takeaway from this call, which
was schedule a meeting. And like in your Apple calendar, it should be able to do that and send
the invite to someone. Yeah. So now if you launch a call, you can press a button on the top left that
says record. It does play like a little two to three second chime that the other person will hear that
says this call is being recorded. But then once the call is completed, it saves a transcript
down in your Apple notes as well as some takeaways. I think the quality is okay, but not great.
I would imagine they'll improve it over time. But again, there's so many people and so many apps now
that have a lot of users and make a lot of money from things that seem small, like older
people who have doctors appointments over the phone and they need to record and transcribe the calls
for their relatives. That actually does a shocking amount of volume.
And so I think this new update is showing them maybe pushing towards some of those use cases and allowing consumers to do them more easily.
Yeah, and maybe just to round this out, this idea to both of your points, there are so many third-party developers who have historically created these apps.
I mean, you mentioned the ability to detect a plant.
If you go on AppMagic or data.ai, like you can see there are pretty massive apps where that's their single use case, but it works.
People need it.
What happens to those companies? What does it signal in terms of Apple's decision to capitalize on that and maybe less so have this open ecosystem for third parties?
Yeah, I think it kind of raises an interesting question about what is the point of a consumer product often?
Like, is it just a utility, in which case Apple might be able to replace it or does it become a social product or a community?
Say there's a standalone plant app and then there's the Apple plant identifier.
You might stay on the standalone plant app if you have uploaded all of these photos of plants that you want to store there and now you have friends around the country who like similar types of plants and they upload and comment like it becomes like a Strava for plants type thing, which sounds ridiculous, but there's massive communities of these like vertical social networks.
And so I think there's still like huge opportunity for independent consumer developers.
the question is just like, how do you move beyond being a pure utility to keeping people on the product for other things that they can't get from like an Apple?
Yeah, I agree. I think putting my investor hat on, this is not a case where I'm necessarily worried for startups.
I think what Apple showed through this also is they're going to build probably the simplest most utility oriented version of these features and they're not going to do an extremely complex build out with lots of workflows and lots of customization.
So, yes, they might kill some of these standalone apps that are being run as cash flow generating side projects, but I don't see them as much of a real risk to some of the deeper venture-backed companies that are maybe more ambitious in the product scope.
If we think about utility, right, one of the ways that you drive utility is through a better model.
So maybe we should talk about some of the new models that came out this week. We'll pivot to OpenAI first.
As of today, as we're recording, they released their new o1 models, which are focused on multi-step
reasoning instead of just answering directly.
In fact, I think the model even says, like, I thought about this for 32 seconds.
The article they released said that the model performed similarly to PhD students on challenging
benchmark tasks in physics, chemistry, and biology, and that it excels in math and coding.
They even said that on a qualifying exam for the International Mathematics Olympiad, GPT-4o, so the previous model,
correctly solved only 13% of problems, while the reasoning model that they just released scored
83%. So it's a huge difference. And this is something actually like a lot of researchers have
been talking about, right? This next step. I guess opening thoughts on the model and maybe what it
signifies, what you're seeing. Yeah, it's a huge evolution. And it's been very hyped for a long
time. So I think people are really excited to see it come out and perform well. I think even beyond the
like really complex like physics, biology, chemistry stuff, we could see the older models
struggle even with basic reasoning. So we saw this with the whole, like, how many
R's are in strawberry fiasco. And I think what that essentially comes from is these models are
like next token predictors. And so they're not necessarily like thinking logically about,
oh, I'm predicting that this should be the answer. But like if I actually think deeply about
the next step, should I check that there are this many R's in strawberry? Is there like another
database I can search? What would a human do to verify and validate whatever solution they came up
with to a problem. And I think over the past, I don't know, year, year and a half or so,
researchers had found that you could do that decently well yourself through prompting,
like when you asked a question saying think about this step by step and explain your reasoning
and the models would get to different answers on like basic questions than they would get
if you just ask the question. And so I think it's really powerful to bring that into the models
themselves. So they're like self-reflective instead of requiring the user to know how to prompt
chain-of-thought reasoning. I agree. I think it's really exciting actually for categories like
consumer ed tech, where a huge share, in some months like a majority, of ChatGPT usage is by people
with a .edu email address, or people have been using it to generate essays. But historically it's been
pretty limited to writing history, those kinds of things because as you said, these models are just
famously bad at math and science and other subjects that require maybe deeper and more complex
reasoning. And so a lot of the products we've seen there, because the models are limited,
have been the take a picture of my math homework and go find the answer online, which is,
you know, fine and good. And a lot of those companies will make a lot of money. But I think we have
an opportunity now to build deeper ed tech products that change how people actually learn
because the models are able to correctly reason through the steps and explain them to the user.
And when you use it today, you can see the steps in which it thought about something.
So by default, it doesn't show you all the steps.
But if you want or need to see the steps, like for a learning process, you can get them.
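For reference, here is a minimal sketch of the comparison described in this exchange: the same question asked directly, asked with an explicit "think step by step" prompt, and asked of a reasoning model that does the multi-step thinking on its own. It assumes the OpenAI Python SDK; the model names and prompt wording are illustrative placeholders, not a claim about how o1 works internally.

```python
# Sketch of the three approaches discussed above (assumes OPENAI_API_KEY is set).
from openai import OpenAI

client = OpenAI()

QUESTION = "How many r's are in the word 'strawberry'?"

def ask(model: str, prompt: str) -> str:
    # Single-turn chat completion; returns the model's text reply.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# 1) Direct question to a standard chat model (placeholder model name).
print(ask("gpt-4o", QUESTION))

# 2) Same model, explicitly prompted to reason step by step before answering.
print(ask("gpt-4o", QUESTION + " Think about this step by step and explain your reasoning before giving a final answer."))

# 3) A reasoning model that performs the multi-step thinking itself (placeholder name).
print(ask("o1-preview", QUESTION))
```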
I did test it out right before this with the classic consulting question,
how many golf balls can fit in a 747?
And?
And o1, the new model, got it completely correct in 23
seconds. I tested it on 4o, the old model. It was off by 2x, 3x. And it took longer to generate. So very
small sample size, but promising early results there. No, it's important. I think I saw you tweet
something about this recently, this ed tech angle or slant on this. A lot of people are up in
arms saying this technology is being used in classrooms. And I think you had a really interesting
take, which was like, okay, this is actually pushing us to force teachers to educate in a way
where you can't use this technology and you have to think and develop reasoning.
It's funny. I found a TikTok that was going viral that showed there's all these new Chrome extensions
for students where you can attach it to canvas or whatever system you're using to take a test
or do homework. And you just screenshot the question now directly. And it pulls up the answer
and tells you it's A, B, C, or D. And in some ways, it's like, okay, cheating. Do you really want
to pay for your kid to go to college to be doing that? But on the other hand, before all of these
models and these tools, most kids were still just Googling those questions and picking multiple
choice. And you could argue a multiple choice question for a lot of subjects is probably not actually
the best way to encourage learning or to encourage the type of learning that's actually going to make
them successful in life. Or to even assess true understanding. Like when someone does a multiple
choice answer, you have no idea if they guessed randomly, if they got to the right answer but had the
wrong process and they were lucky or if they actually knew what they were doing. Yeah. And I think the
calculator comparison has been made before in terms of AI's impact on learning. But similar to the
fact that now that we have calculators, it took a while, it took decades, but they teach kids math
differently and maybe focus on different things than they did pre-calculator when it was all by
hand. I'm hoping and thinking the same will happen with AI where eventually the quality of learning
is improved. And maybe because it's easier to cheat on the things that are not as helpful for true
understanding. Right. And I mean, if we think about this just came out today, is this a signal of
what's to come for all of the other models, or at least the large foundational models? Or do you
see some sort of separation in the way different companies approach their models and think about
how they think per se? It's a great question. I think we're starting to see a little bit of a
divergence between general intelligence and emotional intelligence. And so if you're building a model
that's generally intelligent and you maybe want it to have the right answers to these complex
problems, whether it's physics, math, logic, whatever. And I think folks like OpenAI or Anthropic
or Google are probably focused on having these strong general intelligence models. And so they'll all
probably implement similar things and are likely doing so now. And then there's sort of this newer
branch of companies, I would say, that are saying, hey, we don't actually want the model that's
the best in the world at solving math problems or coding, we are building a consumer app or we
are building an enterprise customer support agent or whatever, and we want one that feels like
talking to a human and is truly empathetic and can take on different personalities and is more
emotionally intelligent. And so I think we're reaching this really interesting branching off point
where you have probably most of the big labs focused on general intelligence and other companies
focused on emotional intelligence and the longer tail of those use cases.
It's so interesting. Do we have benchmarks for that? As in, there's obviously benchmarks for
the how does it do on math? And because we're not quite at the perfection in terms of utility,
that's what people are measuring. But have you guys seen any sort of? I haven't. So I feel like
for certain communities of consumers using it for like therapy or companionship or whatever,
if you go on the subreddits for those products or communities, you will find users that have created
their own really primitive benchmarks of like, hey, I took these 10 models and I asked them
all of these questions and here's how I scored them. But I don't think there's been like
emotional intelligence benchmarks at scale. A Redditor might create it. Yeah. I would not be
surprised. Yes. Maybe after listening to this, reach out definitely if people are building that.
I think that also relates to the idea that these models ultimately in themselves aren't products.
They're embedded in products.
I think, Olivia, you shared a Spotify Daylist.
Yeah.
Tweet about how that was just like a really great way for an incumbent because all of the
incumbents are now trying to embed these models in some way.
You said it was a really great case study of how to do that well.
Yeah.
So Spotify daylists, we both love this feature.
I'm going to bring out mine to see what it says.
This is a risky move.
It is risky.
It's high stakes.
I never share my Spotify Wrapped because basically it's just an embarrassment.
Well, but that's part of the emotional.
Gentle, wistful Thursday afternoon.
That's actually much better than it could have been for you.
Yeah.
I get a lot of screamo.
I say that lovingly.
Yeah.
So basically what Spotify's daylist does is it's a new feature in Spotify that kind of analyzes all of your past listening behavior.
And it curates a playlist based on the types of music, emotionally, genre-wise, mood-wise that you typically listen to at that time.
And it makes three a day, I think, or four a day by default.
Yes.
And that switches out every six or so
hours. Exactly. And the feature was very successful. So Spotify's CEO tweeted recently. I think it was
something like 70% of users are returning week over week, which is a really, really good retention,
especially since it's like not easy to get to within Spotify. Like you have to go to the search bar
and search day list. Mine is pinned now if you click it enough. That's fair. That's not surprising.
It's really fun. And I think why it works so well and so many other incumbents have just tried to tack on
a generalist AI feature, but this one is great because it utilizes existing data that it has
on you, executed in a way that doesn't feel invasive but instead feels delightful. And it's not just
like a fun one-off novelty, but actually the recommendations are quite good. So you will end up
listening to it fairly often. And that's why I think people come back week over week, as well as it
still has that novelty of like, it said something crazy about me. I'm going to screenshot on my
Instagram and make sure my friends know that this is how I'm feeling right now. Yeah, the daylists have
gone totally viral among Gen Z teens in particular. They're posting all over like TikTok and
Twitter like the crazy words in their daylists. I think what Spotify does is it takes the data,
it runs it through an LLM and asks what's sort of a fun description of this playlist. But since it's
not a human, the descriptions are often like Normcore, Black Cat, Frightened, like, panics Thursday
morning. And you're like, what does this even mean?
It resonates a little bit.
Yeah, but you're like your vibe is right.
Yes.
But I'm also confused in a way that will keep me coming back to see what the next day list is.
Yes.
And yeah, it's like inherently viral in a way that I've only seen on Wrapped, probably, for Spotify.
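As a rough illustration of the pipeline speculated about above (listening history in, a quirky daylist-style description out), here is a minimal sketch. The model name, prompt, and data format are assumptions for illustration, not Spotify's actual implementation.

```python
# Hedged sketch: ask an LLM for a playful daylist-style title from listening data.
from openai import OpenAI

client = OpenAI()

# Placeholder listening history; a real system would pull this from user data.
listening_history = [
    {"track": "Example Song A", "genre": "soft country", "time": "Thursday 09:10"},
    {"track": "Example Song B", "genre": "indie folk", "time": "Thursday 09:32"},
    {"track": "Example Song C", "genre": "bedroom pop", "time": "Thursday 10:05"},
]

prompt = (
    "Here is what a listener played recently, with genres and times:\n"
    + "\n".join(f"- {t['track']} ({t['genre']}, {t['time']})" for t in listening_history)
    + "\n\nWrite a short, playful daylist-style title (a few odd adjectives plus the day "
    "and time of day) and a one-sentence description of the playlist."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model would work
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```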
I would say another example of the good implementation of AI in a similarly both interesting but also viral way would be on Twitter, Grok, their new AI chatbot.
A lot of the read my tweets and roast my
account or draft a tweet with this person's tone based on all the tweets. Similarly, that's
taking the existing data they have on you and creating something that's like fun and shareable
and interesting and doesn't feel invasive because you're going and making the request versus
it like pushing something into your feed. Totally. Yeah. Yeah. And maybe the takeaway is this idea
that the best model doesn't necessarily equal the best product. Yeah. Yeah. I think you quote tweeted Nick
St. Pierre who said, remember DALL-E 3 when it came out and everyone was talking about how
the coherence is so good.
And then his point was how many people are still using this model anymore.
And the answer, I think, is not many.
Yeah, I think there's a couple angles to that.
So for daylists in particular, it's not the most accurate LLM description of what your music taste is.
Like, it's not what a human curator probably would do.
But the weirdness is almost a feature instead of a bug.
Yeah.
Like this is sort of an example of the emotional intelligence versus the general intelligence,
which is like it knows what the person wants and not like a dry,
oh, you listen to like soft country on Thursday mornings.
Yeah.
I think the other part is on the creative tool side we've seen, which is there's different
styles for different people, but also like how do products fit in your workflow?
How easy are they to use?
Are there social features?
Is there easy remixing?
Like all of the things that make consumer products grow and retain historically can drive
a worse model to do better than a quote unquote better model.
Yeah.
And I think it differs across modalities.
Again, Spotify is probably using an LLM to generate these.
And it's not the most complicated LLM in the world, right?
But it's good enough to generate interesting enough descriptions.
I would say for most text models and even most image models, the gap between, like, great open source model or great publicly available model and like best in class private model, there's a gap, but it's not like a gulf necessarily.
Yeah.
Versus in video and music and some of the other more
complex modalities, there is still a pretty big gulf between what the best companies have
privately and like what is maybe available via open source or others.
And so I think we've seen, at least if the text and image trend continues, that will
probably shrink over time across modalities.
And what that means, again, is it's not, does this team have the best researchers,
especially for consumer products, but does this team have the best understanding of the
workflows, the structure of the output, the integrations, the consumer behavior and
emotionality behind it that will allow them to build the best product, even if it's not
the best model, but the model is good enough to fulfill the use case. Totally. Right. How important
is it for these companies that are using stuff that's open source to fine tune it for their
own use case? Like, how important is it for them to modify the model itself versus just
being clever with retention hacks or product design, things like that.
I think it depends on the exact product and use case.
I mean, we've seen cases where people go totally viral by taking a base Flux or
Stable Diffusion or whatever and allowing people to upload 10 images of themselves and it trains
a LoRA of you and makes really cool images.
But the company didn't fine tune their own model first.
Like all the avatar apps that have made tens, if not in some cases hundreds, of millions
of dollars.
Like maybe there's a fine tune there.
but it's probably pretty basic.
Yes.
But then I think, so in the consumer side, usually the base models through prompting or designing
the product experience around it, you can get like pretty far.
I think in Enterprise, we're starting to see a little bit more need to fine-tune models.
I've talked to a bunch of companies, for example, doing product photography for furniture
or like big items where you don't have to do a big photo shoot.
You can just have AI generate the images.
And you might, for that, want to fine-tune the base
model on a giant data set of couches from every possible angle.
So it gets an understanding of like, how do you generate the back shot when you only
have the side or the front shot of the couch?
Yeah.
Because the bar is just so much higher there in terms of a usable output versus a consumer
product where in so many cases, the randomness is part of the fun.
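Training a LoRA like the avatar apps described above is a longer pipeline, but the core idea, a small personalized adapter applied on top of a frozen base model, looks roughly like this at generation time. This sketch assumes the Hugging Face diffusers library; the model ID, LoRA file path, and prompt are placeholders rather than any specific app's implementation.

```python
# Sketch: apply a subject-specific LoRA on top of a base text-to-image model.
import torch
from diffusers import AutoPipelineForText2Image

# Load a base model (placeholder model ID) in half precision on the GPU.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Apply the small adapter trained on roughly ten photos of one subject.
# The file path is a placeholder for whatever the training step produced.
pipe.load_lora_weights("path/to/subject_lora.safetensors")

# Generate a new image of that subject in a novel setting.
image = pipe("a portrait photo of the subject standing on a beach at sunset").images[0]
image.save("avatar.png")
```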
Well, on the note of models, let's talk about one more.
Adobe released their Firefly video model.
Firefly was released in March 2023, but that was text to image.
And so now they're releasing this video model.
They released a sneak peek on Wednesday.
They also said, interestingly enough, since March 2023, the community has now generated
12 billion, what they say, images and vectors, which is a ton.
And now they're, again, they're moving toward video, and they released a few generations
that were all created in under two minutes each. Thoughts?
Adobe is a super interesting case because how they describe their models is they only train on the safest, most responsibly licensed data and they don't train on user data.
And so I think historically they've been a little bit sort of hamstrung in terms of just the pure text to image or probably text to video quality because when you really limit the data set compared to all the other players in the space, the outputs typically aren't as high quality.
I will say where they've done super well is like how do you bring AI into products that people are
already using? I don't know if this is counted in the firefly numbers of generations. I guess it was,
but they've gotten really good at like within Photoshop, you can now do generative expand where
you got a photo that was portrait and wanted it to be landscape or whatever. And you can just drop the
photo in, hit the crop button, drag out the sides. And then firefly generates everything that should have been
around the original image, kind of like filling in the blanks.
And I've also seen even like viral TikTok trends of someone uploading a photo of themselves
standing somewhere and then using generative fill to kind of zoom out and see whatever
the AI thinks they were standing on, which I think is like reflective of the fact that
Adobe for the first time has made that available.
Like typically they've been desktop-based, pretty heavy in a positive way, like complex
products.
But with AI, they've now put Firefly
on the web for free. They have a mobile app in Adobe Express. They're really going after consumers
in a way that I think we haven't seen them do before. I will say like reading the blog post for
the new video model, it did seem very focused on professional video creators and how to embed
into their workflows. Like, okay, you have one shot. What's the natural next shot that comes out
of that and how do we help you generate that versus a pure consumer video generator? Yeah, which makes
sense, I think, because what has really resonated with them in image is, I would say, generative fill and
generative expand, which is sort of taking an existing asset and saying, you know, if this was
bigger, what would be included or I want to erase this thing, which they really shine in, honestly.
Like, I still use those features all the time. Yeah. Yeah. I know they've announced in the past that
they're also going to be bringing some other models, video models, into their products, like I think
Sora and Pika and others. And so I don't,
at least, see this as their attempt to, like, be the dominant killer all-in-one video model,
but maybe starting to integrate with some of their own tech.
They have a really interesting opportunity because they have so many users of saying,
okay, if we just want to have the best possible AI creative co-pilot,
is our model that's trained on this very limited data,
like truly the best model for making that image or video?
Or should we give users a choice between our
model and these like four other models that we basically offer within our ecosystem, which I think
if they do go that latter route, which they've sort of signaled they will, is a really interesting
distribution opportunity for startups because most startups have no way of reaching the hundreds
of millions of consumers at once that are using Adobe products. That's a great point. So I didn't
even realize this, but they've said that they likely want to bring in these other models. And they
can be the model that says they're creator first and make sure that they're only using certain rights,
but then they can integrate these other models and maintain their dominance with however many people have Adobe subscriptions.
Yes, exactly.
I think they've talked about that extensively for video, and I think they reiterated that with maybe Pika, specifically, with the most recent release,
but before they had talked about kind of Sora and other video models as well.
They're pretty interesting.
I think even for years, they've allowed outside companies and developers to build plugins on top of the Adobe suite.
And some of them seem like things that Adobe itself would want to build.
like, for example, a way to kind of productize your preset editing settings and allow anyone
else to use those.
You might think that Adobe could do that.
But if I were them, I would be thinking, hey, actually, we may not build the AI-native
version of Adobe ourselves, but we will become stickier as a product if we let others build
those AI-native tools and make them usable in Adobe versus sending those people that build
their own products and pull users away from the Adobe Suite. I think we still feel like there will
be one, if not several, standalone AI-native Adobes that come out of this era. But yeah, we'll see.
Well, to your point, it does feel like the model that was shown in their article was more based on,
like you said, people who come with existing content. Yeah. Yeah. You can uplevel it or chop it up
in some unique way, but not so much, as you said, AI Native. Let's start from nothing. Let's start
from a text prompt or something like that. Well, this has been great. Stuff is moving
so quickly. So we'll have to do this again in a few weeks when I'm sure there will be many more
models, many more sandbox announcements, all that. Yeah. Awesome. Thank you for having us.
Of course. Thank you. All right, you heard it here. If you'd like to see these kind of
timely episodes continue, you've got to let us know. Put in your vote by sharing this episode
or leaving us a review at ratethispodcast.com slash a16z. You can also always reach us
with future episode suggestions at podpitches at a16z.com.
Thanks so much for listening, and we will see you next time.