a16z Podcast - A Big Week in Tech: NotebookLM, OpenAI’s Speech API, & Custom Audio
Episode Date: October 8, 2024

Last week was another big week in technology. Google’s NotebookLM introduced its Audio Overview feature, enabling users to create customizable podcasts in over 35 languages. OpenAI followed with their real-time speech-to-speech API, making voice integration easier for developers, while Pika’s 1.5 model made waves in the AI world.

In this episode, we chat with the a16z Consumer team—Anish Acharya, Olivia Moore, and Bryan Kim—about the rise of voice technology, the latest AI breakthroughs, and what it takes to capture attention in 2024. Anish shares why he believes this could finally be the year of voice tech.

Resources:
Find Olivia on Twitter: https://x.com/omooretweets
Find Anish on Twitter: https://x.com/illscience
Find Bryan on Twitter: https://x.com/kirbyman01

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
There's elements of it that are almost similar to early ChatGPT.
Anyone who's now building a conversational voice product can have access to that level of conversational performance.
The way that the majority of people may experience AI for the first time is actually going to be via the phone call.
We're taking the oldest and most information dense of all of our mediums of communication and finally making it almost programmable.
Phone calls are kind of this API to the world.
Within a couple weeks of deploying their voice model, they'd had 3 million users do 20 million calls.
Last week was yet another big week in technology.
For one, NotebookLM, Google's latest sensation, has been making its way across the Twitterverse,
with its new audio overview feature.
The feature uses end-user-customizable RAG, which basically means that people can create
their own context window for generating surprisingly good podcasts, across
35 languages. And to add to the voice mix, OpenAI held their Developer Day and announced
their real-time speech-to-speech API, enabling any developer to add real-time speech
functionality to their own apps. Plus, they noted a whopping 3 million active developers on
the platform. Finally, we saw one video model company, Pika, break through the AI noise with their
1.5 model, giving us fodder to discuss what is really required to capture attention in 2024 and
beyond. Today, we discuss all that and more with a16z consumer partners Olivia Moore, Bryan
Kim, and general partner Anish Acharya. This was also recorded in two segments, one with
Olivia and another with all three partners, so you'll hear us pivot between the two. Plus,
Anish actually predicted that this would be the year of voice, despite it never historically
working as an interface. In fact, Microsoft CEO Satya Nadella even previously called the past
decade's generation of assistants, quote, dumb as a rock. Well, it certainly seems like
we're turning a corner. Let's get started. As a reminder, the content here is for informational
purposes only, should not be taken as legal, business, tax, or investment advice, or be used to
evaluate any investment or security, and is not directed at any investors or potential investors in
any a16z fund. Please note that a16z and its affiliates may also maintain investments in the
companies discussed in this podcast. For more details, including a link to our investments,
please see a16z.com/disclosures.
Another big week in tech. I think the biggest thing I've seen is NotebookLM. So just quick
recap for the audience. Google is kind of known for the side quest becoming main quest.
And this product actually has been around for a while. It originated in 2023,
but its new audio overview feature has been taking over Twitter with these AI-generated
podcast hosts, which are surprisingly good. And I'm saying that as a podcast host who has this
job. And so basically what people can do is they can drop in their own information in a context
window, and then it'll use that to spin up these podcasts. Olivia, you've actually tried these
out, right? Yeah. So I think it originated as something for researchers or academics. The
idea was that you would store all of your notes, all of your papers, all of your information
within this Google workspace. And then this new feature that they've added is these two
AI agents essentially that play the role of podcast hosts and they go back and forth talking
about the data, asking questions, getting into examples. The thing that's really interesting
to me about it going viral in the past week or so has been there's actually nothing that
feels incredibly new or even incredibly in some ways cutting edge about it.
Like it's not the OpenAI brand-new real-time model that cuts voice latency down to almost
nothing.
In fact, with NotebookLM, you have to wait three to five to sometimes 10 minutes for them
to generate the episode once you click the button.
I think what's really striking about it is the realism and the humanness of the voices
and then also how they interact with each other.
Yes.
Is crazy.
The words, the intonation, the interruptions.
Exactly.
They disagree with each other.
They interrupt each other.
Like, this is not just upload a script and get a read out.
It does feel like two human beings talking.
And to that point, the other kind of striking thing about it is it's not just repeating
or summarizing the points that you upload in whatever data sources.
They're actually answering and asking really interesting and deep questions.
They're making comparisons.
They're making analogies.
They're taking it a step deeper of almost, like, how would you teach someone
about this topic? I uploaded basically a bunch of true crime
court case filings. And it did a podcast about the case. And then it spent the last two minutes
diving into the ethics of why are we entertained by true crime? Should we be using this information
to create media? Things like that. So it's really kind of like a next level interpretation of the
content, I would say. Totally. I've seen so many examples of this. Someone uploaded just their credit
card statement and they were able to grill them on that. Even that, I don't think the grilling was
prompted per se. It was like, just talk about this, find something interesting within this.
Yeah. There has to be some sort of very creative LLM or something behind the scenes. One of the other
use cases I loved was someone uploaded their resume and their LinkedIn profile. And it made like
an eight-minute podcast describing them as this incredible legendary mythic figure and going over all the
high points of their careers. I really like that because I see some people using some of
the music LMs, and they're using them for, let's say, a really nice birthday song.
Yeah.
And so when you played with NotebookLM, was it the kind of thing where sometimes you're on,
let's say, DALL-E or Midjourney and you're like, oh, it's not quite what I want, and you're
just playing the AI slot machine.
Was it like that or was it first shot I'm getting exactly the kind of podcast I was hoping
for?
It's a little bit slot machine in that the output is different every time.
But I would say it's a lot more reliable in that almost every generation that I would do,
something would be interesting. It would be on topic. It would be usable. One example I did, I got
very into it. At first, I was sticking to uploading academic papers. I was like, I'm going to
use this for its intended purpose. And then one of my generations, I was like, the hosts, they sound
like they're flirting with each other, right? Yes. They have such good chemistry. And so I was like,
what would happen if I upload literally a one sentence document that's like, I think you guys are
in a secret relationship? And they went off on like a two-to-three-minute podcast that sounds,
I swear, like the meet-cute in a romantic comedy or something. It's incredibly emotionally compelling, I would say.
And so now my vision, I have to do like a full audio drama.
Then we have to animate that.
Exactly.
It'll be like the first fully AI avatar movie, using the voices inspired by the NotebookLM characters.
This one's about AI, but like AI in relationships.
Really?
Yeah.
Specifically AI that are like, uh,
hosting a show like us.
Interesting.
And Google's NotebookLM environment.
Oh, wow.
So, like...
Could we be secretly dating?
Exactly.
That's wild.
That's what the document asks.
Someone thinks we're giving away, like, secret love notes to each other through our banter.
Well, what was the end?
Did they agree?
I mean, you have to listen to it and get your take.
What if those AIs, you know, actually developed feelings for each other?
Like, real feelings.
Yeah, exactly.
So it's like, you're saying two lines of code could fall in love over a spreadsheet or something.
That's the idea. Yeah.
It's kind of wild.
But also kind of, I don't know.
I know, right?
Intriguing.
And so given that you have played around with it and that a lot of the feedback is really good and people are pleasantly surprised by this, what's your reaction?
Because like you said, there are products like this out there.
I mean, with AI, there are so many trends, as we've seen, like products that get really hot one week and then something more interesting comes along.
It could be me just being optimistic. It feels like there's something here. And I hate to make this
comparison, but there's elements of it that are almost similar to early ChatGPT. In that one, it's
really usable, even for people who aren't academics, people who don't know that much about
prompting. Anyone can upload a paper and kind of generate a podcast. The other thing that feels
ChatGPT-esque is like people are already pulling it, quote unquote, off-label. And maybe it's not
NotebookLM itself that becomes the winning product, we'll see. I think there's a lot Google could do
to extend this more. They could make it a mobile app. You could customize the voices. I could see it
being used for kid bedtime stories if they tweaked it a little bit. But I think something about
the format of personalized podcast or personalized audio is going to happen. Some of the experiences
or the podcast being generated are no doubt impressive, but also feel a little maybe gimmicky or like cool
once, but is this really something that you can see evolving into something practical, useful?
I, for one, can see it actually becoming a real product, because right now it's doing podcasts,
for example, but over time it may be easier to add avatars or videos as a backdrop of what they're
talking about. And that becomes basically a short-form YouTube video that is very personalized.
So one of the fun examples was, like, kids love Minecraft. I love Minecraft. When there's like a new
Bedrock edition that drops and there's like release notes that are pages and pages long,
kids rely on YouTube to figure out what's new, like what changed. If you drop the release
notes into NotebookLM and just say, tell me what's new, and tell it in a way that kids love.
And then it generates this 20-minute or 10-minute back and forth. So, can you believe this new
update? It allows this character to fly. But those are the types of things that actually become
really interesting in an everyday use case. It makes me want to have like a digital diary or something,
where you can upload it and then it gives you a podcast of how the last
month of your life has been. The innovation is less a new medium and more how they've
really unlocked something, to your point, around how to make any topic exciting, generate insights,
and make it something that you really want to listen to and spend time on, with potentially unlimited
outputs. I totally agree. It could be videos, it could be avatars. The interesting thing about that is
I'd always thought of it as: you can read something, watch something, or listen to something.
But maybe a nuance of listening is listening to it in conversation format.
I do think there's something really magical about that, just the two hosts going back and forth from the top.
Yes. There was a TikTok I saw yesterday that had two million likes, completely organic.
And it was a law school student who was studying for her midterm.
And she had uploaded like, I don't know, 60 pages of lecture notes.
And then it generated a 12-minute podcast for her to review before the exam.
If you even hear another human being telling a story around an example or a case,
it makes it so much easier to remember and understand.
You're basically opening up another lane, right?
Because you can read something as you're listening to something,
as you're immersed in something else in the real world.
Maybe another thing to talk about is OpenAI's Dev Day.
They released a lot, but maybe the highlight point was this real-time speech-to-speech API.
Anish, I know you've thought a lot about this idea that real-time really matters for speech
and that latency is almost like a metric that we're going to hear a lot more about.
Yeah, there's a threshold above which voice doesn't really work as a modality to interact with the technology,
because it doesn't feel real.
And below that threshold, which is maybe 300 or 400 milliseconds,
it sort of holds the illusion of talking to a person.
Phone calls are kind of this API to the world.
So it feels like the way that the majority of people may experience AI for the first time
is actually going to be via the phone call, and that is unlocked by this real-time technology.
And the crazy thing is like so much still runs on the phone system.
Absolutely.
Even if you just think about one vertical, like healthcare.
It's like taking incoming calls from patients. It's like doctors calling other doctors,
calling pharmacies, insurers. So if we think about how this becomes more real time,
are there different applications that you think are unlocked, like let's say music, education,
how does real-time voice maybe change some of those industries? Most of the ed tech products we've
seen so far have been like, if you attempt a homework problem maybe, then you take a screenshot,
you upload it to an AI product, it tells you if it's right or not. And now with real time,
both voice and some of the video and vision model stuff, it's actually
almost like having a tutor sitting next to you, going through it with you; even with some of the
vision stuff, you can show it your piece of paper. So now it's like AI is moving towards actually helping
you learn versus a lot of the use cases so far have been maybe cheating adjacent in like,
how do I just get to the answer? Now it's, what is your process? That's actually really interesting.
You're basically saying that in a way the lack of latency allows for people to integrate in that
moment. Yeah. And in the past maybe because there was more latency, people took shortcuts because
they didn't want to wait. Or if it's with you, it can say, here's the way you're doing it.
Here's another way, actually, that might make more intuitive sense for you to solve this math
problem. It's going along the journey of understanding with you versus just being kind of answers
or outcome-based, which a lot of the AI ed tech products have been historically.
What's really interesting about that is that there's a sort of design language or design
cues that are already built into conversations. So interrupting is one or the sort of uh-huh, uh-huh
is another. So that actually should unlock much more interesting product experiences as well
because, and of course the latency is necessary for that, but so is the ability to even
understand these parts of sort of, I don't know, they're not quite nonverbal, but they're not also
a part of the explicitly spoken language. A lot of products, especially in consumer, it's not just
about being optimal per se or perfect, right? In fact, what a lot of people are commenting on when you
see the NotebookLM examples is the filler words. It is the interrupting. It is the
imperfections that people are drawn to. This is a big step forward. And for anyone who tried
to use the ChatGPT voice mode before, essentially, you would press a button, you would say
something. The LLM would pause. It would interpret it. It would generate something to say back.
And then it would return an answer. But it would take at least a couple seconds. It was very buggy.
It was very glitchy. It was more like sending a voice memo, having someone hear it, and send
back a voice memo than having an actual live conversation with a human.
And so the new model is truly more like almost zero latency, full live conversation.
This has been available through ChatGPT's own advanced voice mode, which people are using
and loving.
But what happened this week at Developer Day was they're essentially making that available
via API for every other company.
So anyone who's now building a conversational voice product can have access to that level
of conversational performance, which is huge and really exciting because it brings a lot of
AI conversation products from barely workable, not really workable, to suddenly extremely good
and very human-like.
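For developers wondering what "available via API" looks like in practice, here is a minimal sketch of a Realtime API client as it was described around this Dev Day. The endpoint, model name, beta header, and event types (response.create, response.audio.delta, response.done) are our best reconstruction of the launch-era docs and may have changed since, so treat the whole thing as an illustrative assumption rather than a reference implementation.

```python
# Minimal sketch of an OpenAI Realtime API client (launch-era, October 2024).
# Endpoint, model name, headers, and event names are assumptions based on the
# docs at the time and may have changed; illustrative, not normative.
import asyncio
import json
import os

import websockets  # third-party: pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"

async def main() -> None:
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    async with websockets.connect(URL, extra_headers=headers) as ws:
        # Ask the model to produce a spoken response. A real voice agent
        # would instead stream microphone audio up via
        # input_audio_buffer.append events and let the server's voice
        # activity detection decide when to respond.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["audio", "text"],
                "instructions": "Greet the caller and ask how you can help.",
            },
        }))
        async for message in ws:
            event = json.loads(message)
            if event.get("type") == "response.audio.delta":
                # event["delta"] carries a base64-encoded PCM audio chunk;
                # a real client would decode and play it as it arrives.
                pass
            elif event.get("type") == "response.done":
                break

asyncio.run(main())
```

The design point, and where the latency win comes from, is that audio streams both directions over a single socket, rather than being chained through separate transcription, text-generation, and text-to-speech calls.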
Yeah, totally.
You had a tweet that said, this is a massive unlock for AI voice agents.
I'm expecting to see a lot more magical products in the next few months.
We're quickly leaving the era of latency and conversational experience being a blocker.
Can you speak just a little more to that in particular?
Yeah, absolutely.
Many of the AI voice products didn't really feel even SMB-caliber in terms of quality, let alone something an enterprise could actually deploy.
Yeah.
So now it is, I think, arguably enterprise quality in terms of real companies being able to replace humans on the phone with an AI on the phone.
We're seeing this for all sorts of use cases.
The most obvious is maybe having someone answer the phone at a pizza shop to take orders, or at a small business to book nail
appointments, all the way to things that are a lot more complicated, like even doing interviews,
first round interviews with AI, which is crazy to think about, but it's happening. Or even more
kind of vertical-specific use cases like freight brokers spend all day on the phone calling carriers,
calling truckers, and trying to find someone to haul a load in a certain price range. Now you could
do that with an AI that can call 100 carriers at once and negotiate the price instead of having a human
being do those calls sequentially all day. This new API, and other open-source attempts
at the same type of model, is really going to allow those products to shine. Yeah. And some of the
products you're describing are kind of voice first. Yeah. But many of the apps that we've had to
date are typically not voice first, perhaps because we actually haven't had the technology.
And so I want to refer to Anish's big idea at the end of 2023, which right now feels very accurate.
Yes, he was right on. Yeah. It said that voice-first apps will become
integral to our lives. And he basically says that despite voice specifically being the oldest and
most common form of human communication, it's never really worked as an interface for engaging
with technology. It feels like voice is one of the biggest things that's being unlocked by
AI. Voice is the easiest content to create and we're all creating audio all day, every day
essentially. But that content has never really been captured or used or automated in some ways.
Like now, even outside of real time, there are so many products that will listen to your meeting and will hear you say something and can automatically Slack someone with a follow-up, or use it to trigger a commit in GitHub or a task in Asana that your team has to follow up on.
And so I think what we're seeing now, both real time voice and non-real-time voice, is we're taking the oldest and most information dense of all of our mediums of communication and finally making it almost programmable and usable in a really powerful way.
The one thing I think we didn't quite predict when we were forecasting voice for this year was that it's really, really been working for B2B and not as much on consumer quite yet.
We're getting there.
I think on B2B, even thinking about the voice agents, a lot of businesses are struggling to find people to answer the phones for all sorts of roles.
They're struggling to retain them.
It's expensive.
And so it's super natural to plug in an AI that can perform at similar quality.
The consumer use cases are a little bit less obvious.
It's probably worked the most in companion so far.
So, again, ChatGPT advanced voice mode or Character AI.
I think they announced that within a couple weeks of deploying their voice model,
they'd had 3 million users do 20 million calls.
Really?
Yes.
Wow.
Because if you're spending hours each day anyway talking to this companion,
giving it a voice and making it more real, makes a lot of sense.
So that, to me, was like the shining star of voice so
far. OpenAI did highlight two other use cases on Developer Day in consumer. And both of them
were actually these kind of high touch expensive human services almost that are now democratized
with AI. So one of them is a company called Speak that does language learning. This might be
controversial. I love Duolingo as a product. I love it as a brand. But I think it's hard to use it
to learn a language end to end because it's just limited as an interface. So if you
really want to learn a language, you might have to pay someone, I don't know, $50 to $100 an hour
to tutor you.
And so the idea of Speak is you have an AI voice agent that is essentially your language tutor.
And it's much more accessible and affordable.
So that was one.
And then the second one they highlighted was, what if you had a nutritionist via AI?
So this is a product called Healthify, where you can send in photos and then talk live
about what you're eating every day in your diet.
So I think we'll see more of those use cases unlocked with better voice models.
Yeah, I need that. I've been saying for a while, I didn't think of it specific to voice, but that I need an AI to just call me out on my BS, to be like, these were your goals: you said you were going to run, and you didn't do the things that you said you were going to do. Kind of like the Duolingo-versus-Speak example. But in Anish's prediction, he also talks about how, yes, some of these big companies are going to integrate these APIs or integrate this technology, but Gmail is probably still going to look like Gmail. And so how do you think about that balance between the incumbents utilizing this technology and then what's going to sprout that's completely new?
It's really interesting and something that we watch really closely in consumer in particular
because you would think that the Googles, the Microsofts have all of your data.
They have all of your permissioning.
There's a lot that they could do.
I think what we've seen is they're structurally in some ways disadvantaged
in building towards this AI shift in a really native way.
One, it's like these are big companies now.
They have a lot of people.
They have a lot of competing priorities.
And then the second thing would be, in some ways, they would cannibalize their own products.
Like, our view has been Google is likely to maybe add AI to augment Gmail, but are they likely to create the AI native version of Gmail that you could only conceptualize in the past three to six months?
Probably not, just because, again, of how big of a company they are and the fact that they have so much riding on the continued success of the existing product.
A good example of this is actually Zoom transcriptions.
Are people using that? Yes. But there's also been a ton of products that are
independently successful in doing AI meeting notes. And those largely are building towards more
specific and opinionated workflows for different types of jobs or tasks. And it's just something
that Zoom is never going to do because they're such a broad-based platform. Talk about a completely
new platform. Like imagine Zoom, but it's asynchronous. Yes. Right? They're never going to build that
to your point. Exactly. Because they're inherently synchronous. Clearly, OpenAI is
investing in voice, right? And that's not necessarily a given, right? If you think about it, they also
do imagery; they haven't really talked about DALL-E in a while, right? They also do video. Sora came out
a little while ago. But there really seems to be this voice push despite them operating across
modalities. Is that a signal people should be paying attention to? I think so. I think we've seen
already almost, even though it's still so, so early, like eras of AI so far. Creative Tools was the
first era and still a massive era. And I think we saw a ton of investment in image generation,
video generation, music generation, much of which is still happening. Especially it feels like as
AI moves from pure consumer use cases into more kind of controllable, highly monetizable enterprise
use cases, it does feel like voice is kind of a unique unlock in that it's a real game changer
for companies in particular to be able to capture and utilize this audio data that they never
have before.
Maybe another thing worth talking about here from Dev Day is that they announced that they
have 3 million active developers in the ecosystem, and they tripled the number of active
apps in the last year.
Since you've been studying consumer for so long, maybe ground the audience in how much
quicker this is happening than, let's say, the app era when Apple released its App
Store. How long did it take for 3 million active developers to be building on it? And just how big
is that kind of number today? Yeah, that's a great question. I have no idea. As you were asking
the question, I was like, do I know that for the App Store? I'm like, I do not. Well, it took, I assume,
years. Three million developers. That's incredible. Like, my math was like, look, I don't know
the App Store number, but let's say each developer has the ability to, I don't know, maybe
reach out to hundreds or a thousand unique users. That's sort of how I think about basically the
reachability of what they're building.
I think the other question is, like, what is the revenue per developer in the app store?
And is that a proxy for AI?
Yeah, that's super interesting.
There's a data point that I think I put out where you look at, it's not necessarily the App Store
ones, but the SaaS, like historical SaaS companies versus Gen AI companies.
Oh, yeah.
And how the Gen AI companies are reaching a scale of revenue way faster than their SaaS
counterparts.
Cool.
Very interesting.
Yeah.
Yeah, I think a big part of that, though, is because Gen AI is so well set up for consumption revenue. And so many SaaS businesses are
just SaaS: you pay a fixed fee for the service monthly. And with a lot of these new
businesses, you're paying on a consumption basis. You're also pricing it as a subset of labor costs,
which are traditionally priced far higher than software costs. I think that's like a far more
compelling argument for why the revenue ramp is much faster, versus I think the reason
the report said was because the Gen AI companies require training costs up front, and therefore
their imperative to make money is higher than SaaS's. Which maybe, but we know the ones
that are making money aren't necessarily incurring a huge training cost up front. Much more
likely, they're replacing labor costs, or it's just so useful or so unique that the willingness
to pay is just higher.
For sure.
I mean, I might buy that argument in consumer, in that the willingness to pay of consumers is
way higher post-Gen AI than pre-Gen AI, so maybe.
But for SaaS, I mean, SaaS businesses have always existed to make money.
But the developer community, 3 million people actively developing on it today, based on how old this platform is.
Like that is incredible.
Yeah.
I also think I'm seeing so many people who wouldn't have previously called themselves a developer creating just really small apps, or maybe even using the API for themselves, in a way that, if we use the parallel of the App Store in the past, you weren't really creating an app for yourself back in the day.
The barrier to entry for that would just be too high and it just wasn't on many people's radars.
You know, the story of a lot of productivity and prosumer companies is enabling app creation.
Like, Notion is a big app platform.
Actually, people have created these daily habit tracker apps and a bunch of other things in the Notion app store.
Totally, yeah. Airtable, obviously there's products like Retool, but there's a lot of people who have had this latent demand to make apps, especially people that are non-technical, in a business context or a hobbyist context.
And I think the AI thing is really unlocking it.
Yeah.
The App Store example is a very good one because we're seeing this maybe fragmentation in a positive way of the types of developers that are building on Open AI models.
There's literally people who we talk to who are like, I'm never going to raise venture funding.
I am printing cash.
Basically, I'm making a million or $2 million a month off of these not-always-thin, sometimes very sophisticated kinds of products that target maybe a really specific use case.
So we see that, and that could be an OpenAI developer, but also we could see a developer
who's like, no, I'm going to build a $50 billion company utilizing or fine-tuning these models.
So similar to the app store, we saw a big range of people who are like, I'm just going to be a
solopreneur making an app to I'm going to build a generational business on top of the app store.
Maybe the difference to me here so far has been kind of like, as with everything in AI, the slope of the
curve or the speed of ramp. I don't think we often saw, especially in the early days of an app
store, solopreneurs making millions of dollars a month. That's something that has been very uniquely
enabled by AI. Yeah, and you see this overlapping with the code LLM space, right? You've got Cursor and
Replit and all of these tools that allow people who couldn't code before to become a developer.
Totally, yes. You don't have to be a developer or designer. There's so many skill sets now that you can
abstract away to AI as long as you have good taste and good ideas. That tooling did not exist
in the App Store era and now exists in the AI era. Well, maybe to that end, clearly there's a lot
of building happening. And we've talked about this before, but I'd love to talk about the playbook,
right? Because you're going to build something within AI. It's more competitive than ever to get
that attention. And so maybe one frame for us to talk about that against is Pika's launch of
1.5 this week. And I just saw so many meme videos. It was so viral. People squishing things,
inflating things, right? Taking a meme and distorting it. Exactly. It was actually really fun.
So in a pretty intuitive way, I understand why that kind of model went viral. But we are getting
to the point where, is there fatigue when someone releases a new model? I'd love for you to just maybe
break down what you might call like the anatomy of a successful launch in this world.
If you think about video as a category, when Sora first came out with their examples, minds were blown.
Yes.
Minds were blown.
And I think that became this like front of mind of, oh my God, you can create and generate videos.
Now, the interesting thing about video is that it's not all created equal, right?
There's character-centric video, and then you have more of a scene-generation video.
What is happening in the scene, the content density of the video, always mattered, right?
Slow-motion movement of a scene is video, but it's a lot less interesting. A cat walking around
a garden: interesting, but the cat's moving. Cool. What we're seeing now is these products are becoming
a lot more opinionated and a lot more specific, if you will. So we talked about Pika, but you also
have the likes of Viggle, where it's templatized what you can do, like the Lil Yachty dance
walkout scene. That's very opinionated. It's not any video. It's a very
specific movement and scene that you're putting yourself into. Pika is the same thing, where all these
templates that are going viral are: you take a specific object in the video and you're modulating
it, whether you're squishing it or blowing it up, like inflating it so it floats away. It's sort of unexpected.
It's unexpected what's happening in the video, right? It's not a cat walking, and oh, it's at point
A and might go to point B, how interesting. You don't expect the meme guy
looking at another woman to actually be squished in a picture.
You don't expect all these different meme characters to be blown up all of a sudden.
And I think that unexpectedness is sort of the next evolution of what's happening.
Yeah, I mean, one thing that was really interesting there
is there's a subset of things that people expect from video.
And with AI, it's not enough to just give people that.
Or maybe there is some subset, if you're creating a stock video company, that's one thing.
But in order to go viral,
in order to garner attention in this very busy world, you need some sort of unknown quantity.
An opinionated point of view on what that should be, right?
They could have easily said, oh, like, we want video to be longer.
Because that's hard.
That's really hard.
Like 30-second video with some consistency in the scenes are difficult things to do.
They could have done that.
But instead, the team decided, you know what, we're going to pick, like, objects in the scene and do weird stuff with it.
Do you think that's required now, to basically design around some sort of
viral element? I think if there has been a large shocking development in the underlying modality
(again, video, Sora-type), like, you do need some unexpected element of, again, opinion to garner attention.
Or the quality just needs to be an order of magnitude better, not just 20% better. If it's much better,
then I think you get attention, but that's the underlying tech stack evolution, which I think we'll
continue to see as well. So I wouldn't say the playbook, or the only way to do it,
is to come up with wacky, very attention-grabbing things.
There's, of course, the underlying technical evolution
that will continue to sort of push the boundary forward.
All right, that is all for today.
If you did make it this far, first of all, thank you.
We put a lot of thought into each of these episodes,
whether it's guests, the calendar Tetris,
the cycles with our amazing editor Tommy until the music is just right.
So if you like what we've put together,
consider dropping us a line at ratethispodcast.com/a16z. And let us know what your favorite episode is. It'll make my day,
and I'm sure Tommy's too. We'll catch you on the flip side.