a16z Podcast - Grok, Genie 3, GPT-5 & the Rise of Vibe Coding
Episode Date: August 13, 2025

a16z partners Olivia and Justine Moore unpack the latest in consumer AI, including:
- Grok's "Imagine" and its instant, social-first creative tools
- Google's Genie 3 and the future of 3D worlds
- GPT-5: what's new, what's missing, and why some want their old chatbot back
- AI-generated music from ElevenLabs
- Olivia's vibecoded Jensen Huang selfie app

Timecodes:
0:00 Introduction & This Week's Topics
0:24 Grok Imagine: Social AI Image & Video Generation
4:48 GPT-5 Release & GPT-4o Deprecation
5:36 Comparing GPT-5 and GPT-4o: Coding vs. Personality
9:13 AI for Mental Health: Illinois Law & Industry Impact
12:29 Genie 3: Interactive World Models from Google
16:53 ElevenLabs Music Model: Licensed AI Music Generation
19:16 Vibecoding: Consumer Experiments & Platform Evolution
24:14 The Future of Vibecoding & AI Tools
27:05 Conclusions

Resources:
Find Olivia on X: https://x.com/omooretweets
Find Justine on X: https://x.com/venturetwins
Read Anish and Justine's vibecoding post: https://a16z.com/specialized-app-gen-platforms/

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
It's sort of huge that you can click one button and get a video on your phone in less than a minute.
You can't just scrape data from the internet, like the record labels will come and sue you.
Yes.
It's sort of one of the first really truly social kind of forays into AI image and video generation.
Now when you post a photo on X, you can like long click and press and immediately turn it into a video.
The fact that someone who's completely non-technical can build something that a couple thousand people can use overnight is, like, amazing and so exciting.
None of the existing social platforms have leaned that heavily into AI creative content.
And a lot of the AI creative tools, I think, can and should and will sort of integrate more social.
Things in consumer AI are moving fast.
In this episode, Olivia Moore and Justine Moore, creators and partners at A16Z,
break down what's new and what's next across the consumer AI space.
You'll hear about the latest updates from Grok Imagine
and what makes it so different from other creative tools on the market.
They break down the release of Genie 3, Google's new 3D world model,
and why it might be the start of an entirely new kind of gaming and media format.
And of course, they discuss GPT-5,
not just what's new, but what's missing,
and why some users want their old chatbot friend back.
Along the way, we'll hear about AI-generated music
and Olivia's very own vibe-coded selfie app starring Jensen Huang.
Let's get into it.
Welcome back to this week in Consumer. I'm Justine. I'm Olivia. And we have a bunch of fun topics we want to cover this week, starting in the Creative Tools ecosystem with Grok Imagine. And then we're also going to talk about Genie 3 and the ElevenLabs music model. And then we'll cover GPT-5 and the deprecation of GPT-4o. And we'll cover our new vibe coding thesis. So this week, we are going to start with Grok, which has had a bunch of big updates over the last month
or so, I'd say. So obviously, Grok 4 came out. The Grok companions caused a huge stir, particularly Ani and Valentine. But I think more recently what's been really interesting is all of the image and video generation features on Grok Imagine. Yeah, so Grok released an image and video generation model called Imagine, which is offered standalone through the Grok app, and they're also bringing it to the web. And it's now embedded into the core X app as well, which is really
exciting. Yeah, I think that's one of the things that's really unique about it.
I would say it's not the most powerful kind of image or video generation model that exists.
Elon has tweeted a bunch about how they're training a much bigger model.
But I think what's really cool about it is it's sort of one of the first really truly social kind of forays into AI image and video generation.
And what you mean by it being integrated into the X app is like now when you post a photo on X, you can like long click and press and immediately turn it into a video animation in the Grok app. Or even if you see someone else's photo posted on X, you can turn it into a video or also edit the image with Grok, which is really exciting. Totally. I think one of the coolest things about Grok Imagine, to your point, it's not the most powerful model. It's not Veo 3. On video, I would say the audio generation is okay, but not great. But it's fast. So fast.
Which I think for a lot of people has been kind of the real barrier to doing image and video
generation more seriously is you put in a prompt, you press go, and then sometimes you're waiting,
30, 60, 90 seconds for a generation.
Yeah, often minutes, honestly, for a generation.
And Grok images are basically instant, and the videos are pretty fast as well.
And so I found myself iterating very frequently.
It's now become in less than a week, like my go-to tool for image generation on mobile.
Yep.
And even the video, I would say, is getting there, especially if they're training a better model.
Totally, yeah.
I think for many people, they're not professional creators.
And so they don't want to make an image in one place and then go and download it
and then port it into another website.
Because often, very few other tools are on mobile,
especially for video generation.
So I think it's sort of huge
that you kind of can click one button
and get a video on your phone in less than a minute.
That feels like a massive step forward
for consumer applications of AI creative tools.
And I think Elon and a bunch of folks
on the XAI team have been tweeting about this.
One of the big use cases is like animating memes
or animating old photos
or like things that you already have on your phone
because you can access the camera roll so quickly through the Grok mobile app.
Yeah.
It will also generate real people.
Yes.
Elon has been tweeting many Imagine-generated photos and videos of himself.
Yes.
Which I think is another big differentiator and something we've really only seen from Veo 3
and even then it's mostly characters versus celebrities.
Yes.
But it comes from Grok's kind of uncensored nature, which is pretty, I think, cool and
unlocks a whole bunch of new use cases.
Yeah, for sure.
And I think that allows the meme generation.
And even Veo 3, half the time I try to do an image of myself, it'll be like blocked due to a prominent person thing.
And I'm like, I'm not a – what do you mean?
I'm not a prominent person.
But, like, in that photo, I guess I look too much like some celebrity or prominent person and it decided to block it.
And I've never had that problem on Grok, which makes it just so fun and easy to play around with.
Yeah.
I'm excited to see where they take it.
It feels like we've seen Meta experiment a little bit with kind of AI within their core products.
They've done the AI characters you can talk to.
and then uploading a photo to get an avatar
where you can generate photos of yourself.
But none of it felt quite right, I would say,
in terms of baking into the core existing experience
on Instagram or Facebook.
And Grok feels a little bit different,
so I'm excited to see where they go with it.
Yeah, I'd say none of the existing social platforms
have leaned that heavily into AI creative content.
And a lot of the AI creative tools,
I think can and should and will sort of integrate more social.
But today, most of them have just done relatively basic feeds and liking, not really comments, not really like a following model.
And so I think this is going to be a super interesting proof point about what a more social AI creative tool looks like.
Great.
The other big model news of this week, which is not just big for consumer, but for pretty much all of AI, was the GPT-5 release.
Yes.
And the corresponding deprecation of GPT-4o, which I think ended up being even bigger news in consumer land specifically.
Yeah.
Yeah. This one was sort of fascinating because obviously it had been a while since OpenAI had had a major LLM release, since GPT-4. And so people were like very eagerly awaiting GPT-5. But yeah, as soon as I got access to GPT-5, I wanted to compare the outputs to GPT-4o. And I immediately noticed GPT-4o was gone. Yeah. And so how are people, because I've seen a lot of posts with people kind of up in arms about 4o disappearing. How would you describe kind of the main differences between the models, at least how they're manifesting in user experiences? Yeah. So I've talked to a bunch of folks about this. I think a widespread
conclusion is GPT-5 is really good at, especially like front-end code. I think a lot of the model
companies are really focusing on coding as a major use case, a major driver of economic value,
something they can be really good at. And you can tell in the results from GPT-5. And they
emphasize it in the live stream pretty significantly. And you can see from the examples people use,
it's much better generating things. It's much better debugging, et cetera. But a lot of consumers aren't
using it for code. A lot of consumers just want to chat with it. And there's a bunch of examples
of how it's a lot less expressive, emotional, and fun. Like, it doesn't really use exclamation
points. It doesn't really use emojis. It doesn't send things in all caps like it used to.
It doesn't do the classic 4o "It's not just good, it's great." Yes, exactly. And I think there are
kind of two separate issues here. So one is the sort of like glazing excessive validation. Like it said,
like, you're the best. You should totally do that. That's the best decision.
for, like, everything you said, even if it was ridiculous,
which is, like, a problem that I think I'm glad they're working on
and getting rid of.
Because let alone, like, everyone's concerns about GPT psychosis or whatever,
you just can't trust something that always tells you you're right.
The second thing is, does it just have a fun and engaging
and more casual human-feeling personality?
And I think that actually maybe took a step back
from GPT-4o to GPT-5.
And that is what people, like, if you look at the ChatGPT subreddit, people are freaking out. People are freaking out. And I think that's why Sam sort of rolled it
back. And I think he actually may have announced this on Reddit in a comment in response to all of
this backlash where he was like, we hear you guys, we'll bring back 4o for the paid users.
I was actually kind of surprised they even got rid of 4o. I know there had been a lot of jokes
about what a pain it is to have to select the model and kind of the dropdown was always getting
bigger. Yeah. But they had even started building some UI around 4o image generation, they had some preset templates you could use. And so the fact that they didn't
just add on five as an option, but took away your ability to use every other model was a little
bit surprising to me. Yeah, I think there's image generation on five, right? Like I imagine some
of the templates and the editing tools, they plan to just move over between the models. They may
not have gotten there yet. I think it's so funny because if you imagine yourself in the shoes on one of
these researchers, you're like, we trained what is on the benchmarks clearly a much better model.
Yeah. Like it's smarter. It's better at math. It's better at coding. It can answer medical questions now, which they really focused on in the live stream. So of course, everyone will love it and embrace it with open arms. It's like a step forward in model intelligence.
A move towards AGI. Exactly. And of course, classic consumer is, no, we don't want that. Give us the old toy back. Give us our fun friend who kind of mirrored the way we spoke to it and was over the top and sometimes kind of crazy, but was like really fun to chat with.
Yeah, yeah.
And I think to me, honestly, this exemplifies something I've suspected for a long time, which is I don't necessarily think the, like, smartest model that scores the best on sort of all of these objective benchmarks of intelligence will be the model that people want to chat with.
Yeah.
I think there's going to be a huge market for more of these companionship, entertainment, just having fun type models that doesn't need to be, like, the highest IQ person you know.
Yeah, I agree.
I do want to spend 30 seconds on that mental health and health overall use case, though.
Yeah. Yeah.
It's interesting timing because also last week, the state of Illinois just passed a law banning AI for mental health or therapy without kind of the supervision of a licensed professional.
And it's pretty interesting because the law is wide ranging to the extent that some AI mental health companies have already shut down new operations in Illinois or kind of prohibited new users from signing up.
Yeah.
It's basically anything that's kind of ongoing support or even personalized advice around specific emotional and mental issues is now counted as therapy and is technically illegal in Illinois.
Yeah.
I am confident ChatGPT is doing this and honestly is doing it well for a lot of people.
And so I guess my question is to what extent is this ever going to be enforced because they can't see people's individual chats.
I feel like Illinois always does weird stuff.
Like we've been consumer investors for too long.
And remember in like 2017, 2018, we would literally talk to social apps, like consumer social apps that were like, we've launched
everywhere except for Illinois because they have all these like crazy regulations around like
people's data and sharing and like all of these things, which obviously it's good to have
those, but like went way beyond other states to the point where it made it difficult for
apps to operate there, which is in my opinion then bad for the consumer. I think there are like
a lot of people are sort of now grappling with this question of what does it mean for AI to offer
medical support or mental health support, I don't expect we'll see the other states go in the
direction of Illinois, partially because it's just so hard to regulate. Like, how can you control
what someone is talking to their ChatGPT or Claude or whatever about? Especially because GPT-5
was kind of trained or fine-tuned at least with data from real physicians. Yeah, yeah. Yeah. So they
talked about this a lot in the live stream, and I was surprised they leaned in on this. I'm sure we've all
seen the viral Reddit posts about like ChatGPT saved my life.
My doctor said, wait for this imaging scan.
It turns out I had this horrible thing that I was able to get treated immediately.
And Sam and Greg Brockman had been retweeting these posts for a while, which I was like, that's interesting because you think they'd, from a liability perspective, they want to avoid that.
Yeah.
But they had a whole section of the GPT-5 live stream where they brought up someone who had cancer and was using ChatGPT to upload all of her documents, get suggestions about treatment, kind of talk through the diagnosis and what she could do next.
Yeah.
And they talked about how GPT-5 was kind of the highest-scoring model on this thing called HealthBench, which is a benchmark.
They trained with 250-plus physicians to measure how good an LLM is at answering medical questions.
And so I think it's kind of a really big statement that OpenAI has leaned into this space so heavily versus being like, hey, there's a lot of liability around medical stuff.
Our AI chatbot is not a licensed doctor; we're going to kind of let people do this off-label, but we're not going to endorse it.
Yeah. It seems like now they're really endorsing it. I'm excited. Me too. I upload all sorts of stuff
and get all kinds of advice. And it can be really smart and really helpful in a lot of cases.
I agree. There were two other big, actually, creative tool model releases this week.
Yes. Genie 3 from Google. And then a new music model from ElevenLabs. So maybe let's start with Genie 3.
I've seen the videos, but what is it? Yes. Genie 3 took Twitter by storm.
Yeah. So Google has a bunch of kind of different initiatives around image, video,
3D worlds, I think various teams like Veo 3 and the Genie team working towards this idea of like
an interactive world model, which is basically you are able to have a scene that you can walk
through in real time or interact with that kind of generates on the fly. And you can imagine it
sort of like a personal video game. Yeah. I saw some of the videos of kind of taking famous
paintings and for the first time you're able to step into them and kind of swivel around and
move around in the world, almost like you have a VR headset on or something, and you're kind of turning around and seeing the full environment.
Those were really cool. And it's not just famous paintings. They've shown a bunch of examples of
from a text prompt you can create a world, from an image you can create a world. Amazing.
They've even shown taking Veo 3 videos and creating a world around it with Genie 3. And the cool thing about Genie 3 is there's controls where you can move the character around. So you can
control, like, now go to the left. And then the scene sort of regenerates to show you what you
would see on the left. It's incredible.
Yeah.
They haven't released it publicly yet.
They invited some folks to try it out at their office who were kind of sharing results.
They shared a bunch of clips.
I'm personally really excited to get my hands on it.
I think the natural question we've all had with this use case and seeing the demos is like, this looks amazing.
Like, what are we going to do with it?
Exactly.
And it's expensive and probably takes a long time.
Like they didn't release the stats around that.
Yeah.
Exactly.
I think there'll be a couple use cases.
So I think video is an obvious one where if you're generating this scene in real time and
then controlling how you or any character or objects are moving through it, that enables much more
control over a video that you could then kind of screen capture what is happening than you would
get from a traditional video model. So you're almost recording the video, you're recording video
as you move through the 3D world model, which then becomes like a movie or a film, essentially.
Yes. Our portfolio company World Labs has a really cool product out that I'm on and a number of
folks are on that does this. And Martin on our team shares a bunch of really cool examples of
stuff he makes with exactly that use case. Very cool. So much more controllable video generation,
which is huge. I think in gaming, like, there's kind of two paths that this can go, and it could
go both. One is it allows real game developers to create games much more quickly and easily,
where you don't have to kind of code up and render up an entire world. It can just generate
from the initial image or text prompt you provide and the guidance you give it.
And then you can imagine, like, could a game developer freeze that world and allow other people to play it like a traditional game?
So it's like the game then would be the same for every person versus in the first example, the game almost regenerates for everyone fresh as they move through it.
Right.
Okay.
But I think the second gaming example is more like kind of what you're alluding to, which is more personal gaming.
Yes.
Which is like every person puts in an image or video or text prompt.
and then is sort of creating their own mini-game where they're wandering through a scene,
which is sort of a totally new market that I think a lot of people will love.
And then the third example, which is a little out of our wheelhouse,
but a lot of folks are talking about how creating sort of these interactive dynamic worlds
are really good RL environments for agents to be trained on how to interact with the world,
like how things move, going around scenes, interacting with objects.
It's been a big space of conversation right now,
And sort of there's a desperate need for more.
There's so many companies now selling these RL environments for agents that they're manually creating.
And something like a Genie 3 could make that much easier and sort of allow you to generate unlimited environments for these agents to wander through and learn from.
I could see that for digital agents, but even like physical agents operating within robots or something like that.
Totally.
I think for all sort of agents or like self-learning systems, it's going to be fascinating.
So eagerly awaiting that one to come out.
And then, yes, our portfolio company ElevenLabs also released their music model, which is super exciting.
I did not know they were working on music.
Yes.
It's been in the works for a bit.
The really interesting thing about it is it's trained on fully licensed music, which means
so music is one of those spaces where the rights holders are extremely litigious.
And so compared to things like image or video, it's been harder for music companies to sort
of avoid stepping on the toes.
You can't just scrape data from the Internet.
Like the record labels will come and sue you.
Yes.
And the artists.
And, like, it's often a very complicated ecosystem of who owns the rights to a specific song or to an artist's voice or something like that.
And so, yeah, I think a lot of folks have thought that you could not get a good quality music model training on licensed data because it's hard and it's expensive and it takes a long time.
And it's hard to get them to agree to license you the data.
But from what I've seen and from my own experiments, folks have been really impressed by Eleven's outputs.
And so what does the license data open up in terms of use cases for the music?
model, do you think? Yeah. So I think a lot of consumers basically don't care if they're using a music
model that's trained on licensed data or not because they're not really monetizing or many of them
are not monetizing stuff that they make with this music. They're generating like a birthday song for
their friend or a meme clip or something like that. Or like background music for their AI video.
Yep. Yep. Whereas businesses, enterprises, big media companies, gaming companies, like they care and they need to be
able to say this music model we used was trained on fully licensed data to not kind of open
themselves up to any liability issues. So they could use this music hypothetically in like
advertisements or films or TV shows or things like that. Exactly. Which I think is a big step
forward for AI music as a whole. And I think we should expect to see more from 11 on this front,
which is very exciting. Awesome. And then our last big topic of this week is around vibe coding,
which continues to explode. I think we have two things to talk about here. One would be
own experiments in the world of vibe coding, which relates to a piece that you and Anish Acharya put out
this past week about how we're seeing the vibe coding markets start to fragment.
Yes.
Your experiment is the more interesting part.
So let's start with that.
Yeah, maybe to give a real-world example, for the first time I vibe-coded an app that I
fully published and made available to the Internet.
Essentially what I did was I thought, hey, I'm seeing on my X feed all the time.
Everyone has a selfie with Jensen at NVIDIA.
How do they get this?
In his classic leather jacket, he must be spending all of his time taking selfies now because everyone has one and I don't. And so I was thinking there's
all these new amazing models out there like Flux Kontext that can kind of take an image, say, of Jensen taking a selfie with someone else and stick me in there instead.
Yes, I should have been in the video. I should have been in the video, exactly. So I did that.
I generated that myself on Krea and then I thought, I bet other people might feel like me and might
want this. And so I'd love to create an app where anyone can upload a photo and get a selfie with Jensen.
Yes.
And so I thought, okay, I can vibe code this.
So I vibe coded it on Lovable, an app that connected to fal to pull in the Flux Kontext
API.
Yep.
And then you could upload your own photo, it would generate the selfie with Jensen, which you could
download.
It was great.
It worked.
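For readers curious what an app like this boils down to under the hood, here is a minimal sketch in TypeScript of the flow Olivia describes: take the uploaded photo, send it with an editing prompt to a hosted image-editing model (a Flux Kontext-style service), and hand the result back for download. The endpoint URL, request fields, and response shape below are illustrative assumptions, not the actual fal API or the code Lovable generated.

```typescript
// Sketch of the selfie app's core flow (illustrative; endpoint and field names are assumed).
// A hosted image-editing model takes a source photo plus a text prompt and returns an edited image.

// Hypothetical endpoint for a Flux Kontext-style image-editing model.
const IMAGE_EDIT_ENDPOINT = "https://api.example.com/v1/image-edit";

interface EditResult {
  imageUrl: string; // URL of the generated composite (assumed response field)
}

// Turn the user's uploaded file into a data URL the API can accept.
async function fileToDataUrl(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}

// Ask the model to composite the user into a selfie with the reference image.
async function generateSelfie(userPhoto: File, apiKey: string): Promise<EditResult> {
  const imageDataUrl = await fileToDataUrl(userPhoto);

  const response = await fetch(IMAGE_EDIT_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Note: passing the key from the browser is exactly the pitfall discussed later in the episode.
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      prompt: "Composite the uploaded person into a casual selfie with the man in the leather jacket",
      image: imageDataUrl,
    }),
  });

  if (!response.ok) {
    throw new Error(`Generation failed: ${response.status}`);
  }
  return (await response.json()) as EditResult;
}
```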
So I published it on Twitter.
Yes.
A lot of people used it.
It got used by like 3,000 people overnight to the point where when I woke up, I had
exhausted my self-imposed budget of $100 to spend on API calls here.
Yes.
And so.
Because you were funding it.
I was funding it.
I was not making people put in their API key, yeah. I was not making anyone pay for it or put in their own API key.
So it was to the point where I had exhausted it. So instead of calling the model, it was just kind of stitching together half of your photo with half of Jensen's photo to produce a really kind of 2005 Microsoft Paint-looking output, which has its own charm.
Yeah.
Anyway, but the surprise was, so one, the fact that someone who's completely non-technical can build something, and I did it in like a couple hours in an evening, if that, that a couple thousand people can use overnight
is, like, amazing and so exciting.
Yes.
My second learning was, however, we're early in vibe coding
because these products are definitely built for people
who are already technical.
Yeah, there were some issues.
We had some issues.
We had some issues.
You should not publicly expose your API key.
Well, the problem is, you didn't even know you were exposing your API key until some nice man DMed you and was like...
So the vibe coding platforms are, I think they assume that you have a certain level of knowledge already.
So if you go to publish a website or an application, they won't stop you and say,
hey, here's a security issue, here's a compliance issue, fix this before you publish.
Yes.
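To make that failure mode concrete: in a purely client-side build, the API key ships inside the JavaScript bundle, where anyone can read it and run up your bill. A common fix, sketched below under the same illustrative assumptions as the earlier snippet (hypothetical endpoint and field names, not Lovable's actual output), is a small server-side proxy route that keeps the key in an environment variable and can also enforce a spend cap like the $100 budget mentioned above.

```typescript
// Minimal server-side proxy sketch (Express + TypeScript; endpoint and fields are assumed,
// matching the earlier illustrative snippet). The real API key never reaches the browser.
import express from "express";

const app = express();
app.use(express.json({ limit: "10mb" })); // uploaded photos arrive as data URLs

const IMAGE_EDIT_ENDPOINT = "https://api.example.com/v1/image-edit"; // hypothetical
const API_KEY = process.env.IMAGE_EDIT_API_KEY ?? ""; // secret stays on the server

// Crude spend guard: stop proxying once a self-imposed call budget is exhausted.
// e.g. roughly what a $100 budget covers at an assumed ~$0.05 per generation.
const MAX_CALLS = 2000;
let callsMade = 0;

app.post("/api/generate-selfie", async (req, res) => {
  if (callsMade >= MAX_CALLS) {
    res.status(429).json({ error: "Budget exhausted, try again later" });
    return;
  }
  callsMade += 1;

  const { image } = req.body as { image?: string };
  if (!image) {
    res.status(400).json({ error: "Missing image" });
    return;
  }

  // Forward to the model API with the server-held key; the browser only ever sees this route.
  const upstream = await fetch(IMAGE_EDIT_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      prompt: "Composite the uploaded person into a casual selfie with the man in the leather jacket",
      image,
    }),
  });

  if (!upstream.ok) {
    res.status(502).json({ error: `Upstream error: ${upstream.status}` });
    return;
  }
  res.json(await upstream.json());
});

app.listen(3000);
```

The point isn't this exact code; it's that a truly consumer-grade vibe coding platform would generate or enforce this kind of separation by default rather than leaving it to the user.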
And so it was a really interesting learning experiment for me, I think first of like, I think
there will be, and this is what you got at in your blog post.
Yes.
They'll hopefully be a V2, V3, V5 of these vibe coding platforms that are built for people
who don't know these things already.
So two things people flagged to you that the vibe coding platforms did not was one,
your API key was exposed.
And two, it did not create like a protected database for the photos that were uploaded.
So it was like if you knew how you could go and access the selfies that were uploaded.
Yes.
Yeah.
And I've vibe coded many things.
Which I fixed, to be clear.
I've had similar problems, vibe coding a lot of apps where, like, I feel like they assume
you have a level of technical knowledge to be able to fix things or to even know what a potential
problem could be.
And so, yeah, Anish and I published, Anish who is actually an engineer and I collaborated on
this post around basically how we think vibe coding will evolve in the future.
Yeah.
I think today you have a bunch of awesome platforms that are trying to be everything to everyone. They're saying like an engineer at a company
can use this to develop internal tools or someone can use this to build a SaaS app that scales to
hundreds of thousands of users and a consumer can also use this to create a fun meme app.
But I think the truth in terms of what we've seen at least is those are very different products,
both in terms of the use cases, the integration and the level of complexity required.
And there probably should be, for example, a platform that is like the training wheels version of vibe coding for like consumer non-developers like us that does not allow you to make mistakes like exposing the API key, even if it then means less flexibility in the product.
Exactly.
Like I wasn't super opinionated about what it looked like, all of the specific features.
I just wanted it to work.
Yeah, you probably weren't super opinionated on like the coding language or exactly what database it was using, like all the back-end stuff of what it was built on. You didn't really care. You just wanted it to work. Whereas there's many like enterprise
or true developer use cases where they very much want to control every element of the stack
and that level of inflexibility, that product would just not work for them.
And so I think what we are hoping to see basically is like specialized players emerge that
offer the best product for a particular demographic of user or for a particular use case.
And that will probably imply very different product experiences, product constraints, and also go-to-market strategies.
Like, if you are allowing any consumer to vibe code like a fun app to share with their friends or partner or whatever,
you probably want to be going viral on TikTok and Reels versus if you are building a vibe coding platform for designers to prototype new features in an enterprise or for engineers to make internal tools, you might want to even have top-down sales or at least be kind of product-led growth within businesses.
And maybe invest in like deep integrations into core business systems.
Totally.
Whereas the consumer version, you might actually just want people to vibe code on mobile and get something that works in five minutes.
And that's a great point too, which is like the consumer users often just want something to look cool and work and not have security issues.
Whereas more business-oriented users, it often needs to integrate with what already exists for the business. Whether that's like a design system and aesthetic or whether that's, you know,
their CRM or the emailing platform they use or sort of all of these different products
that it needs to connect to that are external to the vibe coding tool. And so I think the conclusion
of the piece was like we're seeing early winners in vibe coding already. These are some of the
fastest growing companies in the AI application space. Yeah. But we probably expect to see even more
because it feels like we're so, so early. Totally. And many of the users of these products are
probably still pretty technical. Yes. And so there'll be a version of vibe coding that's
truly consumer grade. Yes. That I'm personally very excited to unlock. And I think we've seen
this in a lot of AI markets, because these markets are large enough, they can have multiple
winners that are specialized. Like, we've seen this for LLMs. Yep. OpenAI, Anthropic, Google, Mistral. Like, there's xAI. There's all of these companies that have models that are really good
at particular things. And we've also seen this a ton in image and video, which I think has a lot of
corollaries to vibe coding, which is based on what type of user you are or what you care
about, to what extent you need to reference an existing character, an existing design
format, something like that. Do you want it on your phone and super fast? Or do you want it in the
browser and slower and the highest quality? Like, there are many companies that are doing
super well focused on different segments or verticals of this giant market. Yeah, super exciting.
Well, thanks for joining us this week. If you've tried out any of these creative models or had
any vibe coding experiments yourself, we'd love to hear from you. Please comment below and let us
know. And also, please feel free to ping us here or on Twitter if you have ideas of what we
should cover in a future episode. Thanks for listening to the A16Z podcast. If you enjoyed the
episode, let us know by leaving a review at rate thispodcast.com slash A16Z. We've got more
great conversations coming your way. See you next time. As a reminder, the content here is for
informational purposes only, should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security, and is not directed at any investors or
potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain
investments in the companies discussed in this podcast. For more details, including a link to our
investments, please see A16Z.com forward slash disclosures.