a16z Podcast - What Comes After Mobile? Meta’s Andrew Bosworth on AI and Consumer Tech
Episode Date: April 24, 2025

Are we nearing the end of the smartphone era? In this episode, a16z Growth General Partner David George talks with Meta CTO Andrew "Boz" Bosworth about what comes after apps and touchscreens. From smart glasses to AR headsets, Boz shares how AI is powering a new wave of computing, one that's ambient, agentic, and driven by human intent. They explore what it takes to build for this future, the risks of changing interaction models, and why the next big platform shift may already be in motion.

This episode is part of our AI Revolution series, where we explore how industry leaders are leveraging generative AI to steer innovation and navigate the next major platform shift. Discover more insights and content from the AI Revolution series at a16z.com/AIRevolution.

Resources:
Find Boz on X: https://x.com/boztank
Find David on X: https://x.com/davidgeorge8

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should not be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
Is there a better way? I think there is.
Every single interface that I interact with,
every single problem space that I'm trying to solve
are going to be made easier by virtue of this new technology.
If you were starting from scratch today,
you probably wouldn't build this app-centric world.
You can imagine a post-phone world.
The past 20 years of consumer technology
have been a story of apps, of touch screens, and of smartphones.
These form factors seemingly appeared out of nowhere, and may be replaced just as quickly
as they were ushered in, perhaps by a new AI-enabled stack, a new computing experience that
is more agentic, more adaptive, and more immersive.
Now, in today's episode, a16z growth general partner David George discusses this future
with arguably one of the most influential builders of this era.
That is, Meta CTO Andrew "Boz" Bosworth, who has spent nearly two decades
at the company, shaping consumer interaction from the Facebook News Feed all the way through
to their work on smart glasses and AR headsets.
Here, Boz explores the art of translating emerging technologies into real products that people
use and love.
Plus, how breakthroughs in AI and hardware could turn the existing app model on its head.
In this world, what new interfaces and marketplaces need to be developed?
What competitive dynamics hold strong and which fall by the wayside?
For example, will brand still be a moat?
And if we get it right,
Boz says the next wave of consumer tech won't run on taps and swipes,
it'll run on intent.
So is the post-mobile-phone era upon us?
Listen in to find out.
Oh, and if you do like this episode,
it comes straight from our AI Revolution series.
And if you miss previous episodes of this series,
with guests like AMD CEO Lisa Su,
Anthropic co-founder Dario Amodei,
and the founders behind companies like Databricks,
Waymo, Figma,
and more, head on over to a16z.com/AIRevolution.
As a reminder, the content here is for informational purposes only, should not be taken
as legal, business, tax, or investment advice, or be used to evaluate any investment or security,
and is not directed at any investors or potential investors in any a16z fund.
Please note that a16z and its affiliates may also maintain investments in the companies
discussed in this podcast.
For more details, including a link to our investments, please see a16z.com/disclosures.
Boz, thanks for being here.
Thanks for having me.
Appreciate it.
Okay, I want to jump right in.
How are we all going to be consuming content
five years from now and ten years from now?
Ten years, I feel pretty confident
that we will have a lot more ways
to bring content into our viewshed
than just taking out our phone.
I think augmented reality glasses,
obviously, are a real possibility.
I'm also hoping that we can do better
for really engaging in immersive things.
Right now you have to travel to, like, the Sphere,
which is great, but there's one of them,
it's in Vegas, and it's like a trip.
Are there better ways that we can have access to
if we really want to be engaged in something
not just immersively but also socially?
So it's like, oh, I want to watch the game,
I want to watch it with my dad,
I want to feel like we're courtside.
Sure, we can go and pay a lot for tickets.
Is there a better way? I think there is.
So 10 years, I feel really good
about all these alternative
content delivery vehicles.
Five years is trickier.
For example, I think the glasses,
the smart glasses, the AI glasses,
the display glasses that we'll have in five years
will be good.
Some of them will be super high-end
and pretty exceptional.
Some of them will be actually quite small,
without tremendously high-resolution displays,
but they will be always available and on your face.
I wouldn't be doing work there,
but if I'm just trying to grab simple content
in moments between, pretty good for that.
So I think what we are seeing is,
as you'd expect,
we're at the very beginning now
of a spectrum of super high-end,
but probably very expensive experiences
that will not be evenly distributed across the population.
Yeah, yeah, yeah.
A much more broadly available set of experiences
that aren't really rich enough
to replace, like, the devices that we have today.
And then hopefully, a continually growing number
of people who are having experiences
that really could not be had any other way today.
You know, thinking about what you could do
with mixed reality and virtual reality.
Yeah, we're going to build up to a lot of that stuff.
So throughout your career, I would say one of the observations I would have
is you've been uniquely good at piecing together
various big technology shifts into new product experiences. So in the case of Facebook early
days for you, obviously you famously were part of the team that created the News Feed. And that's a
combination of social media, a mobile experience, and applying old-school AI to it to create it.
Yeah, exactly. But that's pretty cool. And like a lot of times these trends, they come in bunches.
And then that's what creates the breakthrough products. So maybe take that and apply it to where we are
today with the major trends that are in front of you.
Let me say two things about this.
The first one is, I think if there was a thing that, not me specifically, but me and my
cohort were really good at, it was that we were really immersed in what the problem
was.
Like, what were people trying to do?
What do they want to do?
And when you do that, you are going to reach for whatever tool is available to accomplish
that goal.
That allows you to be really honest about what tools are available and see trends.
I think the more oriented you are towards the technology side, the more you get caught
in a wave of technology
and you don't want to admit
when that wave is over
and you don't want to embrace the next wave.
And you're building technology
for technology's sake
instead of solving a product problem.
But if you're embracing
what are the issues
that people are really going through
in their life
and they don't have to be profound.
I bring that up just because I think
we're in this interesting moment
where I think all of us
have been through a phase
where a lot of people
wanted a new wave to be coming
because it would have been
advantageous to them.
Yeah.
But those things weren't solving
problems that regular people had.
I think the reason
we're so enthusiastic about
the AI revolution
that's happening right now
is that it really feels tangible.
These are real problems that are being solved.
And it's not solving every problem.
It creates new problems.
It's fine.
So it feels like a substantial, real new capability that we have.
And what's unusual about it is how broad-based it can be applied.
And while it has these interesting downsides today on factuality and certainly compute
and cost and inference, those types of tradeoffs feel really solvable.
And the domains that it applies to are really broad.
And that's pretty unusual.
Certainly in my career, you almost always, when these technological breakthroughs happen, they're almost always very domain-specific.
It's like, cool, like this is going to get faster, or that's going to get cheaper, or that's now possible.
This kind of feels like, oh, everything's going to get better.
Yeah.
Every single interface that I interact with, every single problem space that I'm trying to solve are going to be made easier by virtue of this new technology.
That's pretty rare.
Mark and I always believed that this AI revolution was coming.
We just thought it was going to take longer.
Yeah.
We thought we were probably still 10 years away at this point.
Yeah.
But what we thought would happen sooner was this revolution in computing interfaces.
And we really started to feel 10 years ago like the mobile phone form factor, as amazing as it was, this is 2015, was like already saturated.
That was what it was going to be.
And once you get past the mobile phone, which is, again, the greatest computing device that any of us have ever used to this point.
Yeah, of course.
It's like, okay, well, it has to be more natural in terms of how you're getting information into your body, which is obviously ideally through our eyes and ears,
and how we're getting our intentions expressed back to the machine.
You no longer have a touchscreen.
You no longer have a keyboard.
So once you realize those are the problems,
it's like, cool, we need to be on the face
because you need to have access to eyes and ears
to bring information from the machine to the person.
And you need to have these neural interfaces
to try to allow the person to manipulate the machine
and express their intentions to it
when they don't have a keyboard or mouse or a touchscreen.
And so that has been an incredibly clear-eyed vision
we've been on for the last 10 years.
But we really did grow up in an entire generation of engineers
for whom the system was fixed.
The application model was fixed.
The interaction design.
Sure, we went from a mouse to touchscreen,
but it's still direct manipulation interface,
which is literally the same thing that was pioneered in the 1960s.
So, like, we really haven't changed these modalities.
And there's a cost to changing those modalities
because we as a society have learned
how to manipulate these digital artifacts through these tools.
Yeah.
So the challenge for us was, okay, you have to build this hardware, which has to do all these
amazing things and also be attractive and also be light and also be affordable.
And none of these existed before.
And what I tell my team at times is like, that's only half the problem.
The other half of the problem is, great, how do I use it?
Like, how do I make it feel natural to me?
I'm so good with my phone now.
It's an extension of my body, of my intention at this point.
Yeah.
How do we make it even easier?
Yeah.
And so we were having these challenges.
And then what a wonderful blessing.
AI came in two years ago, much sooner than we expected, and it's a tremendous opportunity to make
this even easier for us, because the AIs that we have today have a much greater ability
to understand what my intentions are.
I can give a vague reference and it's able to work through the corpus of information
available to make specific outcomes happen from it.
There's still a lot of work to be done to actually adapt it.
And it's still not yet a control interface.
Like I can't reliably work my machine with it.
There's a lot of things that we have to do.
We know what those things are.
And so now you're in a much more exciting place, actually.
Whereas before we thought, okay, we've got this big hill to climb on the hardware,
you've got this big hill to climb on the interaction design,
but we think we can do it.
And now we've got a wonderful tailwind,
where on the interaction design side at least,
there's the potential of having this much more intelligent agent
that now has not only the ability for you to converse with it naturally
and get results out of it,
but also to know by context, what you're seeing, what you're hearing, what's going on around you,
and make intelligent inference based on that information.
Let's talk about, like, Reality Labs and the suite of products, what it is today.
So you have Quest headsets, you have the smart glasses.
And then on the far end of the spectrum is Orion and some of the stuff that I demoed.
So just talk about the evolution of those efforts and what you think the markets are for them
and how they converge versus not over time.
So when we started the Ray-Ban Meta project, they were going to be smart glasses.
And in fact they were entirely built
and we were six months away from production
when Llama 3 hit
and the team was like, no, we got to do this.
And so now they're AI glasses, right?
They didn't start as AI glasses
but the form factor was already right.
We could already do the compute.
We already had the ability.
So yeah, now you have these glasses
that you can ask questions to.
And in December, to the early access program,
we launched what we call live AI.
So you can start a live AI session
with your Ray-Ban Meta glasses
and for 30 minutes until the battery runs out,
it's seeing what you're seeing.
Yeah.
And it's funny because on
paper, the Ray-Ban Meta looks like an incremental improvement to Ray-Ban Stories. And this is kind of the
story I'm trying to tell, which is the hardware isn't that different between the two, but the
interactions that we enable with the person using it are so much richer now. When you use Orion,
when you use the full AR glasses, you can imagine a post phone world. You're like, oh, wow, like if this
was attractive enough and light enough and had battery life enough to wear all day, this would
have all the stuff I need. Like, it would all be right here. And when you start to combine that
with images that we have of what AI is capable of, so you did the demo where we showed you
the breakfast. Yeah, I did. And for what it's worth, I mean, I'll explain it because
it's very cool. You kind of walk over and there's a bunch of breakfast ingredients laid out. And I look at it,
and I say, hey, Meta, what are some recipes? That's right. And it detects the ingredients.
So that is, for me at least, when we think about Orion, initially it didn't have that
AI component when we first thought about it. It had this component that was very direct manipulation.
So it was very much modeled on the app model that we all came from. Of course.
And I think there's a version of that. Yeah, of course.
You're going to want to do calls, and you're going to be able to do your email, and you're able to do your texting, and you want to play games.
We have a stargazer game, and, you know, you have to do your Instagram Reels.
What we're now excited about is, okay, take all those pieces and layer on the ability to have an interactive assistant that really understands not just what's happening on your device
and what email is coming in.
Yeah, of course.
But also what's happening
in the physical world around you.
And is able to connect
what you need in the moment
with what's happening.
And so these are concepts
where you're like,
wow, what if the entire app model's upside down?
What if it isn't like,
hey, I want to go fetch Instagram right now?
It's like, hey, the device realizes
that you have a moment between meetings,
you're a little bit bored.
Hey, do you want to catch up on the latest highlights
from your favorite basketball team?
Like, those things become possible.
Having said that, the hardware problems are hard
and they're real,
and the cost problems are hard and they're real.
And, you come at the king, you best not miss.
The phone is an incredible
centerpiece of our lives
today. It's how I operate my home, how I use
my car. I use it for work. It's everywhere, right?
And the
world has adapted itself to the phone.
So it's weird that my ice maker has a phone app,
but it does. I don't know.
I'm not sure. It seems excessive, but, like,
somebody today who's building an ice maker is like, number one job:
got to have an app. It's a smart refrigerator. You're like, I don't need this.
I do think it's going to be a long...
This is what I said. The 10-year view for me
is, I think, much clearer. I think these things are going to
be available, widely accepted, increasingly adopted. The five-year view is harder because,
man, like, even if something's amazing. Knocking out the dominance of the phone in five years,
it just seems so hard. It's like unthinkable for us, right? That's what I said, like,
Orion was the first time I thought maybe. Orion, like, putting that on my head, I was like,
that was the first glimpse I've had into it. I was like, okay, like, it could happen. Yeah. Like,
there does exist a life for us as a species past the phone. Yeah. Yeah, it still has the whole
dynamic of, well, how do I envision my life without the operating system that I'm so accustomed to?
Not just the physical stuff that you do, but the familiarity and all the stuff that's working in there.
So what do you think of the interim period?
So maybe you get to the point where the hardware is capable, it is market accessible, but do you tether to the phone, do you take a strong view that you will never do that and let the product stand?
Like, how do you think about that piece?
The phones have this huge advantage and disadvantage.
Huge advantage is like the phone is already central to our lives.
It's already got this huge developer ecosystem.
It's this anchor device, and it's a wonderful anchor device for that.
The disadvantages, I actually think what we found is the apps want to be different when they're not controlled via touch screen.
And that's not super novel.
A lot of people failed early in mobile, including us, by just taking our web stuff and putting it on the mobile phone and being like, oh, the mobile phone, we'll just put the web there.
Yeah.
But because it wasn't native to what the phone was, and I mean everything from interaction design to the
actual design to the layout to how it felt, because we weren't doing phone-native things,
we were failing with one of the most popular products in the history of the web. This is, like,
the major design debate: the skeuomorphic idea versus the native idea. Yeah. And I think having
the developers is a true value, and I think having all this application functionality is a true value.
But then once you actually reproject it into space and you're manipulating it with your
fingers like this, as opposed to a touchscreen, you have much less precision.
It doesn't respond to voice commands,
because there are no tools for that.
There's no design integration for that.
So having a phone platform today
feels like, wow, I've got this huge base
to work from on the hardware side,
but I've also actually got this kind of huge
anchor to drag on the software side.
And so we're not opposed to these partnerships,
and I think it'll be interesting to see
once the hardware is a little bit more developed
how partners feel about it.
And I hope they continue to support
people who buy these phones for $1,200,
$1,300, being able to bring whatever hardware
they want to bring, then take the full
functionality of that with them.
The biggest question I have is whether the entire
app model holds, because we were imagining a very phone-like
app model for these devices,
and it may be very different. Interaction
design, input, and control schemes are very
different, and that demands like a little extra developer
attention. I am wondering if
like the progression of AI over the next
several years doesn't turn the app model on its head.
Like right now it's kind of an unusual thing
where I'm like, I want
to play music. So in my head
I translate that to, I have to go open Spotify
or open Tidal. And the first thing I think of is, who is my provider going to be? Yeah, of
course. As opposed to, like, that's not what I want. That's extremely limiting. What I want is to play music. Yes.
And I just want to be able to go to the AI and be like, cool, play this music for me. Yeah. And it should
know, oh, like, you're already using this service, we'll use that one. Or these two services are both
available to you, but this one has a better quality song, or this one has lower latency,
whatever the thing is. Or it's like, hey, the song you want isn't available on any of these services,
do you want to sign up for this other service that does have the song that you want? I don't want to have to
be responsible for orchestrating what app I'm opening to do a thing. We've had to do that because
that's how things were done in the entire history of digital computing. You had an application-based
model that was the system. So I do wonder how much AI inverts things. That's a pretty hot take.
Yeah, that's a hot take. Inverts things. And that's not about wearables. That's not about
anything. That's just like even at the phone level, if you were building a phone today, would you
build an app store the way you historically built an app store? Or would you say like, hey,
you as a consumer, express your intention, express what you're trying to accomplish.
And let's like see what we have.
Let the system see what it can produce.
Yeah, for you.
Yeah.
But I do think if you were starting from scratch today, you probably wouldn't build this like
app-centric world where I, as a consumer, I'm trying to solve a problem and first have to
decide which of the providers I'm going to use to solve that problem.
Yeah, of course.
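The inversion described here, where the user expresses an intent like "play music" and the system (not the user) decides which provider fulfills it, can be sketched as a toy resolver. The provider names and the scoring fields below are hypothetical illustrations, not anything Meta has announced:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    has_song: bool        # can this service fulfill the request?
    user_subscribed: bool
    audio_quality: int    # higher is better
    latency_ms: int       # lower is better

def resolve_intent(song: str, providers: list[Provider]) -> str:
    """Pick a provider for 'play <song>' on the user's behalf.

    Preference order mirrors the transcript: a service the user
    already uses, then better quality, then lower latency.
    """
    candidates = [p for p in providers if p.has_song]
    if not candidates:
        # The 'sign up for another service' branch from the transcript.
        return f"'{song}' isn't on any of your services. Want to sign up for one that has it?"
    best = max(candidates,
               key=lambda p: (p.user_subscribed, p.audio_quality, -p.latency_ms))
    return f"Playing '{song}' via {best.name}"

providers = [
    Provider("ServiceA", has_song=True, user_subscribed=True, audio_quality=2, latency_ms=80),
    Provider("ServiceB", has_song=True, user_subscribed=False, audio_quality=3, latency_ms=40),
]
print(resolve_intent("Some Song", providers))  # → Playing 'Some Song' via ServiceA
```

The design point is that provider selection becomes a ranking problem the assistant owns, which is exactly why brand attachment weakens later in this conversation.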
That's fascinating.
And again, I think it's a function of where the capabilities are today and I think
where we have line of sight into orchestration capabilities.
Because I'd say knowledge-wise, that is probably capable today.
I think orchestration-wise, it's probably
still a little bit away.
And then, of course, you've got to build the developer ecosystem
to develop on the platform.
Which is incredibly hard.
That's the thing I want to see.
That's the hardest piece, right?
That's the hardest piece.
Yeah.
The stronger we get at agentic reasoning and capabilities,
the more I can rely on my AI to do things in my absence.
And at first, it will be knowledge work, of course.
Yeah, that's fine.
But once you have a flow of consumers coming through here,
what you're going to find is that they're going to have a bunch of dead ends.
Yeah.
Where they're going to ask the AI, hey, can you do this thing for me?
And it's going to say, no, I can't.
That's the gold mine that you take to developers.
And you're like, hey, I've got 100,000 people a day to use your app.
They're trying to use your app.
Yeah.
They don't know they are, but they're trying to use your app.
Look, here's the query stream.
Here's what's coming through.
And we're having to tell them no today.
If you build these hooks.
You got 100,000 people clamoring for something today.
Coming in for your service.
Yeah.
And it's totally fine for our AI to go back and say, hey, you've got to pay for this.
There's a guy who does this for you, but you got to pay for it.
And by the way, I'm not just talking about apps.
I'm like, it's a plumber.
It's like, there's something of a marketplace here that I think emerges over time.
So that's how I see it playing out.
I don't see it playing out as like someone goes into a dark room and comes up with this app platform.
No.
What's going to happen is there's going to become a query stream of people using AI to do things
and that AI will fail repeatedly in certain areas because that's a type of functionality
that is currently behind some kind of an app wall.
And there's no.
Or it hasn't been built native to whatever.
That's right.
Consumption mechanism.
There's no, yeah, yeah.
And everyone wants to build the bridges.
It's like, no, no, it's going to manipulate the pixels, and it's going to manipulate.
It's like, fine, it can do those things.
I'm not saying the AI can't cross those boundaries.
But I think over time, that becomes the primary interface for humans interacting with software,
as opposed to the, like, pick from the garden of applications.
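The "gold mine" mechanic sketched above, an assistant logging every request it has to refuse and then showing developers the ranked unmet demand, is at its core a simple aggregation. The intents and the set of built-in skills below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical skills the assistant can already fulfill natively.
FULFILLABLE = {"play_music", "send_message", "set_timer"}

def handle(intent: str, dead_ends: Counter) -> str:
    if intent in FULFILLABLE:
        return "done"
    dead_ends[intent] += 1  # log the refusal: this is the query stream of dead ends
    return "sorry, I can't do that yet"

def developer_report(dead_ends: Counter, top: int = 3):
    """The pitch to developers: here's unmet demand, ranked by volume."""
    return dead_ends.most_common(top)

dead_ends = Counter()
for intent in ["book_plumber", "book_plumber", "order_groceries", "play_music"]:
    handle(intent, dead_ends)

print(developer_report(dead_ends))  # → [('book_plumber', 2), ('order_groceries', 1)]
```

In other words, "100,000 people a day are trying to use your app" falls straight out of counting refusals per intent and sorting by frequency.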
Yeah, that makes a ton of sense.
That's a very alluring end state, just as a consumer, right?
Yeah, it's messy.
And I think it creates these very exciting marketplaces for
functionality inside the AI. It abstracts away a lot of companies' brand names, which I think is
going to be very hard for an entire generation of brands.
Yeah.
Like the fact that I don't care if it's being played on one of these two music services,
that's hard for those music services who like really want me to care.
Yeah.
And like they want me to have a stronger opinion about it.
And like they want me to have an attachment.
I don't want to have an attachment.
There are some things where you may value the attachment, some you don't, whatever.
In the world where I'm like, here's an app garden, and these two are competing for my eyeballs,
the brand that they've built is the hugely valuable asset.
In the world where I just care if the song gets played and sounds good,
a different set of priorities are important.
I think that's net positive because what matters now is performance on the job being asked.
For actual product experience.
And value and price per performance, like matters a lot.
Yeah.
I think a lot of companies won't love that.
Well, abstracting away brand is effectively extracting away margin pools, which puts a lot more pressure on us trusting the AI, or the distributor of the AI.
Insofar as I'm floating between different companies that are each providing AIs, there's the degree to which I trust them not to be bought and paid for on the back end, where they're not giving me the best experience or the best price for the money.
They're giving me the one that gives them the most money.
Yeah, of course.
So, yeah, it's like the experience people have with search today, right?
It's a very different world.
It's a very different world.
But you can actually see inklings of it today, right?
So certain companies are willing to work with the new AI providers in agentic task completion.
And then they're like, well, actually, wait a minute, I don't just want the bots executing this stuff.
I want the humans coming to me.
I think I need that.
It's existential that I have this brand relationship directly with the demand side.
So that's potentially messy, but a bright future, especially if we don't have to pay that brand tax.
Yeah, it'll be very messy.
I don't know it's avoidable because I think once consumers start to get into these
tight loops where more and more of their interactions are being moderated by an AI, you won't
have a choice. That's like where your customers will be. Yeah. But it's going to be a pretty
different world. Yeah, it'll be a different world and there will probably be some groups that
try to move fast to it as a way to compete with things that are branded. Yeah. And just say I'm
going to compete on performance and price. Yeah, that's right. Where do you think that could potentially
happen first? It probably will mirror query volume. I think of this a lot. We do have a model of this,
which was in the web era
when Google became the dominant search engine.
So before that, the web era was like very index-based.
It was like Yahoo, and it was like links
and getting major sources of traffic
to link to you was the game.
And then once Google came to dominance,
which happened very quickly over maybe a couple of years, I feel like,
all that mattered was like SEO.
All that mattered was like where you were in the query stream.
Yeah.
And the query stream dictated what businesses came over and succeeded.
Yeah.
Because, like, the queries that were the most frequent, those were the ones that came first.
Yeah.
And so, like, I was... Travel sites. Travel came right away, right?
And it was a huge disruption.
And travel agents went from a thing that existed to a thing that didn't exist in a relatively short time.
Like, immediately.
And they all competed on the basis of, like, execution of the best deal.
It was literally, like, seamless execution with the highest conversion.
I think SEO's gotten to a point now where it's kind of a bummer.
It's, like, made things worse.
No, it's just, like, gamed.
Everyone's gotten so good at it.
Especially with AI actually now.
That's right.
So I actually think it's like we have this incredible flattening curve.
Now it's like starting to kind of rise up in terms of...
Especially with paid placement too.
Yeah.
That's so dominant.
Yeah.
That's right.
And this is like probably the cautionary tale for how this plays out in AI's as well.
I think there will be a pretty good golden era here where the query stream will dictate
what businesses come first, because that's the volume of people
unsatisfied with the existing solutions that they have.
Yeah.
Otherwise, they wouldn't be.
And product providers and developers will follow that and build specifically to solve those problems.
That's right.
Once it tips in each vertical, we get a lot of progress very quickly.
Yeah.
Towards better solutions for consumers.
And then once it's a steady state, it starts to be gainsmanship.
Yeah.
And that's the thing to fight.
And that's decaying or a...
That'll be the true test today.
The true test.
Can it get through that?
Can it avoid falling into that track?
Can it avoid that trap?
Yeah, yeah.
That's right.
Well, a lot of that is business model driven.
And we'll see how that evolves over time, too.
That's right.
You guys have also been leading from the front on this idea of open source.
Yeah.
And so talk about some of your efforts on that side of the business.
And then what is the ideal market structure of the AI model side for you guys?
There's two parts that came together.
The first one is Llama came out of FAIR, our fundamental AI research group.
And that's been an open source research group since the beginning.
You know, since Yann LeCun came in and they established that.
It's allowed us to attract incredible researchers who really believe that,
we're going to make more progress as a society working together across boundaries of individual
labs than not.
And to be fair, it's not just us, obviously, the Transformer paper was published at Google.
And, like, you know, self-supervised learning was a big contribution of ours.
Like, everyone's contributing to the knowledge base.
But when we open-sourced Llama, that's how all models were open-sourced at that point.
Yeah, yeah, yeah, yeah, of course.
Like, everyone was open, the only thing that was unusual was...
Everything else just went closed-source over time, effectively.
That's right.
But before that, every time someone built a model, they open-sourced it so that other people could
use the model and see how great that model was.
Like that was, like, mostly how it was done.
Sure.
If it was worth anything.
There's certainly some specialized models for translations and whatnot were kept closed.
But, like, if it was a general model, that was what was done.
Llama 2 was probably the big decision point for us.
Llama 2, and this is where I think the second thing came in, which is a belief that I've had
that I was advancing really strenuously internally, that Mark really believes in, too.
And he's written his post about this, which is, first of all, we're going to make way
more progress if these models are open.
Yeah.
Because a lot of these contributions aren't going to come from these big labs.
Like, they're going to come from these little labs.
And we've seen this already with DeepSeek in China, which was put in a tough spot and then innovated incredibly on the memory architectures and a couple of other places to really get amazing results.
And so we really believe we're going to get the most progress collectively.
The second thing inside this piece is, you know, this is a classic: I believe these models are going to be commodities.
And you want to commoditize your complements.
Yes.
And we're in a unique position strategically where our products are made better through AI, which is why we've been investing for so long.
Whether it's recommendation systems in what you're seeing in Feed or Reels, whether it's simple things like which
friend do I put at the top when you go to make a new message.
Who do I think you're going to message right now?
Little things like that.
Or really big, expansive things like, hey, here's an entire answer.
Here's an entire search interface that we couldn't do before in WhatsApp.
Yeah, yeah, yeah.
That, like, now is a super popular surface.
Yeah.
So there's all these things that are possible for us that are made better by this AI,
but nobody else who has this AI can then build our product.
The asymmetry works in our favor.
Yeah, of course.
And so for us, like, commoditizing your complements is just good business sense,
and making sure that there are a lot of competitively priced, if not
almost free, models out there helps the entire industry, helps a bunch of small startups and
academic labs. It also helps us as the application provider, hugely. So we're all
super aligned on that: the business model and the industry. It's a strong alignment there.
So it comes from both this fundamental belief in how this kind of research should be done,
and it aligns perfectly with our business model. And so there's no conflict. Yeah, societal progress plus
business model alignment. It's all together. It's all going the same direction. That's awesome. It's great.
I want to shift gears to talking about the impediments to progress and what you think, you know, are kind of linear versus not.
So the risks to the vision, to the overall vision that you articulated, obviously, hardware, AI capabilities,
vision capabilities and screens and all that, resolutions.
We talked about the ecosystem and developers and native products.
So maybe just talk about what you see are kind of the linear path things and the things that may be harder or riskier.
We have real invention risk.
There exists risk that the things that we want to build,
we don't have the capacity to build as a society, as a species yet.
Yeah.
And that's not a guarantee.
I think we have windows open to us.
You've seen Orion, so it can be done.
Yeah, it feels like it's a cost-reduction exercise.
It's a materials-improvement exercise, but it can be done.
There is still some invention risk.
Far bigger than the invention risk, I think, is the adoption risk.
Is it considered socially acceptable?
Are people willing to learn a new modality?
Like, we all learned to type when we were kids at this point.
We were born with phones in our hands at this point.
Are people willing to learn a new modality?
Is it worth it to them?
Ecosystem risk, even bigger than that.
Like, great, you build this thing.
But if it just does like your email and reels,
that's probably not enough.
Do people bring the suite of software
that we require to interact with modern human society
to bear on the device?
Those are all huge risks.
I will say we feel pretty good about where we're getting
on the hardware, on acceptability.
We think we can do those things.
That was not a guarantee before.
I think with the Ray-Ban Meta glasses,
we're feeling like, okay, we can get through...
You feel like the acceptability is there: humans will accept that I'm using technology.
Within that, super interesting regulatory challenges,
here I have an always-on machine
that gives me superhuman sensing.
My vision is better.
My hearing is better.
My memory is better.
That means when I see you a couple years from now,
and I haven't seen you on the internet,
I'm like, ah, God, I remember that.
We did a podcast together.
What was the guy's name?
Can I ask that question?
Am I allowed to ask that question?
What is your right?
It's your face.
You showed me your face.
Yeah.
And if I was somebody with a better memory, I could remember the face.
So, like, that happened, but I don't have a great memory.
So am I allowed to use a tool to assist me or not?
So there's really subtle regulatory, privacy, social, acceptability questions that are, like, embedded
here that are super deep individually and can derail the whole thing.
Like, you can easily derail.
Yeah, absolutely.
Easily derail the whole thing and slow progress.
Yeah.
That's the thing is I think we sometimes think in our industry, it's like, Field of Dreams.
If you build it, they will come.
And it's like, no, a lot of things have to happen right.
Well, you could also misstep, too.
That's the whole, that's the risk.
Great technology can get derailed for a long period of time.
Nuclear power got derailed.
Yeah, for absolutely stupid reasons.
For 70 years, for bad reasons.
We know better now.
And they're just like, they just played it wrong.
Yeah, of course.
And they were like, ah, ignore this.
It's like, no.
These people actually feel this way.
So I think, yeah, I feel pretty good about the invention risk.
Acceptability risk is looking better than it has been.
But, like, I think there's still a lot of big hurdles to cross there.
I actually think the ecosystem
was one I would have said previously was the biggest
one, but AI
is now my potential silver bullet there.
If AI becomes the major interface,
then it comes for free.
And I will also say that
we've had such a positive response from
even just set aside Orion, even
the Rayban Metas, companies that want to
work with us and build on that platform. It's not
a platform yet. There's so little compute.
We literally don't have any space yet.
But we did do a partnership with Be My Eyes, which helps blind and low-vision people navigate, and it's really spectacular.
And so there's a little window there where we can start building.
So, yeah, I would say the response has been more positive than I had expected to that.
So everything right now, tailwinds abound, and to be honest, after eight years of...
It's been a lot of headwinds. Having a year of tailwinds is nice.
Yeah, it's good.
I'll take it. I'm not going to look a gift horse in the mouth.
No victory laps, yeah, but that's good.
Okay.
But it's all hard.
Yeah, at every point, it could all fail.
Yeah, I like that you just started with...
It's invention risk.
I don't know.
There's many ways this just won't work.
Yeah, that's right.
You know, even if it does work, it might not take.
That's open.
Well, I'll say two things about this.
And this is where Mark just deserves so much credit is we're true believers.
Like, we have actual conviction.
Yeah.
Mark believes this is the next thing.
It needs to happen, and it doesn't happen for free.
Like, we can be the ones to do it.
Our chief scientist, Michael Abrash, who's one of my favorite people I've ever gotten a chance to work with,
he talks a lot about the myth of technological inevitability.
It doesn't just eventually happen.
There's a lot of people in tech who are like, yeah, AR will eventually happen.
That's not how it fucking works.
No, it does not.
AR is a specific one there, but just absolutely not.
You have to stop and put the money and the time and do it.
Somebody has to stop and do it.
And that is the difference.
The number one thing I'd say is like the difference between us and anybody else is we believe
in this stuff in our cores.
This is the most important work I'll ever get a chance to do.
This is Xerox PARC-level new stuff where we're rethinking how humans are going to interact
with computers.
It's like J.C.R. Licklider and human-in-the-loop computing.
We're seeing that with AI.
It's a rare moment.
It's a rare moment.
It doesn't even happen once a generation, I think.
It may happen every other generation, every third generation.
Like, you don't get a chance to do this all the time.
So we're not missing it.
We're just like we're going to do it.
And we may fail, like it's possible.
But we will not fail for lack of effort or belief.
Great.
Thanks for the time, Boz.
Cheers.
Yeah, cheers.
Thank you.
Thank you.