Dwarkesh Podcast - Mark Zuckerberg — AI will write most Meta code in 18 months
Episode Date: April 29, 2025

Zuck on:
* Llama 4, benchmark gaming
* Intelligence explosion, business models for AGI
* DeepSeek/China, export controls, & Trump
* Orion glasses, AI relationships, and preventing reward-hacking from our ...tech.

Watch on YouTube; listen on Apple Podcasts and Spotify.

----------

SPONSORS

* Scale is building the infrastructure for safer, smarter AI. Scale's Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you're an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh.
* WorkOS Radar protects your product against bots, fraud, and abuse. Radar uses 80+ signals to identify and block common threats and harmful behavior. Join companies like Cursor, Perplexity, and OpenAI that have eliminated costly free-tier abuse by visiting workos.com/radar.
* Lambda is THE cloud for AI developers, with over 50,000 NVIDIA GPUs ready to go for startups, enterprises, and hyperscalers. By focusing exclusively on AI, Lambda provides cost-effective compute supported by true experts, including a serverless API serving top open-source models like Llama 4 or DeepSeek V3-0324 without rate limits, and available for a free trial at lambda.ai/dwarkesh.

To sponsor a future episode, visit dwarkesh.com/p/advertise.

----------

TIMESTAMPS

(00:00:00) – How Llama 4 compares to other models
(00:11:34) – Intelligence explosion
(00:26:36) – AI friends, therapists & girlfriends
(00:35:10) – DeepSeek & China
(00:39:49) – Open source AI
(00:54:15) – Monetizing AGI
(00:58:32) – The role of a CEO
(01:02:04) – Is big tech aligning with Trump?
(01:07:10) – 100x productivity
Transcript
All right, Mark, thanks for coming on the podcast again.
Yeah, happy to do it.
Good to see you.
You too.
Last time you were here, you had launched Llama 3.
Now you've launched Llama 4.
Well, the first version.
That's right.
What's new?
What's exciting?
What's changed?
Oh, well, I mean, the whole field's so dynamic.
So, I mean, I feel like a ton has changed since the last time that we talked.
Meta AI has almost a billion people using it now, monthly.
So that's pretty wild.
And I think that this is going to be a really big year on all of this,
because especially once you start getting the personalization loop going,
which we're just starting to build in now, really,
from both the context that all the algorithms have about what you're interested in from your feeds,
and all your profile information, all the social graph information,
but also just what you're interacting with the AI about,
I think that's just going to be kind of the next thing that's going to be super exciting.
So really big on that.
The modeling stuff continues to make really impressive advances, too, as you know.
The Llama 4 stuff, I'm pretty happy with the first set of releases.
You know, we announced four models and we released the first two, the Scout and Maverick ones,
which are kind of like the mid-sized models, mid-sized to small.
Actually, the most popular Llama 3 model was the 8 billion parameter model.
So we've got one of those coming in the Llama 4 series too.
Our internal code name for it is Little Llama.
But that's probably coming over the coming months.
The Scout and Maverick ones, I mean, they're good.
They're some of the highest intelligence per cost that you can get of any model that's out there,
natively multimodal, very efficient, run on one host, designed to be very efficient
and low latency for a lot of the use cases that we're building for internally.
And that's our whole thing.
We basically build what we want.
And then we open source it so other people can use it too.
So I'm excited about that.
I'm also excited about the Behemoth model, which is coming up.
That's going to be our first model that is sort of at the frontier.
It's more than two trillion parameters.
So, I mean, as the name says, it's quite big.
So we're kind of trying to figure out how we make that useful for people.
It's so big that we've had to build a bunch of infrastructure just to be able to post-train it ourselves.
And we're kind of trying to wrap our head around how does the average developer out there,
how are they going to be able to use something like this?
And how do we make it so it can be useful for distilling into models that are of reasonable size to run?
Because you're obviously not going to want to run, you know, something like that in a consumer product.
But yeah, I mean, there's a lot to go.
I mean, as you saw with the Llama 3 stuff last year, the initial Llama 3 launch was exciting.
And then we just kind of built on that over the year.
3.1 was when we released the 405 billion parameter model.
3.2 was when we got all the multimodal stuff in.
So we basically have a roadmap like that for this year, too.
So a lot going on.
I'm interested to hear more about it.
There's this impression that the gap between the best closed source and the best open source models
has increased over the last year.
I know the full family of Llama 4 models
isn't out yet, but Llama 4 Maverick is at 35 on
Chatbot Arena, and on a bunch of major benchmarks
it seems like o4-mini or Gemini 2.5 Flash
are beating Maverick, which is in the same class.
What do you make of that impression?
Yeah, well, okay, there's a few things.
I actually think that this has been a very good year
for open source overall.
If you go back to where we were last year,
what we were doing with Llama
was like the only really super innovative open source model.
Now you have a bunch of them in the field.
And I think in general, the prediction that this would be the year
where open source generally overtakes closed source as
the most used models out there, I think is generally on track to be true.
I think the thing that's been sort of an interesting surprise,
I think positive in some ways, negative in others,
but I think overall good is that it's not just Lama.
There are a lot of good ones out there.
So I think that that's quite good.
Then there's the reasoning phenomenon, which you're basically alluding to when talking about o3 and o4 and some of the other models.
I do think that there's this specialization happening where, if you want a model that is the best at math problems or coding or different things like that, these reasoning models with the
ability to just consume more test-time or inference-time compute in order to provide more
intelligence are a really compelling paradigm. We're going to do that too:
we're building a Llama 4 reasoning model, and that'll come out at some point.
For a lot of the things that we care about, latency and good intelligence per cost
are actually much more important product attributes.
If you're primarily designing for consumer product,
people don't necessarily want it to wait like half a minute
to go think through the answer.
If you can provide an answer that's generally quite good too
in like half a second, then that's great.
And that's a good tradeoff.
So I think that both of these are going to end up being important directions.
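To make that tradeoff concrete, here is a minimal sketch of what serving both paradigms side by side could look like: route each request, and spend test-time compute only when the caller can tolerate the latency. Everything here, the model stubs and the latency threshold, is an illustrative assumption, not anything Meta has described.

```python
import time

# Illustrative stand-ins for two deployments; a real system would call
# actual model endpoints rather than these stub functions.
def fast_model(prompt: str) -> str:
    return f"[instant answer] {prompt}"

def reasoning_model(prompt: str) -> str:
    time.sleep(2)  # stands in for extra inference-time compute
    return f"[deliberated answer] {prompt}"

def route(prompt: str, latency_budget_s: float) -> str:
    """Spend test-time compute only when the caller can wait for it."""
    if latency_budget_s < 1.0:
        return fast_model(prompt)   # consumer chat: answer in ~half a second
    return reasoning_model(prompt)  # math/coding: take the slow, deliberate path

print(route("restaurant ideas nearby?", latency_budget_s=0.5))
print(route("prove this inequality", latency_budget_s=30.0))
```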
I am optimistic about integrating the reasoning models
with kind of the core language models over time.
I think that's sort of the direction
that Google has gone in
with some of the more recent Gemini models.
And I think that that's really promising.
But I think that there's just going to be
a bunch of different stuff that goes on.
You also mentioned the whole chatbot arena thing,
which I think is interesting.
And it goes to this challenge around
how do you do the benchmarking, right?
And basically, how do you know
what models are good for which things?
One of the things that we've generally tried to do over the last year is anchor more of our models
in our Meta AI product North Star use cases, because the issue with both open source
benchmarks and any given thing like the LMArena stuff is that they're often skewed for
a very specific set of use cases, which are often not actually what any normal person does
in your product.
The portfolio of things that they're trying to measure
is often weighted differently from what people care about in any given product.
And because of that, we've found that trying to optimize too much for that stuff has often
led us astray and actually not led towards the highest quality products and the most
usage and best feedback within Meta AI as people use our stuff. So we're trying to anchor our North Star
in basically the product value that people kind of report to us and what they say that they want
and what their revealed preferences are, using the experiences that we have. So I think
sometimes these things don't quite line up, and a lot of them are quite easily
gameable, right? So, I mean, on the arena, you'll see stuff like
Sonnet 3.7. It's a great model, right? And it's not near the top. And it was
relatively easy for our team to tune a version of Llama 4 Maverick that was way at the top.
Whereas the one that we released, that's kind of the pure model, actually has no tuning for that
at all, so it's further down. So I think you just need to be careful with some of the benchmarks. And we're
going to index primarily on the products. Do you feel like there is some benchmark that captures
what you see as the North Star of value to the user, which can be sort of objectively measured
between the different models, where you're like, I need Llama 4 to come out on top on this?
Well, I mean, our benchmark is basically user value in Meta AI, right? So it's not comparing
other models. Well, we might be able to, because we might be able to run other models in that and be
able to tell. And I think that's one of the advantages of open source: basically, you have a good
community of folks who can poke holes at, okay, where is your model not good? And where
is it good? But I think the reality at this point is that all these models are optimized for
slightly different mixes of things. I mean, everyone is trying to, I think, go towards the same,
you know, I think all the leading labs are trying to create general intelligence, right? And
superintelligence, whatever you call it, right? Like, basically, AI that can lead
towards a world of abundance where everyone has these superhuman tools to create whatever they want,
and that leads to just dramatically empowering people and creating all these economic benefits.
However you define that, I think that's kind of what a lot of the
labs are going for. But there's no doubt that different folks have optimized
towards different things. I think the Anthropic folks have really focused on coding
and agents around that. You know, the OpenAI folks, I think, have gone a little more towards
reasoning recently. And I think that there is a space which, if I had to guess, will end up
probably being the most used one: one which is quick, very natural to interact with, very
natively multimodal, and that fits in throughout your day, in the ways that you want to interact
with it. And I think you've got a chance to play around
with the new Meta AI app that we're releasing.
And, you know, one of the fun things that we put in there is the demo for the full duplex voice.
And it's early, right?
I mean, there's a reason why we haven't made that the default voice model in the app.
But there's something about how naturally conversational it is that I think is just like really
fun and compelling.
And I think being able to mix kind of that in with the right personalization is going to lead
towards a product experience where, you know, I would basically just guess that you go forward a few years,
like, we're just going to be talking to AI throughout the day about different things that we're
wondering. And, you know, it's like you'll, you'll have your phone, you'll talk to it on your phone,
you'll talk to it while you're browsing your feed apps. It'll give you context about different stuff.
It'll be able to answer questions. It'll help you as you're interacting with people in messaging
apps. You know, eventually, I think we'll walk through our daily lives and we'll either have
glasses or, you know, other kinds of AI devices and just be able to kind of seamlessly interact
with it all day long. So I think that that is, that's kind of the North Star. And whatever
the benchmarks are that lead towards people feeling like the quality is like that's what they
want to interact with, that I think is actually the thing that is ultimately going to matter the most
to us. I got a chance to play around with both Orion and also the Meta AI app. And the voice mode was
super smooth. It was quite impressive.
On the point
of what the different labs are optimizing for,
to steelman their view, I think a lot
of them think that once you fully
automate software engineering and
AI research, then you can kick
off an intelligence explosion where
you have millions of copies of these software
engineers replicating the research that
happened between Llama 1 and Llama 4,
that scale of improvement, again,
in the matter of weeks or months rather than
years. And so it really matters
to just have closed the loop on the
software engineer and then you can be the first to ASI. What do you make of that? Well, I mean,
I personally think that's pretty compelling. And that's why we have a big coding effort too.
I mean, we're working on a number of coding agents inside Meta, you know, because we're not really
an enterprise software company. We're primarily building it for ourselves. So again,
we go for the specific goal. We're not trying to build a general
developer tool; we're trying to build a coding agent and an AI research agent that basically
advances Llama research specifically. And it's just fully plugged into our tool chain
and all this. So I think that that's important and I think is going to end up being an important
part of how this stuff gets done. I would guess that sometime in the next 12 to 18 months
we'll reach the point where most of the code that's going towards these efforts is written by AI.
And I don't mean like auto-complete.
I mean, today you have kind of, you know, good auto-complete.
You start writing something and it can complete the section of code.
I'm talking more like: you give it a goal.
It can run tests, right?
It can kind of improve things.
It can find issues.
It writes higher quality code than the average very good person
on the team already.
And, like, I think that's going to be a really important part of this for sure.
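What he's describing is roughly the agentic loop that coding tools have converged on: give the model a goal, let it propose changes, run the tests, feed failures back, repeat. A minimal sketch of that loop, with the model call and patch application left as hypothetical stubs:

```python
import subprocess

def generate_patch(goal: str, feedback: str) -> str:
    """Hypothetical call into a coding model; not a real API."""
    return ""  # a real implementation would return a code diff

def apply_patch(patch: str) -> None:
    """Hypothetical helper that writes the model's edits to the repo."""
    pass

def run_tests() -> tuple[bool, str]:
    """Run the test suite and capture its output as feedback."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def coding_agent(goal: str, max_iters: int = 10) -> bool:
    """Goal in, working code out: propose, test, repair, repeat."""
    feedback = ""
    for _ in range(max_iters):
        apply_patch(generate_patch(goal, feedback))
        ok, feedback = run_tests()
        if ok:
            return True  # tests pass; the goal is plausibly met
    return False         # out of budget; escalate to a human
```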
But I don't know if that's the whole game.
I mean, I think that that's, that I think is going to be a big industry.
And I think that that's going to be an important part of how AI gets developed.
But, I mean, look, I guess one way to think about this is that this is a massive space.
Right?
So I don't think that there's just going to be, like, one company with
one optimization function that serves everyone as best as possible. I think that there are a bunch of
different labs that are going to be doing leading work towards different domains. Some are going to be
more kind of enterprise focused or coding focused. Some are going to be more productivity focused. Some are
going to be more social or entertainment focused. Within the assistant space, I think there are going to be
some that are much more informational or productivity focused. Some are going to be more companion
focused. There's going to be a lot of stuff that's just fun and entertaining and shows up in
your feed. So I think that there's just a huge amount of space. And
part of what's fun about this is like, it's like going towards this AGI future, there are a bunch
of common threads for what needs to get invented, but there are a lot of things at the end of the
day that need to get created. And I think that that's, I think you'll start to see a little more
specialization between the groups if I had to guess. It's really interesting to me that you
basically agree with the premise that there will be an intelligence explosion,
and something like superintelligence on the other end.
But if that's the case, and tell me if I'm misunderstanding you,
why even bother with personal assistants and whatever?
Why not just get to superintelligence first and then deal with everything else later?
Well, I think that that's just one aspect of the flywheel.
Right.
So part of what I generally disagree with on the fast takeoff thing is it takes time to build out physical infrastructure.
Right?
So if you want to build like a gigawatt cluster of compute,
that's just going to take some time, right? It takes Nvidia a bunch of time to stabilize
their new generation of the systems. And then you need to figure out the networking around it.
And then you need to build the building. You need to get permitting. And you need to get the
energy. And then, whether it's gas turbines or green energy,
there's a whole supply chain of that stuff. So I think there's a lot of, and we talked about this a
bunch the last time that I was on the podcast with you. I think some of these are
just like physical world, human time things that as you start getting more intelligence in one
part of the stack, you'll basically just run into a different set of bottlenecks. I mean,
that's sort of the way that engineering always works. You solve one bottleneck,
you get another bottleneck. Another bottleneck in the system, or another ingredient that's going to
make this all work well, is basically people getting used to, learning, and having a
feedback loop with using the system. So I don't think these systems tend to be
the type of thing where something just shows up fully formed and then people magically
fully know how to use it, and that's the end. I think that there is this
co-evolution that happens, where people are learning how to best use these AI assistants.
On the same side, the AI assistants are learning what those people care about,
and the developers of those AI assistants are able to make the AI assistants better.
And then you're also building up this base of context.
So now you wake up, and you're like a year or two into it.
And now the AI assistant can reference things that you talked about a couple of years ago.
And like, that's pretty cool.
But you couldn't do that if
you just launched the perfect thing on day one.
There's no way that it could reference what you talked about two years ago if it didn't exist two years ago.
So I guess my view is there's this huge intelligence growth.
There's a very rapid curve on the uptake of people interacting with the AI assistants, and the learning feedback and data flywheel around that.
And then there is also the build-out of the supply
chains and infrastructure and regulatory frameworks to enable the scaling of a lot of the physical
infrastructure. But I think at some level, all of those are going to be necessary and not just
the coding piece. I guess one specific example of this that I think is interesting. Actually,
even if you go back a few years ago, we had a project on, I think it was on our ads team,
to automate ranking experiments. That's a pretty constrained environment. It's not like
writing open-ended code. It's basically: look at the whole history of the company, every experiment
that any engineer has ever done in the ad system, look at what worked, what didn't,
and what the results of those were, and formulate new hypotheses for different tests that
we should run that could improve the performance of the ad system. And what we basically found
was that we were bottlenecked on compute to run tests, given
the number of hypotheses. It turns out, even with just the humans that we have right now on the ads
team, we already have more good ideas to test than you actually have either kind of compute or
really cohorts of people to test them with, right? Because even if you have like three and a half
billion people using your products, you still want each, you know, each test needs to be
statistically significant. So it needs to have, you know, some number of whatever it is,
hundreds of thousands or millions of people. And there's kind of only so
much throughput that you can get on testing through that. So we're already at the point,
even with just the people we have, that we already can't really test everything that we want.
So now just being able to test more things is not necessarily going to be additive to that.
We need to get to the point where the average quality of the hypotheses that the AI is generating
is better than everything above the line that we're actually able to test, the things that sort of the best humans on the team have been able to come up with, before it'll even be marginally useful.
So I think we'll get there, and I think pretty quickly. But it's not like, okay, cool, the thing can write code, and all of a sudden everything is just improving massively. There are these real-world constraints. First it needs to be able to do a reasonable
job, then you need to have the compute and the people to test with.
And then over time, as the quality creeps up, I don't know, are we there in five or ten
years, where no set of people can generate a hypothesis as good as the AI system?
I don't know, maybe, right? Then I think in that world, obviously, that's going to be how all
the value is created. But that's not the first step.
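For a sense of scale on the cohort constraint he describes: the required size of each test group falls out of a standard two-proportion power calculation. A quick sketch with illustrative numbers, not Meta's:

```python
from math import ceil
from statistics import NormalDist

def users_per_arm(p_base: float, rel_lift: float,
                  alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per arm for a two-proportion z-test."""
    p_new = p_base * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)           # power threshold
    var = p_base * (1 - p_base) + p_new * (1 - p_new)
    return ceil((z_a + z_b) ** 2 * var / (p_new - p_base) ** 2)

# Detecting a 2% relative lift on a 1% baseline rate at 5% significance
# and 80% power: roughly 3.9 million users in each arm.
print(users_per_arm(p_base=0.01, rel_lift=0.02))
```

So a single small-effect experiment can consume millions of users per arm, which is why even billions of users only buy limited experiment throughput.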
Publicly available data is running out. So major AI labs like Meta, Google DeepMind, and OpenAI all partner with Scale to push the
boundaries of what's possible. Through Scale's Data Foundry, major labs get access to high-quality
data to fuel post-training, including advanced reasoning capabilities. Scale's research team, SEAL, is
creating the foundations for integrating advanced AI into society through practical AI safety
frameworks and public leaderboards around safety and alignment. Their latest leaderboards
include Humanity's Last Exam, EnigmaEval, MultiChallenge, and VISTA, which test a range of
capabilities from expert-level reasoning to multimodal puzzle solving to performance on multi-turn
conversations. Scale also just released Scale Evaluation, which helps diagnose model limitations.
Leading frontier model developers rely on Scale Evaluation to improve the reasoning capabilities
of their best models. If you're an AI researcher or engineer and you want to learn more about
how Scale's Data Foundry and research lab can help you go beyond the current frontier of capabilities,
go to scale.com/dwarkesh.
So if you buy this view that this is where intelligence is headed,
the reason to be bullish on Meta is obviously that you have all this distribution,
which you can also use to learn more things that can be useful for training.
You mentioned the Meta AI app has almost a billion active users.
Not the app. Not the app.
The app is a standalone thing that we're just launching now.
I think it's fun for people who want to use it.
It's a cool experience. We can talk about that. We're kind of experimenting with some new ideas in there that I think are novel and worth talking through. But I'm talking mostly about our apps. Meta AI is actually most used in WhatsApp.
Got it.
So WhatsApp is mostly used outside of the U.S. We just passed like 100 million people in the U.S. But it's not the primary messaging system in the U.S.; iMessage is. So I think people in the U.S. probably tend to underestimate Meta AI use somewhat.
But it's also part of the reason why the standalone app is going to be so important: the U.S. is, you know, for a lot of reasons, one of the most important countries.
And the fact that WhatsApp is the main way that people are using Meta AI,
and that's not the main messaging system in the U.S., means that we need another way to build a first-class experience that's in front of people.
And I guess, to finish the question, the bearish case would be that the future of AI is less about just answering your questions and more about being a virtual
co-worker. It's not clear how Meta AI inside of WhatsApp gives you the relevant training data
to make, you know, a fully autonomous programmer or remote worker. So, yeah, in that case,
does it not matter that much who has more distribution right now with LLMs? Well, again,
I just think that there are going to be different things, right? It's like if you were sitting at the
beginning of the kind of the development of the internet, and it's like, well, what's going to be
the main internet thing. Is it going to be knowledge work or is it going to be like massive consumer
apps? It's like, I don't know, you get both, right? It's like you don't have to choose one, right?
And now, the world is big and complicated and does one company build all that stuff? I think
normally the answer is no. But yeah, no, to your question, people do not code in WhatsApp for the
most part. And I don't foresee that people starting to write code in
WhatsApp is going to be a major, major use case. Although I do think that people are going to ask
AI to do a lot of things that result in the AI coding without them necessarily knowing it.
So that's a separate thing. But we do have a lot of people who are writing code at
Meta, and they use Meta AI. We have this internal thing that we call Metamate, and
a number of different coding and AI research agents that we're building around that.
And that has its own feedback loop, and I think it can get good at accelerating those efforts.
But again, I just think that there are going to be a bunch of things.
I think AI is almost certainly going to unlock this massive revolution in knowledge work and code.
I also think it's going to be kind of the next generation of search and how people get information
and do more complex information tasks.
I also think it's going to be fun.
I think people are going to use it to be entertained.
And a lot of the internet is like memes and humor, right?
And we have this like amazing technology at our fingertips.
And it is sort of amazing and kind of funny when you think about it,
how much of human energy just goes towards entertaining ourselves and pushing culture forward
and finding humorous ways to explain cultural phenomena that we observe.
And I think that that's almost certainly going to be the case in the future, right?
If you look at like the evolution of things like Instagram and Facebook, if you go back 10, 15, 20 years ago, right?
It was like text.
Then we all got phones with cameras.
Most of the content became photos.
Then the mobile networks got good enough that if you wanted to watch a video on your phone, it wasn't just like buffering.
So that got good.
So over the last like 10 years, most of the content has moved, you know, basically towards video at this point.
Most of the time spent in Facebook and Instagram is video.
But like, I don't know, do you think in five years we're just going to be like sitting in our feed and consuming media that's video?
It's like, no, it's going to be interactive, right?
It's like you'll be scrolling through your feed and there will be content that is basically, I don't know, maybe it looks like a reel to start.
But then like you talk to it or you interact with it and it talks back or it changes what it's doing or you can jump into it like a game and interact with it.
And that's all going to be like AI.
Right.
So I guess my point is there's just all these different things.
And I guess we're ambitious, so we're working on a bunch of them.
But I don't think any one company is going to do all of it.
Okay, so on this point of AI-generated content or AI interactions,
already people have meaningful relationships with AI therapists, AI friends,
you know, maybe more.
And this is just going to get more intense as these AIs become more unique
and more personable, more intelligent, more spontaneous and funny and so forth.
People are going to have
relationships with the AIs. How do we make sure that these are healthy relationships? Well, I think
there are a lot of questions that you only really can answer as you start seeing the behaviors.
So probably the most important upfront thing is just like ask that question and care about it at
each step along the way. But I think also being too prescriptive up front and saying we think
these things are not good often cuts off value, right? Because I don't know, people use stuff that's
valuable for them. I mean, one of my core guiding principles in designing
products is that people are smart, right? They know what is valuable in their lives.
Every once in a while, something bad happens in a product, and
you want to make sure that you design your products well to minimize that. But
if you think that something someone is doing is bad, and they think it's really valuable, most of
the time in my experience they're right and you're wrong, and you just haven't come up with
the framework yet for understanding why the thing that they're doing
is valuable and helpful in their life.
Yeah, so that's kind of the main way that I think about it.
I do think that people are going to use AI for a lot of these social tasks.
Already, one of the main things that we see people using the AI for is talking
through difficult conversations that they need to have with people in their life.
It's like, okay, I'm having this issue with my girlfriend or whatever.
Help me have this conversation. Or, I need to have this hard conversation
with my boss at work. Like, how do I have that conversation? That's pretty helpful. And then I think
as the personalization loop kicks in and the AI just starts to get to know you better and better,
I think that will just be really compelling. You know, one thing just from working on social
media for a long time is there's the stat that I always think is crazy. The average American, I think,
has, I think it's fewer than three friends. Three people that they'd consider friends. And
the average person has demand for meaningfully more. I think it's like 15 friends or something,
right? I guess there's probably some point where you're like, all right, I'm just too busy.
I can't deal with more people. But the average person wants more connection than they
have. So, you know, there are a lot of questions that people ask, like, okay,
is this going to replace kind of in-person connections or real-life connections?
And my default is that the answer to that is probably no.
I think it, you know, I think that there are all these things that are better about
kind of physical connections when you can have them.
But the reality is that people just don't have the connection and they feel more alone
a lot of the time than they would like.
So I think that a lot of these things that today,
there might be a little bit of a stigma around.
I would guess that over time,
we will find the vocabulary as a society
to be able to articulate why that is valuable
and why the people who are doing these things
are rational for doing it
and how it is adding value to their lives.
But also I think that the field is very early.
So, I mean, I think, you know,
there are a handful of companies
doing virtual therapists.
And, you know, there's
virtual girlfriend type stuff, but it's very early, right?
The embodiment in these things is pretty weak. A lot of them, you open it up and it's
just an image of the therapist or the person you're talking to or whatever.
I mean, sometimes there's some very rough animation, but it's not an embodiment. I mean,
you've seen the stuff that we're working on in Reality Labs, where you have the Codec
Avatars and it feels like it's a real person. I think that's kind of where it's going.
You'll basically be able to have
an always-on video chat, and the AI will be able to do the gestures too.
Gestures are important: more than half of communication, when you're actually having a
conversation, is not the words that you speak. It's all the nonverbal stuff. Yeah. I did get a chance
to check out Orion the other day, and I thought it was super impressive. And I'm mostly optimistic
about the technology. Just because generally I'm, as you mentioned, like libertarian about if people
are doing something, probably I think it's good for them. Although I actually don't know if it's the
case that if somebody is using TikTok, they would say that they're happy with how much time
they're spending on TikTok or something. So I'm mostly optimistic about it also in the sense that
if we're going to be living in this future world of AGI, then in order to keep up with
that, humans need to be upgrading our capabilities as well with tools like this. And just generally,
there can be more beauty in the world if you can see Studio Ghibli everywhere or something.
I was worried that one of the flagship use cases that your team showed me was I'm sitting
at the breakfast table and on the periphery of my vision is just a bunch of reels that are
scrolling by. Maybe in the future, my AI girlfriend is on the other side of the screen or something.
And so I am worried that we're just removing all the friction between us and getting totally
reward-hacked by our technology. Yeah, how do we make sure, I don't know, that this is not what
ends up happening in five years? I mean, again, I think people have a good sense of what they want.
I mean, that experience that you saw was a demo just to show multitasking and holograms, right?
So, I mean, I agree. I don't think that the future is that you have stuff that's trying to compete for your attention in the corner of your vision all the time.
I don't think people would like that too much.
So it's actually one of the things, as we're designing these glasses, that we're really mindful of: probably the number one thing that glasses need to do is get out of the way and be good glasses,
right? And as an aside, I think that's part of the reason why the Ray-Ban Meta product has done
so well. It's great for listening to music and taking phone calls and
taking photos and videos, and the AI is there when you want it. But when you don't, it's a great,
good-looking pair of glasses that people like, and it kind of gets out of the way
well. I would guess that that's going to be a very important design principle for the
augmented reality future. The main thing that I see here is, you know, I think it's kind of
crazy that for how important the digital world is in all of our lives, the only way we can
access it is through these physical, you know, digital screens, right?
You have a phone, you have your computer, you can put up a big TV.
It's this huge physical thing.
It just seems like we're at the point with technology where the physical and the digital
world should really be fully blended.
And that's what the holographic overlays allow you to do.
But I agree.
I think a big part of the design principles around that are going to be, okay, you'll be
interacting with people and you'll be able to bring digital artifacts into those interactions
and be able to do cool things like very seamlessly, right? It's like if I want to show you something
here, like here's a screen, okay, here it is, I can show you, you can interact with it, it can be
3D, we can kind of play with it. You want to play a card game or whatever. It's like,
all right, here's like a deck of cards. We can play with it. It's like two of us are here physically.
Like you have a third friend who's just holograming in, right? And they can kind of
participate too. But I think that in that world, just like you
don't want your physical space to be cluttered, because it kind of
wears on you psychologically, I don't think people are going to want the digital-physical
space to feel that way either. So I don't know. That's more of an aesthetic, and one of
these norms that I think will have to get worked out. But I think we'll figure that out.
Going back to the AI conversation, you're mentioning how big of a bottleneck the physical infrastructure can be.
Related to other open source models like DeepSeek and so forth: DeepSeek right now has less compute than a lab like Meta,
and you could argue that it's competitive with the Llama models.
If China is better at physical infrastructure, industrial scale-ups, getting more power and more data centers online, how worried are you that they might beat us here?
I mean, I think it's like a real competition.
I mean, I think that you're seeing the industrial policies really play out where, yeah, I mean, I think China's bringing online more power.
And because of that, I think that the U.S. really needs to focus on streamlining the ability to build data centers and build and produce energy.
Or I think we will be at a significant disadvantage.
At the same time, I think some of the export controls on things like chips, you can see how they're clearly working in a way, because, you know, there was all the conversation with DeepSeek about, oh, they did all these very impressive low-level optimizations.
And the reality is they did, and that is impressive.
But then you ask, why did they have to do that when none of the American labs did it?
And it's like, well, because they're using partially nerfed chips that are the only thing that
Nvidia is allowed to sell in China because of the export controls. So DeepSeek basically
had to go spend a bunch of their calories and time doing low-level infrastructure optimizations
that the American labs didn't have to do. Now, they produced a good result on text, right?
I mean, DeepSeek is text only. So the infrastructure stuff is impressive,
and the text result is impressive.
But every new major model that comes out now is multimodal, right?
It's image, it's voice, and theirs isn't.
And now the question is, why is that the case?
I don't think it's because they're not capable of doing it.
I think that they basically had to spend their calories on doing these infrastructure
optimizations to overcome the fact that there were these export controls.
But when you compare Llama 4 with DeepSeek, I mean, our reasoning model isn't out yet.
So I think that the R1 comparison isn't clear yet.
But we're basically in effectively the same ballpark on all the text stuff as what DeepSeek is doing, but with a smaller model.
So it's much more efficient: the cost per intelligence is lower with what we're doing for Llama on text.
And then all the multimodal stuff we're effectively leading at, and it just doesn't even exist in their stuff.
So I think that the Llama 4 models, when you compare them to what they're doing, are good.
And I think generally people are going to prefer to use the Llama 4 models.
But I think that there is this interesting contour where, like, it's clearly a good team that's doing stuff over there.
And I think you're right to ask about the accessibility of power, the accessibility of
compute and chips and things like that, because I think the kind of work that you're seeing
the different labs do and play out is somewhat downstream of that.
Freemium products attract a ton of fake account signups, bot traffic, and free-tier
abuse. And AI is so good now that it's basically useless to just have a captcha of six squiggly
numbers on your signup page. Take Cursor. People were going to insane lengths to take advantage
of Cursor's free credits: creating and deleting thousands of accounts, sharing logins, even coordinating
through Reddit. And all this was costing Cursor a ton of money in terms of inference compute and
LLM API calls. Then they plugged in WorkOS Radar. Radar distinguishes humans from bots. It looks at
over 80 different signals, from your IP address to your browser to even the fonts installed
on your computer, to ensure that only real users can get through. Radar currently runs millions
of checks per week. And when you plug Radar into your own product,
you immediately benefit from the millions of training examples that Radar has already seen through other top companies.
Previously, building this level of advanced protection in-house was only possible for huge companies.
But now, with WorkOS Radar, advanced security is just an API call away.
Learn more at workos.com/radar.
All right, back to Zuck.
So Sam Altman recently tweeted that OpenAI is going to release an open source SOTA reasoning model.
I think part of the tweet was that we will not do anything silly, like say that you can only use it if you have less than 700 million users.
DeepSeek has the MIT license, whereas with Llama, I think a couple of the contingencies in the Llama license require you to say "built with Llama" on applications using it, and any model that you train using Llama has to begin with the word "Llama."
What do you think about the license?
Should it be less onerous for developers?
I mean, look, we've basically pioneered the open source
LLM thing. So I don't consider the license to be
onerous. You know, when we were starting to push on open
source, there was this big debate in the industry of, is this even a
reasonable thing to do? Can you do something that is safe and trustworthy with open
source? Will open source be able to be competitive enough that anyone will even
care? And basically we were answering those questions, which took a lot of hard work
from a lot of the teams at Meta, although there are other folks in the
industry, but really the Llama models were the ones that I think broke open
this whole open source AI thing in a huge way. We were very focused on, okay, if we're
going to put all this energy into it, then at a minimum, you know, if you're going to have these
large cloud companies like Microsoft and Amazon and Google turn around and sell our model,
that we should at least be able to have a conversation with them before they do that around
basically, okay, what kind of business arrangement should we have? But our goal with the
license isn't, you know, we're generally not trying to stop people from using the model.
We just think, okay, if you're one of those companies, or, I don't know, if you're
Apple, just come talk to us about what you want to do,
and let's find a productive way to do it together.
So I think that that's generally been fine.
Now, if the whole open source part of the industry evolves in a direction where
there are a lot of other great options,
and the license ends up being a reason why people don't want to use Llama,
then, I don't know, we'll have to reevaluate the strategy and
what makes sense to do at that point.
But I just don't think we're there.
In practice, we have not
seen companies coming to us and saying, we don't want to use this because your license
says that if you reach 700 million people, you have to come talk to us. So far,
it's a little bit more of something that we've heard from open source purists.
Like, is this as clean of an open source model as you'd like it to be?
And look, I mean, I think that debate has existed since the beginning of open source, with
all the GPL license stuff versus other things. It's like, okay,
does it need to be the case that anything that touches open source has to be
open source, or can people just take it and use it in different ways? And I'm sure there will continue
to be debates around this. But I don't know, if you're spending many, many billions of
dollars training these models, then asking the other companies that are also huge and similar
in size, and can easily afford to have a relationship
with us, to talk to us before they use it?
I think it seems like a pretty reasonable thing.
If it turns out that, you know, there are a bunch of
good open source models, so that part of your mission is fulfilled,
and maybe other models are better at coding,
is there a world where you'd just say, look, the open source ecosystem is healthy,
there's plenty of competition,
we're happy to just use some other model, whether it's for internal software engineering at
Meta or deploying to our apps,
and we don't necessarily need to build with Llama?
Well, again, I mean, we do a lot of things.
So it's possible that, you know, I guess let's take a step back.
The reason why we're building our own big models is because we want to be able to build exactly what we want.
Right.
And none of the other models in the world are sort of exactly what we want.
If they're open source, then you can take them and you can fine-tune them in different ways.
But you still have to deal with the model architectures, and, you know, they make different size trade-offs
around that which affect the latency and inference cost of the models. And it's like, okay, at the scale
that we operate at, that stuff really matters. We made the Llama Scout and Maverick models
certain sizes for a specific reason, because they fit on a host and we wanted certain latency,
especially for the voice models that we're working on, which we want to basically have
pervade everything that we're doing, from the glasses to all of our apps to the
Meta AI app and all this stuff.
So I think that there's a level of kind of control of your own destiny that you only get when you build the stuff yourself.
That said, there are a lot of things that, like, AI is going to be used in every single thing that every company does.
When we build a big model, we also need to choose which things, which use cases internally we're going to optimize for.
So does that mean that for certain things we might think, okay, maybe Claude is better for building this specific development tool that this team is using?
All right, cool, then use that.
Fine.
Great.
We don't want to fight with, you know, one hand tied behind our back.
We're doing a lot of different stuff.
You also asked whether it would maybe not be important for us because other people are doing open source.
I don't know.
On this, I'm a little more worried, because I think, for anyone who shows up now and is doing open source now that we have done it, there's a question: would they still be doing open source if we weren't doing it?
And I think that there are a handful of folks who see the trend that more and more development is going towards
open source and think, oh crap, we kind of need to be on this train or else we're going to lose.
It's like, we have some closed model API, and increasingly, for a lot of developers, that's not what they want.
So I think you're seeing a bunch of the other players start to do some work in open source.
But it's just unclear if it's dabbling, or fundamental for them in the way
that it has been for us.
And a good example is what's going on with Android.
Android started off as the open source thing.
There's not really any open source alternative.
And I think over time, Android has just been kind of getting more and more closed.
So I think if you're us, you'd kind of need to worry that if we stopped pushing the industry in this direction,
that like all these other people maybe are only really doing it because they're trying
to kind of compete with us in the direction that we're pushing things.
And, you know, they already have their revealed preference for what they would build if
open source didn't exist.
And it wasn't open source, right?
So I just think we need to be careful about relying on that continued behavior for the
future of the technology that we're going to build at the company.
I mean, another thing I've heard you mention is that it's important that the standard
gets built around American models like Llama.
I guess I wanted to understand your logic there
because it seems like with certain kinds of networks,
it is the case that the Apple App Store
just has a big contingency around what it's built around.
But it doesn't seem like, if you built some sort of scaffold for DeepSeek,
you couldn't have easily just switched it over to Llama 4,
especially since, between generations,
Llama 3 wasn't MoE and Llama 4 is.
So things are changing between generations of models as well.
So what's the reason for thinking things will get built out
in this contingent way on a specific standard?
I'm not sure what do you mean by contingent.
Oh, as in, it's important that people are building for Llama rather than for LLMs in general,
because that will determine what the standard is for the future.
Well, look, I mean, I think these models encode values and ways of thinking about the world.
And, you know, we had this interesting experience early on where we took an early version of Llama and we translated it.
I think it might have been into French or some other language.
And the feedback that we got,
I think it was French,
from French people was: this sounds like an American
who learned to speak French.
It doesn't sound like a French person.
And it's like, well, what do you mean?
Does it not speak French well? It's like, no, it speaks French fine.
It's just that the way that it thinks about the world
seems slightly American.
So I think there are these subtle things
that kind of get built into it.
Over time, as the models get more sophisticated,
they should be able to embody different value sets across the world.
So maybe that's like a very kind of,
you know, not particularly sophisticated example,
but I think it sort of illustrates the point.
And, you know, some of the stuff that we've seen in testing
some of the models, especially coming out of China,
is like they sort of have certain values encoded in them.
And it's not just a light fine-tune to
get that to feel the way that you want. Now, the stuff is different, right? So I think language models,
or anything that has a kind of world model embedded into it, have more values. With reasoning,
I guess there are values or ways to think about reasoning, but
one of the things that's nice about the reasoning models is they're trained on verifiable problems.
So do you need to be worried about cultural bias if your model is doing math? Probably not.
Right. The chance that some reasoning model that was built elsewhere is going to incept you by solving a math problem in a way that's devious seems low. There's a whole set of different issues, I think, around coding, which is the other verifiable domain. You kind of need to be worried about waking up one day and asking,
can a model that has some tie to another government embed all kinds of
different vulnerabilities in code that the intelligence organizations associated with that
government can then go exploit?
So now imagine some future version where you have
some model from some other country that we're using to secure or build out a lot of our
systems.
And then all of a sudden you wake up and everything is vulnerable in a way
that that country knows about but you don't, or it turns on a vulnerability at some
point. Those are real issues. Now, I'm very interested
in studying this, because I think one of the main things that's interesting about open source
is the ability to distill models. For most people, the primary value isn't just
taking a model off the shelf and saying, okay, Meta built this version of Llama, I'm going to take it
and I'm going to run it exactly in my application.
It's like, no, well, your application isn't doing anything different if you're just running our thing.
You're at least going to fine tune it or try to distill it into a different model.
And when we get to stuff like the Behemoth model, the whole value in that is being able to basically take this very high amount of intelligence and distill it down into a smaller model that you're actually going to run.
This is the beauty of distillation.
It's one of the things that has really emerged as a very powerful technique in the last year since the last time we sat down,
and I think it's worked better than most people would predict: you can basically take a model that is much bigger and get probably 90 or 95% of its intelligence and run it in something that's 10% the size.
Now, do you get 100% of the intelligence?
No.
But like 95% of the intelligence at 10% of the cost is like pretty good for a lot of things.
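For reference, the standard recipe behind those numbers is a temperature-softened loss that trains the student to match the teacher's token distribution. A minimal PyTorch sketch of that loss; the actual Llama training setup isn't specified in the conversation:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions
    (Hinton et al.-style knowledge distillation)."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # The t**2 factor rescales gradients to the hard-label loss scale.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t ** 2)

# Toy usage: a batch of 4 token positions over a 32k-token vocabulary.
teacher = torch.randn(4, 32_000)                       # frozen big model
student = torch.randn(4, 32_000, requires_grad=True)   # small model
loss = distillation_loss(student, teacher)
loss.backward()
```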
The other thing that's interesting is that now, with this more varied open source community, it's not just Llama, you have other models,
you have the ability to distill from multiple sources.
So now you can basically say, okay, Llama's really good at this.
Maybe the architecture is really good because it's fundamentally multimodal and fundamentally
more inference-friendly and more efficient.
But let's say this other model is better at coding.
Okay, well, you can distill from both of them and then build something that's better
than either of them for your own use case.
So that's cool.
But you do need to solve the security problem of knowing that you can distill
it in a way that is safe and secure.
And so this is something that we've been researching and have put a lot of time into.
And what we've basically come to is: look, anything that's language is
quite fraught, because there are a lot of values embedded in that.
So unless you don't care about inheriting the values of whatever model you got,
you probably don't want to distill the straight language world model.
On reasoning, I think you can get a lot of the way there by limiting it to verifiable domains,
running code cleanliness and security filters, whether it's the Llama Guard open source
or the Code Shield open source things that we've done, that basically allow you to incorporate
different input into your models and make sure that both the input and the output are secure,
and then just a lot of red teaming, making sure that you have people or experts who are looking at this.
It's like, all right, is this model doing anything that isn't what I want after distilling from something?
And I think with a combination of those techniques, you can probably distill on the reasoning side for verifiable domains quite securely.
That's something I'm pretty confident about.
And it's something that we've done a lot of research around.
But I think this is a very big question, is like, how do you do good distillation?
Because there's just so much value to be unlocked.
But at the same time, I do just think that there is some fundamental bias in the different models.
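Putting his recipe together, the distillation pipeline for a verifiable domain roughly looks like: generate with the untrusted teacher, mechanically verify each answer, run safety filters, and only then distill. A minimal sketch; `teacher_generate` and both checks are hypothetical stand-ins, and a real Llama Guard / Code Shield integration is not shown here:

```python
def verify_answer(problem: dict, output: str) -> bool:
    """Verifiable domain: the dataset carries a reference answer, so the
    untrusted teacher's output can be checked mechanically."""
    return output.strip().endswith(problem["reference_answer"])

def passes_safety_filters(output: str) -> bool:
    """Stand-in for classifier filters such as Llama Guard or Code Shield;
    a real pipeline would call those models here."""
    return "<script>" not in output  # placeholder check only

def build_distillation_set(problems: list[dict], teacher_generate) -> list[dict]:
    """Keep only teacher outputs that are verified correct AND pass the
    filters; distill the student from this vetted set, not raw outputs."""
    kept = []
    for prob in problems:
        out = teacher_generate(prob["question"])
        if verify_answer(prob, out) and passes_safety_filters(out):
            kept.append({"prompt": prob["question"], "target": out})
    return kept
```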
Speaking of value to be unlocked, what do you think the right way to monetize AI will be?
Because obviously digital ads are quite lucrative.
But as a fraction of total GDP, it's small.
In comparison to, like, all remote work, like, even if you can increase its productivity and not replace work,
that's still worth tens of trillions of dollars.
So is it possible that ads might not be it?
Yeah, how do you think about this?
I mean, like we were talking about before,
there's going to be all these different applications
and different applications tend towards different things.
Ads is great when you want to offer people a free service, right?
Because if it's free, you need to cover the cost somehow.
Ads solves this problem: a person does not need to pay for something, and they can get something that is amazing for free.
And also, by the way, with modern ad systems, a lot of the time people think that the ads add value to the thing, if you do it well.
You need to be good at ranking, and you need to be good at having enough liquidity of advertising inventory.
That way, if you only have five advertisers in the system, no matter how good you are at ranking, you may not be able to show something to someone that they're interested in.
But if you have a million advertisers in the system, then you're probably going to be able to find something pretty compelling, if you're good at picking out the different needles in the haystack that that person's going to be
interested in. So I think that definitely has its place. But there are also clearly going to be
other business models as well, including ones that just have higher costs. So it doesn't even
make sense to offer them for free, which, by the way, there have always been business models like
this. There's a reason why social media is free and ad supported. But then if you want to watch
Netflix or like ESPN or something, you need to pay for that. It's okay, because the content
that's going into that, like they need to produce it, and that's very expensive for them to produce,
and they probably could not have enough ads in the service in order to make up for the
cost of producing the content. So basically, you just need to pay to access it. Then the
trade-off is fewer people do it, right? You're talking about hundreds of millions of people using those instead of billions. So there's kind of a value switch there. I think it's similar here. You know, not everyone is going to want a software engineer or a thousand
software engineering agents or whatever it is. But if you do, that's something that you are probably going to be willing to pay thousands or tens of thousands or hundreds of
thousands of dollars for. So I think that this just speaks to the diversity of different things
that need to get created is like, there are going to be business models at each point along the
spectrum. And at Meta, yeah, for the consumer piece, we definitely want to have a free thing. And I'm sure
that will end up being ad-supported. But I also think we're going to want to have a business model
that supports people using arbitrary amounts of compute to do like really even more amazing things
than what it would make sense to be able to offer the free service. And for that, I'm sure we'll
end up having a premium service. But I mean, our basic, you know, value on this is that we want to serve as many people in the world as we can.
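(Circling back to the ad-ranking point above: a toy sketch of why liquidity matters, not Meta's actual system. Rank candidate ads by expected value, bid times predicted click-through rate, and notice how the best available match improves as the advertiser pool grows. All names and numbers here are made up.)

```python
import random
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid: float      # price the advertiser pays per click
    p_click: float  # model's predicted click-through rate for this user

def best_ad(candidates: list[Ad]) -> Ad:
    # Rank by expected value per impression: bid * predicted CTR.
    return max(candidates, key=lambda ad: ad.bid * ad.p_click)

def avg_best_relevance(n_ads: int, trials: int = 500) -> float:
    # With relevance drawn uniformly at random, the expected best of n
    # candidates is n / (n + 1): more liquidity, better matches.
    total = 0.0
    for _ in range(trials):
        pool = [Ad(f"adv{i}", 1.0, random.random()) for i in range(n_ads)]
        total += best_ad(pool).p_click
    return total / trials

print(avg_best_relevance(5))     # ~0.83: five advertisers, mediocre matches
print(avg_best_relevance(1000))  # ~0.999: deep inventory, near-perfect matches
```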
Lambda is the cloud for AI developers.
They have over 50,000 Nvidia GPUs ready to go
for startups, enterprises, and hyperscalers.
Compute seems like a commodity though,
so why use Lambda over anybody else?
Well, unlike other cloud providers,
Lambda's only focus is AI.
This means their GPU instances and on-demand clusters
have all the tools that AI developers need pre-installed.
No need to manually install CUDA drivers
or manage Kubernetes.
And if you only need GPU compute,
you can save a ton of money
by not paying for the overhead
of general purpose cloud architectures.
Lambda even has contracts
that let enterprises use any type of GPU
in their portfolio and easily upgrade
to the next generation.
For all of you wanting to build with Llama 4,
Lambda has a serverless API without rate limits.
It's built with rapid scaling in mind.
Users can 1,000x their inference consumption
without ever having to apply for a quota
or even speak to a human.
Head to lambda.ai/dwarkesh for a free trial of their inference API
featuring the best open source models like DeepSeek and Llama 4
at the lowest prices in the industry.
All right, back to Zuck.
How do you keep track of, you've got all these different projects,
some of which we've talked about today.
I'm sure there's many I don't even know about.
As the CEO overseeing everything,
there's a big spectrum between, like, you know, going to the Llama team and saying, here are the hyperparameters you should use, to just giving a mandate, like, go make the AI better. And there's many different projects.
How do you think about the way in which you can best deliver your value add and oversee all these things?
Well, I mean, a lot of what I spend my time on is trying to get awesome people onto the teams.
Right?
I mean, so there's that.
And then there's stuff that cuts across teams.
It's like, all right, you build Meta AI and you want to get it into WhatsApp or Instagram.
It's like, okay, now I need to get those teams to talk together.
And then there's a bunch of questions like, okay, do you want the thread for Meta AI in WhatsApp to feel like other WhatsApp threads,
or do you want it to feel like other AI chat experiences?
There's like different idioms for those.
And so I think that there's like all these interesting questions that sort of need to get answered around like,
how does this stuff basically fit into all of what we're doing?
Then there's a whole other part of what we're doing, which is basically pushing on the infrastructure.
If you want to stand up a gigawatt cluster, then first of all, that has a lot of implications for the way that we're doing infrastructure buildouts.
It has sort of political implications for how you engage with the different states where you're building that stuff.
It has financial implications for the company in terms of, all right, there's like a lot of economic uncertainty in the world.
Do we go double down on infrastructure right now?
And if so, what other tradeoffs do we want to make around the company?
Those are things that, like, it's tough for other people to really make those kind of decisions.
And then I think that there's this question around, like, taste and quality, which is, like,
when is something good enough that we want to ship it?
And I do feel like, in general, I'm the steward of that for the company, although we have a lot of other people who, I think, have good taste as well and who are also filters for different things.
But yeah, I think that those are basically the areas.
But I think AI is interesting because more than some of the other stuff that we do,
it is more research- and model-led than product-led.
Like, you can't just design the product that you want and then try to build the model to fit into it.
You really need to design the model first, and the capabilities that you want, and you get some emergent properties.
Then it's like, oh, you can build some different stuff because this kind of turned out in this way.
And I think at the end of the day, like, people want to use the best model, right?
So that's partially why, you know, when we're talking about building the most, like, personal AI,
the best voice, the best personalization, and like also a very smart experience with very low latency,
those are the things that we basically need to design the whole system to build.
That's why we're working on full duplex voice, why we're working on personalization that both has good memory extraction from your interactions with the AI and can plug into all the other Meta systems, and why we designed the specific models that we designed to have the kind of size and latency parameters that
they do. Speaking of politics, there's been this perception that some tech leaders have been
aligning with Trump. You and others have donated to his inaugural event and were on stage with him, and I think you settled a lawsuit, which resulted in them getting $25 million.
I wonder what's going on here. Does it feel like the cost of doing business with the
administration or, yeah, what's the best way to think about this?
I mean, my view on this is like he's the president of the United States. Our default as an
American company should be to try to have a productive relationship with whoever is running the
government. I would do this, you know; like, we've tried to offer to support the previous administration as well. I've been pretty public with some of my frustrations with the previous administration, how they basically did not engage with us or the business community more broadly, which, frankly, I think is going to be necessary to make progress on some of these things. Like, we're not going to be able to build the level of energy that we need if you don't have a dialogue and they're not prioritizing trying to do those things. But fundamentally, you know,
look, I mean, I think a lot of people want to write the story about, you know, what direction people are going.
I just think we're trying to build great stuff.
We want to work with people and have a productive relationship with them.
And that's how I see it.
And it is also how I would guess most others see it.
But obviously I can't speak for them.
You've spoken out about how you've rethought some of the ways in which you engage
and defer to the government in terms of moderation and stuff in the past. How are you thinking about AI governance? Because if AI is as powerful as we think it might
be, the government will want to get involved. What is the most productive approach to take
there? And what should the government be thinking about here? Yeah, I guess in the past, most of the comments that I made, I think, were in the context of content moderation, where, you know, it's been an interesting journey over the last 10 years
on this where there's obviously been an interesting time in history. There have been novel
questions raised about online content moderation. Some of those have led to, I think, productive
new systems getting built, like our AI systems to be able to detect nation states trying
to interfere in each other's elections. I think we will continue building that stuff out, and that, I think, has been net positive. I think on other stuff, we went down some bad paths.
Like, I just think the fact-checking thing was not as effective as community notes because
it's not an internet-scale solution. There weren't enough fact checkers, and people didn't trust the specific fact checkers. You want a more robust system. So I think where we got with community notes is the right one on that.
But my point on this was more that I think historically I probably deferred a little bit too much to either the media, in kind of their critiques, or the government, on things that they did not really have authority over, but just because they were, like, a central figure.
Like, I think we tried to build systems so that maybe we would not have to make all of the content moderation decisions ourselves or something.
And I guess I just think part of the growth process over the last 10 years is just, okay,
like we're a meaningful company. We need to own the decisions that we make. We should listen to feedback from people, but we shouldn't defer too much to people who do not actually have authority over this, because at the end of the day, we're in the seat and we need to own the decisions that we make. And so, you know, it's been a maturation process, and in some ways painful, but I think we're probably a better company for it.
Will tariffs increase the cost of building data centers in the U.S. and shift buildouts to Europe and Asia?
It is really hard to know how that plays out.
I think we're probably in the early innings on that, and it's very hard to know.
Got it.
What is your single highest leverage hour in a week?
What are you doing in that hour?
I don't know.
I mean, every week is a little bit different.
I mean, it's probably got to be the case that the most leveraged thing that you do in a week is not the same thing each week, or else by definition you should probably spend more than one hour doing that thing every week. But yeah, I don't know. It's part of the fun of, I guess, both this job
and the industry being so dynamic: things really move around, right? And the world is very different now than it was at the beginning of the year, or than it was in the middle of last year. I think a lot has really advanced meaningfully.
And like a lot of cards have been turned over since the last time that we sat down.
I think that was about a year ago, right?
Yeah, yeah.
Or I guess, as you were saying earlier, recruiting people is a super high leverage thing you do.
It's very high leverage.
Yeah.
What would be possible if, you know, you talked about these models being mid-level software engineers by the end of the year?
What would be possible if, say, software productivity increased like 100x in two years?
What kinds of things could be built that we can't build right now?
What kinds of things?
Well, that's an interesting question.
I mean, I think one theme of this conversation is that the amount of creativity that's going to be unlocked is going to be massive.
And if you look at the overall arc of human society and the economy over 100 or 150 years, it's basically people going from being primarily agrarian, with most of human energy going towards just feeding ourselves, to that becoming a smaller and smaller percent. The things that take care of our basic physical needs are a smaller and smaller percent of human energy, which has led to two impacts. One is that more people are doing creative and cultural pursuits, and two is that people in general spend less time working and more time on entertainment and culture. I think that that is almost certainly
going to continue as this goes on. This isn't like the one to two year thing of what happens when
you have a super powerful software engineer. But I think over time, you know, everyone
is going to have these superhuman tools to be able to create a ton of different stuff. I think
you're going to get this incredible diversity. Part of it is going to be solving the things that we hold up as the hard problems, like curing diseases, or solving different things around science, or just different technology that makes our lives better.
But I would guess that a lot of it is going to end up being kind of cultural and social pursuits and entertainment. And I would guess that the world is going to get a lot funnier and weirder and quirkier, in the way that memes on the internet have gotten over the last 10 years. And I think that adds a certain kind of richness and depth as well, and in kind of funny ways, I think it actually helps you connect better with people. Because now, I don't know, all day long I just find interesting stuff on the internet and send it in group chats to the people I care about who I think are going to find it funny. The media that people can produce today can express very, very nuanced, specific cultural ideas. I don't know. It's cool. And I think
that'll continue to get built out. And I think it does advance society in a bunch of ways,
even if it's not the hard science way of curing a disease. But I guess this is sort of, if you think about it, the Meta social media view of the world: yeah, I think people are going to spend a lot more time doing that stuff in the future. And it's going to be a lot better. And it's going to help you connect, because it's going to help express different
ideas, because the world's going to get more complicated. But our technology, our cultural technology to express these very complicated things in a very funny little clip or something, is going to just get so much better. So I think that's all great. I don't know.
I guess one other thought that I think is interesting to cover is that I tend to think, for at least the foreseeable future, this is going to lead towards more demand for people doing work, not less.
Now, people have a choice of how much time they want to spend working.
But I'll give you one interesting example of something that we were talking about recently.
We have, like, three, almost three and a half billion people using our services every day.
And one question that we've struggled with forever is like, how do we provide customer support?
Today, you can write an email.
But we've never seriously been able to contemplate having voice support where someone can just call in.
And I guess that's maybe one of the artifacts of having a free service, right?
It's like, the revenue per person's not so high that you can have an economic model where people can call in.
But also, with three and a half billion people using your service every day, there would be a massive, massive number of people calling in, like the biggest call center in the world type of thing.
It would be like $10 or $20 billion, some ridiculous amount a year, to staff that.
So we've never really kind of like thought too seriously about it because it was always just like, no, there's no way that this kind of makes sense.
But now as the AI gets better, you're going to get to this place where the AI can handle a bunch of people's issues.
Not all of them, right?
Maybe 10 years from now or something, it can handle all of them.
But when we're thinking about a three to five year time horizon,
it'll be able to handle a bunch
kind of like self-driving cars
can handle a bunch of terrain
but in general they're not like
doing the whole route by themselves
yet in most cases
right? It's like people thought truck driving jobs
were going to go away. There's actually more truck driving jobs
now than there were like when we started
talking about self-driving cars
in whatever it was almost 20 years ago
And going back to this customer support thing, it's like, all right,
it wouldn't make sense for us to staff out calling for everyone,
but let's say the AI can handle 90% of that.
Then, if it can't handle something, it kicks it off to a person.
Okay, now, like, if you've gotten the cost
of providing that service down to one-tenth
of what it would have otherwise been,
then, all right,
maybe now that actually makes sense to go do.
And that would be kind of cool.
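(To put rough numbers on that reasoning: a toy cost model where the AI deflects 90% of calls. Only the $10-20 billion staffing estimate and the 90% figure come from the conversation; every other number is a hypothetical stand-in.)

```python
# All inputs below are assumptions chosen to land in the ranges discussed above.
calls_per_year = 2_000_000_000   # hypothetical call volume at billions of users
cost_per_human_call = 7.50       # hypothetical fully loaded human cost per call
cost_per_ai_call = 0.10          # hypothetical inference cost per AI-handled call
ai_deflection_rate = 0.90        # "the AI can handle 90% of that"

all_human = calls_per_year * cost_per_human_call
hybrid = (calls_per_year * ai_deflection_rate * cost_per_ai_call
          + calls_per_year * (1 - ai_deflection_rate) * cost_per_human_call)

print(f"all-human staffing: ${all_human / 1e9:.1f}B/yr")  # $15.0B, in the $10-20B range
print(f"90% AI deflection:  ${hybrid / 1e9:.1f}B/yr")     # $1.7B, roughly one-tenth
```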
So the net result is like,
I actually think we're probably going to go hire more customer support people.
The common belief that people have is like, oh, this is clearly just going to automate jobs and all these jobs are going to go away.
That has not really been how the history of technology has worked.
It's been, you know, you create things that take away 90% of the work, and that leads you to want more people, not less.
Final question.
Who is the one person in the world today who you most seek out for advice?
Oh man. Well, part of my style is that I like having a breadth of advisors, so it's not just one person. We've got a great team. There are people at the company, people on our board. There are a lot of people in the industry doing new stuff. There's not a single person.
But it's fun. And when the world is this dynamic, just having a reason to work with people you like on cool stuff, to me, that's what life is about.
Yeah. All right. Great note to close on.
Awesome. Thanks for doing this.
Yeah. Thank you.
I hope you enjoyed this episode. If you did, the most helpful thing you can do is just share it with other people who you think might enjoy it.
Send it to your friends, your group chats, Twitter, wherever else. Just let the word go forth.
Other than that, super helpful if you can subscribe on YouTube and leave a five-star review on Apple Podcasts and Spotify.
Check out the sponsors in the description below.
If you want to sponsor a future episode, go to dwarkesh.com/advertise.
Thank you for tuning in. I'll see you on the next one.
