Pivot - Mark Zuckerberg on the AI bubble and Meta's new display glasses | ACCESS
Episode Date: October 14, 2025. Pivot is off for the holiday! Kara and Scott will return on Friday, but in the meantime, we're bringing you the premiere episode of ACCESS. Tech insiders Alex Heath and Ellis Hamburger talk all things... Mark Zuckerberg, from the newest Meta Ray-Ban Display glasses to the beverage selections in the new Meta AI lab. Alex then sits down with Zuck himself ahead of the 2025 Meta Connect conference. Find ACCESS on YouTube or your favorite podcast app.
Transcript
Hi, everyone. This is Pivot from New York Magazine and the Vox Media Podcast Network. I'm Kara Swisher.
We're off for the holiday today, but we have a special episode from Access with Alex Heath and Ellis Hamburger for you today.
In this episode, Alex and Ellis talk all things Mark Zuckerberg, from the newest Meta Ray-Ban Display glasses to the beverage selections in the new Meta AI lab.
Alex then sits down with Zuck himself ahead of the 2025 Meta Connect conference.
Enjoy, and we'll be back in your feeds on Friday.
I mean, didn't you just tell Trump you were going to spend like $600 billion?
I mean, that's...
I did. Yeah, through 2028, which is...
That's a lot of money.
It is. And if we end up misspending a couple of hundred billion dollars,
I think that that is going to be very unfortunate, obviously.
But what I'd say is I actually think the risk is higher on the other side.
Welcome to Access from the Vox Media Podcast Network.
I'm Alex Heath, and Ellis, you are?
I am Ellis Hamburger, not your favorite sandwich, but your new favorite podcast host.
I have had a lot of people since I've been saying that I'm doing a show with you,
ask me if your actual last name is Hamburger.
It is verified.
Yeah, and you have @hamburger on X, which is a flex.
That's why I'm so scared to leave.
Please don't make me leave.
Ellis, why are we doing a podcast?
I feel like there's so many podcasts, but, you know, I've been getting that question a lot, too.
Yeah, Alex, I think we have great chemistry.
We've known each other for a long time.
We both, I think, see a different side of tech as it is today.
I feel like you're so well connected in big tech.
You love to schmooze with all the biggest founders.
I do love to schmooze.
Yeah, you do. I've got a good stranglehold on the AI startup arena with the work that I've been doing at Meaning. And I feel like we could just really bring something different. I mean, you've been in media forever. I started in media at The Verge, then went into the startup world for a while at Snapchat and the Browser Company. And so I think we have something interesting. I think we want to talk about the inside conversation, what people are really thinking and talking about, as opposed to just what's in the headlines.
Yeah, hopefully we could do that.
Yeah, I think we both just wanted to make a show we wanted to listen to
and didn't feel like a show like that existed.
And I hope we're going to make that.
I think we will.
And it feels really fun to be doing it with you.
And the way we're going to structure these episodes is, you know, it's a talk show and
interview show, pretty standard.
You and I are going to rap about some things happening in our world, things we think you're going to want to know about even if you're pretty plugged in, stuff that's coming soon or that just hit.
And then we're going to go usually to an interview, either with a big name.
This week we've got the one and only Mark Zuckerberg.
Next week we've got Dylan Field, the CEO of Figma, for his first pod since the biggest tech IPO of the year.
We're going to have an interesting mix of, I think, big names and then also early stage founders,
some of which you work with directly at Meaning, that we just think people should know about
that are going to be the companies that you're going to be hearing about in the next few years
or maybe already.
And we want to make something that feels good if you're tapped in, but also relevant
if you just want to know more about this crazy tech world we're in.
Did I get all that?
Yeah.
I think you got it all right.
I think the one other thing that's on my mind is I just want to have fun with this.
Tech has been a part of my life for so long, and while this industry is so often mired in, you know, often valid skepticism, pessimism, and uncertainty, I think there is still so much brightness and optimism and fun to have in building the future together. Obviously, we all want to be honest and hold each other accountable. But I think tech is culture these days. And I want to cover the whole thing holistically, versus just the last earnings call and what the latest ARPU and DAU numbers are.
Yeah, I can ask that stuff.
I mean, I'm pretty interested in the business and the strategy of these companies.
It's what I cover a lot at Sources, my new publication about the tech industry and Silicon Valley and AI.
I was previously deputy editor at The Verge, where I had a pretty successful newsletter called Command Line.
And now I'm an entrepreneur, I guess, like you, Ellis. I'm on the other side. And this pod is part of that. I hope it feels part of the same cinematic universe, I guess. But Sources is going to be, I guess, maybe where I also play a little more bad cop to the good cop of the mood we're trying to make on the show. We found out that Zuck wanted to do the first episode. That gave us a deadline I think neither of us was planning for. But it's been great. It's been a good kick in the pants to get this thing going. And, you know, usually we're going to do these interviews together. This week is a little unusual just because of the timing, and so it's just Zuck and I. But, you know, I think there's also an element
of the different perspectives we can bring to these conversations, right? Like you've got
this really interesting perspective working with a lot of these startups directly. And then I've
got my kind of more journalist POV of having met a lot of these leaders in big tech, especially
in the big AI labs over.
the years. Yeah, well, first I want to get at the real distinction between how you would
interview Zuck and how I would interview Zuck. I think you're going to be in one of those
fireside chat chairs. I want to go T-shirt shopping with him. Maybe head to the jewelry store after, look at some chains. That's the interesting combination, I think, that we represent, Alex.
Hey, Pivot listeners. I want to tell you about a new podcast from the Box Media Podcast Network
called Access with Alex Heath and Ellis Hamburger. It's a show about the inside conversation happening across the tech industry.
You may know Alex Heath from shows like Decoder and the Vergecast, and he's the founder of Sources, a new publication about the tech industry, and a contributing writer for The Verge.
And you'll probably only know Ellis if you worked in Silicon Valley yourself. He's the
former tech reporter turned tech industry insider working closely with today's hottest
startups. Their first episode features an interview with Mark Zuckerberg about Meta's
latest smart glasses, the AI race, and what's next for the social media giant.
You can find the Access podcast with Alex Heath and Ellis Hamburger on YouTube or wherever
you listen to podcasts.
So I got an early sneak peek of the conversation with Zuck, and I'll say this, he seemed
very confident, very comfortable, having fun.
What was the vibe you got from him during the conversation?
What was he wearing?
How was he feeling?
It's pretty crazy, you know, to go from sitting next to Trump to sitting next to Alex Heath.
It seems like he's got some swag these days.
I mean, yeah, the swag has been the Zuck arc for the last couple of years, I would say.
I mean, you notice the chain is tucked in this year.
He was still wearing it, but it's tucked in, which I don't know if that means, you know,
we've got business to do, which is maybe this new AI stuff.
I do think they feel pressure on these glasses.
They really want these glasses to be well received.
They're Ray-Ban branded.
And they're pretty wild.
They can do a lot.
It's not full augmented reality, but it's a pretty good heads-up display that can do texting, heads-up navigation, a bunch of other stuff.
And they have this neural band that controls the glasses that feels like legitimate sci-fi.
It's one of the coolest demos you will do.
And they announced it this week at Connect, and people are going to start seeing them out in the wild.
They're kind of expensive.
You know, they're $800.
I think these are very clearly an early adopter kind of product, prosumer, so to speak.
And I got to demo them last week, and they are really cool.
You cannot be a fan of tech like you and I are, Ellis, and not think these are cool.
Now, whether I'm going to wear them all day or they're going to start replacing the smartphone,
I think that's very much TBD.
But yeah, man, I was pretty impressed.
So what was the magic moment across all these different use cases that you tried with the glasses?
Because I feel like there have been so many ambitions for what they could be.
I mean, I was back at Snapchat in the early days when we launched Spectacles.
And all you could do was just take, like, a 10-second video or a photo with them.
We've seen people iterate on them over time.
I know you mentioned to me that it did a lot more than you expected.
Like, what was, just first principles, the best thing you tried?
There was this moment where I felt like I had super hearing.
It was a thing that only something like this form factor could do where they're calling
it, I think, live captions.
So I was in this room with a bunch of people and you could look at someone and everyone
was talking very loud.
And if you were looking at them, it would live caption what they were saying even if
they were six to eight feet away and it was super loud and you couldn't hear them on
your own.
And then they've added language translation to this where it can do live language translation back
and forth. That was pretty magical. The display itself is honestly pretty good. It sits to the
side of your eye, which is kind of unusual when you first try it, but it lets you see the picture
you're about to take or the video you're about to take, which is honestly something that feels
simple. But in that form factor of a camera on your face, it actually matters a lot. And the band has
this little gesture where you can, like, twist a knob in the air to zoom in and zoom out. So it feels like you're Tom Cruise in Minority Report a little bit. The crazy thing is, with the band, and I really can't describe how much of a game changer the band is for input, you don't have to just talk to it or wave in front of it to get it to do something. You can just do this very light tap gesture. You do a little pinch, and the display just melts away, comes back, melts away as you pinch. And I started getting that pretty fast. I wore the glasses for about an hour, took them outside, took the demo off
rails a little bit, asked the AI things that weren't verbatim what they told me to ask. It still
worked. You know, like place, you know, some glasses and plates on this table, make it look like a table
setting. It did it. You know, stuff like that. The display is like 5,000 nits or something. It's super bright. And it's very clearly designed to just be worn everywhere you go. And the battery lasts around eight hours. I bet it's less than that with the display going the whole time, and I didn't really get to push that to the limits. But overall...
I love a good nits conversation. Good nits. But how many is it? Yeah. Is it 8K quantum dot, or is it ocular occlusion?
Yeah. No, this is Gorilla Glass version three vestibule.
I think you made some stuff up there, but that's good. Well, so setting aside the
display, I feel like so much of AI is, you know, obviously the hardware has to be there, but how does
their AI work? I feel like especially in the age of AI and MCP and people trying to do things
agentically, so much of it is like, what sources is it using? What can it do? Can it plug into your other
services? Or was it just kind of like better Siri, at least at the outset? I would say better visual
Siri. It's still Meta AI, which obviously is not the leading AI. AI definitely took a backseat in the
demo. I think they probably wished that they had more AI features, but Zuck is doing this massive AI reboot that
everyone's been reading about. I asked him about that actually in the interview. We talked about
it and he drops some new stuff, I think, about the lab that was pretty interesting. I don't
want to give it away. But I think they know they're behind on AI, but they have the bare minimum.
And no one has a product anywhere close to this in terms of the form factor, the price, the display,
the band, and the ability to, like, do texting with the band. Zuck said he types 30 words per minute with the band, which I found...
How does that work?
It's like you're scribbling with your finger.
Yeah, it's like autocomplete with slight wrist and finger gestures. Like, you can write almost on your leg or whatever, and it autocompletes it. And he says he's doing 30 words per minute, which is impressive. I think it's really more for just shooting off quick texts, which I did, and it worked. But yeah, it's a wild device. It's definitely something that, if you are listening to a show like this, you're going to want to try.
So, watching Zuck for a great many years, I feel like he's been trying to build and own the next platform forever. You know, he's tried phones, he's tried VR, trying to build the metaverse, now AI glasses. Alex, you are a betting man. Do you bet that this is the one where the platform works? Because it is such a fine line, right? Like, Apple does appear to be uniquely good at combining the hardware and the software, because everybody's competing on the
hardware. I do think at the end of the day, the software is going to be a pretty big deciding
factor. Does it have the apps you want? How does, for example, visionOS feel to you
compared to the OS that they're kind of building across AR and VR and this and that? Is this the
platform where they win? I think it's going to take a few years for this to really get mature and
become something that is compelling to a lot of people, not just early adopters.
You can totally see the path to it, though, when you try this, I think.
And I felt that way when I tried Orion, their first pair of, like, full AR glasses that are not a consumer release, last year. So when I say Orion in the interview, that's what I'm talking about.
Yeah, I think compared to the Vision Pro, it's definitely not as full featured.
But it's a different use case.
It's not a headset. It's not a pair of goggles that fully blocks you off from the real world.
I mean, these are chunky, but, like, in the right lighting they could pass almost as normal glasses.
And that's their main job.
I mean, their main job is to be something you can wear around.
And the tech is supposed to be supplementary to that.
And I think there's a lot of work to do to make the tech actually feel like it fades into
the background appropriately.
I think meta is very motivated to figure it out.
You're right.
They really want another platform.
And they're sinking billions of dollars into all of this because they, like pretty much every big tech company I know of, think that this combo of glasses with a display and AI is maybe going to be the next smartphone.
So I have one more question for you before we get to the feature presentation, an evening with Alex and Zuck. We like to talk about people here on the Access podcast, even though
this is the very first episode. So every time that I feel like Meta goes through a different transformation, we hear about how that team has been moved next to Zuck's office.
Yeah.
How does that work out over time? Do you just kind of find yourself in the inner orbit, and then with each new tech trend you get a couple inches away, kind of like tectonic plates? Who's in the inner circle right now? Who has been pushed to the outer orbit?
Yeah, the inner circle right now is the new
AI lab, which I think I'm the first outsider to see physically. When I was there for my demos,
they walked me in and I was getting a lot of side eye, I would say, from the researchers in there
who are like, who is this guy that's looking at our Llama algorithm written out on these whiteboards? Luckily, I don't know math, so their secret is safe with me. But yeah, the lab is sitting there in this kind of special area with Zuck.
And as I say in the interview, they were cranking.
I think I saw some shoes off, a lot of code happening.
And it's very clear that meta is rebooting this stuff.
And I think this device is part of the reason why.
I think they know that AI is the killer feature for glasses like this.
And they want to be on the frontier of that.
Most importantly, what are they drinking?
We got Red Bulls, Monster Energy, Diet Coke is making a comeback.
What was on the tables?
I didn't catch the drinks.
That would have been a good catch.
But, you know, these big tech campuses, they have just about everything you need.
You know, you never need to leave.
It's like Hotel California.
All right, man.
Well, I guess we'll get into the combo with Zuck here.
Mark, I don't think you can be into technology and the cutting edge like I am, try these new display glasses here in the middle, and not think they're really cool. And I want to get into what they do and why you're
building them. But can you kind of just initially set the stage for us and explain why you all
are doing a display in this form factor? Because you've had the AR glasses and you have the glasses
that don't have displays. So why do something in the middle here?
I mean, we're working on all kinds of glasses. I mean, my theory is that, you know, at a high level, glasses, I think, are going to be
the next computing platform device. I think that they're great for a number of reasons. One is
that they don't take you away from the moment, so you can stay present in the moment, unlike with
phones. I think that that's a big deal. They're also basically the best device for AI, because
it's the only device where you can basically let an AI see what you see, hear what you hear, talk to you
throughout the day, and then once you get the display, it can just generate a UI in the display
for you. So that's great. And then the other thing that glasses can do is it's really the only form
factor that can put holograms in the world to help you seamlessly blend the physical world
around you in the digital world, which, I mean, I just think it's a little crazy that we're here.
It's 2025. We have this incredibly rich digital world, and you access it through this, like, five-inch screen in your pocket. So I think these things are going to get blended together.
So that's glasses at a high level. But then you get into, okay, well, what do people want with
glasses? And glasses are very personal. It's not just like a phone where everyone kind of is okay
with something that's pretty similar. Maybe you get a different color case. People are going to want
a lot of different styles. People are going to want different amounts of technology depending on
whether they want a frame that is on the thinner side or bulkier side or whether they can afford
more technology or less. So I think that there's just going to be this whole range of different
things. From, you know, simple glasses that don't have that much technology in them, maybe the ability
to talk to AI and have it see what's going on around you, all the way up to full augmented reality
glasses that have kind of wide field of view like the Orion prototype and everything in between
and different styles, right?
So we started with Ray-Ban, which is probably the single most iconic and popular glasses design in history.
And now this year, we're adding Oakley.
So we did the Oakley Meta HSTNs this summer.
And then at Connect, we announced these guys, the Oakley Meta Vanguard, which we'll get to in a bit.
This is, I think, what people kind of had in mind when they thought, when they heard that we were doing something with Oakley.
Yeah.
It was more this, but these are dope.
Yeah, I mean, it's, I mean, they look great.
Yeah.
They're great for performance.
And we'll talk about that in a minute.
But the deal is, I mean, people are going to want all kinds of different things.
So, um, there's going to be this whole spectrum.
And one important technology is obviously going to be getting, um, getting a holographic display.
And then within that, there's a whole world of options too.
You could have a small holographic display that can just display a little bit of
information.
You could have a wide field of view that can basically overlay kind of avatars and deliver a sense of presence.
Which was Orion last year.
That's Orion and that's kind of what we're building towards in the consumer version of that.
So there's a number of different points on the spectrum.
And what we're doing here with the Meta Ray-Ban Display, I think, is a good kind of starting point, where it's not a tiny display. It's actually quite meaningful.
You can read a whole text thread.
You can, you know, watch videos.
You can do a video chat.
You can watch videos that you've taken.
I guess you could even watch reels on it if you want.
So it's a meaningful size display, but this one isn't really meant for putting objects in the world.
It's more meant for just showing information.
And so anyway, we've been working on this for, I mean, all the glasses at Meta we've been working on for more than 10 years at this point. So, you know, we have these moments along the way where we get to, like, show a new technology that I think is pretty different from what others are working on, which the display in the glasses is one thing, but the Meta Neural Band as the way to interact with it, where you just get these, like, micro gestures in your hand and you're controlling what you're saying, it's just wild.
The band is wild. We got to talk
about the band, but I guess the thing that surprised me the most in my demo of the glasses last week
was just how much they can do, frankly. I mean, I've been reporting on these for a while as the buildup has been, you know, coming for them. And I thought they would have a little bit of a more limited use case to start,
but they can do quite a bit.
And I'm curious, what was the goal of their overall functionality?
What are you trying to achieve with this?
Are you trying to replace the phone or just get people to use it less?
I mean, what's the big picture idea of, like, this is what it can do?
Well, I always think about everything from a communication and connection angle first, right? Because that's kind of the legacy and the DNA of the company. So probably the most
important thing that I've focused on wanting to get them to do is be able to wear them, get a text
message, respond really quickly and subtly with your hand, if you want, with like, we're having
this conversation now. And I could, I mean, like, we're talking about, like, this level of hand motion.
Yeah. Like, I'm making, it's like nothing.
I thought you might wear a pair in the interview, and then I would.
I could. I could have gone. I mean, I'll put them on.
Another thing about them is you can't tell they have a display, even when they're in their transitions and even when they're sunglasses. So that's actually an important part of the technology: light leakage is a feature of some types of waveguides. And so basically you get these tradeoffs where you want them to be very efficient. There are waveguides where you have to pump just a ton of light through them in order to get anything to show up. But then some waveguides have just different artifacts, and usually in a bad way. It's like the light will catch them and you'll see all kinds of, like, rainbowing or something. Another artifact that we think is pretty
bad because it's a privacy issue is if the person who you're looking at can basically see it.
Yeah. Well, the very worst version of it would be if they could see what you're seeing.
But I think another version that I think is still not that socially acceptable is if they can see that you're looking at something at all.
So I think that one of the things that we're really proud of in the design here, and that we put a lot of energy into, is that the displays are super bright to you, and the person that you're looking at cannot even really tell that you're doing anything. And that's an important thing for it to be socially acceptable, right? I mean, we also designed them so that when the display comes up, it's offset slightly to the side. We don't
want to block what you're doing. An important principle for the glasses is the technology needs
to get out of the way. It's, I mean, fundamentally, it's like, you know, this is something that
you're going to be wearing on your face for a lot of the day. I mean, we designed these specifically to be, you know, both indoor and outdoor, so that with the transition lenses they work really well as sunglasses outdoors. But the reality is that most of the day you're not going to be using the technology, or at least the visual part of it.
So, like, maybe you'll be listening to music or something.
But we want, you know, when you are interacting with something, it should show up.
It should kind of be off to the side.
If you don't interact with it, it needs to get out of the way really quickly.
That's, like, a really important principle of the whole technology.
You've got this wake gesture with the band where you can just tap quickly to make the
display go away.
Yeah, very subtle.
And again, I want to get into the band. It is wild in its own right. Um, the thing that really stood out to me from my demo of these was some things that you only could do in a form factor like this. Because, I mean, the texting is cool, but there's this live captions thing where I was in a room with a bunch of people and they all started talking really loudly, and if I just looked at someone, it would live caption what they were saying.
Yeah, tune out everything else.
It's like super hearing. And then you're also doing that with language translation.
Yeah, so you can do real time. I mean, ideally,
both people are wearing the glasses to get the full experience,
but you don't actually need to have the other person wearing them. You could just hear what they're saying in your language, or see it.
Yeah.
That's pretty wild, and that speaks to what just this form factor could do. I'm curious, beyond that, are there other things that this form factor uniquely can do, in your mind, that a smartphone can't?
I mean, all the things around AI where you have an AI
that basically you wanna have context around what's going on with you, right?
So, like, if you want an AI that can see what you see, hear what you hear, can just kind of talk to you passively throughout the day, and then can show you information contextually, that's just not something that a phone can do.
I mean, I guess technically you could walk around holding a phone like this, but you can't really do that.
No one does it.
Yeah, you can't.
Those demos have existed forever, and I'm always like, I don't want to hold my phone up.
So I think that's actually going to be the main one.
And I think all the live AI stuff, it's interesting. It takes on a different, like, a different feel.
So we have live AI in the Ray-Bans without a display too, the kind of classic Ray-Bans.
And for that, it's audio-only live AI.
So it's really helpful for when you're doing something kind of by yourself.
If you're cooking or something, then, you know, it's watching what you're doing with the video, and you can ask it questions about what you should be doing, or it can give you tips.
That's all great, but it's not really useful when you're in another kind of conversation. And the thing that I've observed, with the thought experiment I've run but also just wearing these, is, you know, we go through the world, we have dozens of conversations a day, and in every conversation I usually have like five things I want to follow up on. Maybe it reminded me that I should, you know, go do a thing, or it reminded me of a person who I wanted to talk to. Or maybe I'm talking to someone and they, like, assert some assumption that doesn't quite sound right, and I want to fact-check it or, like, gut-check it.
These are all things that I think with live AI, you can have this AI that's sort of running in
the background and that goes and often does work for you, and then can bring that context back, whether it's asynchronously, kind of offline, when
you're done with a conversation or sometimes like when you're in the middle of a conversation,
it's just useful to have more context right then. How have you been using these? Someone on your
team was saying you text a lot through them. Yeah, I, well, I'm a texter. I like run the company
through text messages. So when you were asking about what, you know, what can you do that you can't do
on a phone. I mean, I guess you can, you can obviously text on a phone and we all do it dozens of
times a day, but I think that there's like a lot of times where it's just not socially
acceptable to send a text message. And so like, let's say you're in the middle of a conversation.
You want to like ask someone a question or get some information. I mean, I have this like all
the time. I'm like having a one-on-one conversation with someone and I like, oh, I like wanted to
ask someone this or like I wanted to ask someone else a question to like pull some context.
I can ask this person what they think about it, but I'm not going to, like, pull up my phone in the middle of a conversation.
With this, it's actually just super quick. You can just, like, send a message in, like, five seconds, get the context back. It actually just really improves the conversations that you're having.
I find, um, to me this is the one thing that I think is basically better about Zoom than in-person conversations, is that you can sort of multitask a little bit, right? It's worse in basically every other way than kind of an in-person physical conversation. But the one thing that I think is useful is you can go from having a conversation
to basically asking someone else a question. It's not necessarily distracting. It's additive,
right? Because otherwise, your option is like, all right, you have a conversation, then you go
check in with someone else. Then you have to go back and call the other person back and have a
whole second conversation. So it just short-circuits these things all the time. And now I think this kind of brings the best part of that into physical conversations, where you basically feel present in the conversation but you can pull in whatever information you need. It's super, super helpful.
Yeah. A real, like, holy shit thing about this product is the band. It was with Orion, the demo you guys had last year too, and I thought at the time, there's something special with this band. And you're calling it a neural band, is that right?
Yeah, the neural band. Because it's a neural interface. It's picking up nerve activity. So it feels like it's reading your mind. It's not doing that, it's not reading your mind. What you're doing is you're sending signals through your muscular nervous system that it actually picks up before you even make movements. It basically picks up these micro gestures, and that allows you to control your interface no matter where your hand is. So it's not doing, like, hand tracking visually or anything like that. You can have your hand by your side, you can have your hand behind your back, whatever, in your jacket pocket, and it's fine. And the gestures are really subtle, right? So it's like, this is all I need to do to bring it up. I mean, this brings up Meta AI.
I really like the music one. Did you try that one?
I didn't.
Oh, so when you're listening to music, the way you adjust the volume is you just kind of...
Oh, the dial.
Yeah, you pretend that there's a dial and you just turn the dial.
I did that with the zoom in something.
Yeah, you could do it on photos too.
That's, like, it feels like Minority Report when you do that, like in real life. It's a good
interface. Yeah, I know what you mean. But yeah, it's like, not the weird part of Minority Report. It's just, it feels like sci-fi. And I'm wondering, why this band? Like, why did you land on this band as the input for this? Because people have been trying to figure out input for glasses like these forever. And it's usually voice or hand gestures or
something, but it's like, I'm not going to be in the subway like gesturing out into space.
So, okay, I think that those are going to be useful too, but I don't think that they're complete, right? So voice is obviously going to be important. People talk to Meta AI, they do voice calls, you know, you do video chats. So
voice is going to be a big thing. But I think the reality is that a lot of the time we're around
other people. And the use case that we really wanted to nail, which I actually think is the most frequent and most important use case that we do on our phones, is messaging. So if you want to nail that, what you need is the ability for a message to come in, to not be distracting,
not be like center in your field of view, but just be there. And then you need a way in whatever
situation you're in to be able to quickly respond in like five or ten seconds in a way that is not
interruptive to your current interaction and is socially acceptable and doesn't feel rude.
And then you get to, okay, hand gestures. I mean, yeah, I think that there are going to be useful
things for, I mean, in Minority Report, he's doing a fair amount of that. But for gaming and things
like that, I think you'll do that. But like you said, you're not going to walk down the street
like that, right? I mean, that's kind of pretty goofy.
Yeah, it looks weird. Your arms get tired.
You know, it's much more the former than the latter in terms of the reason why it doesn't work. But the latter is also true. So we needed something that was basically silent and subtle.
So the questions are, there are a few options for that.
One that people are working on is basically whispering, right?
So you can like subaudibly even pick up on the sound or you can have some camera that
can like look at your mouth and do like lip reading.
That's still pretty weird in a meeting.
I agree.
Yeah.
I agree.
So it didn't pass my bar for kind of subtlety, even though it is silent, in theory. So I think you need to go for the neural interface. And the other thing that's
nice about the neural interface is you can get really high bandwidth input too. It's not like, you know, with smartwatches today, where you can basically move your arm and it can pick up, like, a gesture or two, but it's a very low bandwidth; there aren't that many things that it can do. You need something that can basically be reading the muscle signals, so that you can control it very subtly. And this can do that. I mean, already, you know, we're not that optimized, and I think we're going to get the autocorrect a lot better, but I'm already, I think, at around 30 words a minute typing.
Really?
Yeah. No, it's... yeah.
Man. I mean, how advanced do you think the band gets in its current form factor, in terms of what it can do?
Um, I think quite a bit. I mean, basically today, well, I guess you have the sensors, which can pick up the signals from your muscles. But then on top of that, it's basically just an AI machine learning problem to be able to pick up what you mean by the thing. And right now, it's not particularly personalized. So you get it out of the box, and it needs to work with certain gestures, and you've never used it before. So it works with these, like, you're doing kind of, I mean, this isn't a big gesture, but this is much bigger than what it needs to be in the future. And then for kind of the neural text entry, you can basically think about it as if you have a mini pencil and you're just, like, writing out the letter. But over time, what should happen is that the AI learns your pattern for how you, you know, write each letter, and you should be able to make increasingly subtle and invisible motions that it basically learns are your way of doing that letter or that input or whatever. And I think the future version of this is that the motions just get really subtle and you're effectively just firing muscles in opposition to each other and making no visible movement at all, and it picks it up.
So, personalized autocomplete via your wrist, basically?
Yeah, so, super fast, because if you're not moving, there's no latency from actually having to physically move and then retract after making a motion. So I think the upper bound of that is very
high. The other thing is, that's just for typing, which is one kind of modality. But there are all these different dimensions where you can use it to input into things, and you could control a hand in space that is, like, operating a UI, right?
And it's like there's all kinds of different things that it can do that I think will just be
really interesting to get into over time.
And, you know, we basically, we invented the neural band to work with the glasses, but I actually
think that the neural band could end up being a platform on its own.
I agree.
To basically just interact with all of your electronics and devices and do all kinds of
different things once we kind of get it to be a little bit more mature.
So you could have an API for it that could theoretically plug into a smart home or something
like that.
That'd be wild.
Yeah.
Yeah.
The price point for these is also lower than I expected.
It's only like 800 bucks.
Yeah.
Who are these for? Like, is this an early adopter thing? Like, you care about the cutting edge, this is for you? You're not making a ton of these, right? I assume these are not going to be, like, a massive thing for you.
It's more to see how people use this technology.
I mean, I think that this is going to be a big part of the future.
I mean, my view is that, you know, there are between a billion and two billion people who wear glasses on a daily basis today for vision correction. Like, is there a world where in five or seven years, the vast majority of those glasses aren't AI glasses in some capacity? I think it's kind of like when the iPhone came out and everyone had flip phones, and it was just a matter of time before they all became smartphones. I think that
these are all going to become AI glasses. And the question is, all right, well, there are eight billion
people in the world, not, you know, one to two billion. So are there going to be a bunch of other people
who also wear glasses? I would guess yes. I mean, there are a lot more people who wear sunglasses
some of the time. So, yeah, I mean, I think it's a big category. And there's a lot going on
here. I think what you see when you're building products is that V1, you build what you think
is going to be great. And then you get a lot of feedback, but you also didn't get everything exactly
perfect in V1. So, you know, V2 and V3 just end up a lot better, right? It's not a coincidence, I think, that, you know, with the first version of the Ray-Bans, Ray-Ban Stories, we thought it was good. Then when we did the second version, Ray-Ban Meta, I think it sold five times more. And it was just refined, right? So I think that there's going to be some dynamic here where you have the first version, you learn from it, and the second version is just a lot more kind of polished. And the software gets polished too, not just the hardware, and that just kind of compounds and gets better and better.
And full AR that fills your vision, that's still coming too?
Yeah.
I mean, we're working on all the stuff, and we want to get it all to be as affordable as possible.
The reality is that the more technology you want to cram into the glasses, the more expensive they are, because you're putting more components in. We also want the glasses to be as thin as possible, and that's a process of miniaturization that happens. And similarly, the more technology you cram in, the harder it is to make them smaller. So as much as we can miniaturize this technology, it will always be true that if you put in half the technology, you'll be able to make even thinner glasses.
And then some people have different aesthetic preferences. I mean, fortunately, you know, thick glasses are kind of in style. But some people want thinner ones, like yours. You can't fit many electronics in those.
Well, now I don't feel cool.
Yeah. Well, you may have to rethink your aesthetic choices in the future. But yeah. On pricing, we do work on getting it to be as affordable as possible. And, you know, our view is that our profit margin isn't going to come from a large device profit margin.
It's going to come from people using AI and the other services over time.
Because you'll pay a subscription for the AI or something.
Or, yeah, or use it and do commerce through it or whatever, whatever the different things
are that people do.
So we're not like, you know, a company like Apple that is that whose primary or a large part
of their margin comes from having a large margin on the hardware.
But in general, yeah, we try to make it as affordable as possible.
And my hope is that if we build another one of these, hopefully it's even more affordable. Or the other choice that we can make is to put even more technology in it and keep the price point there.
But I think you're going to have a few different price points.
There's going to be the kind of standard AI glasses that don't have a display.
And I think those will sort of range between, you know, $300 and $500 or $600, depending on, you know, the aesthetic and kind of how high fashion it is.
Maybe even more than $600 if you get something that's really high fashion, but that's kind of the range that we've seen so far, from the early Ray-Bans to some of the, uh, Oakley Meta Vanguards with, like, all the kind of, you know, custom stuff in it, like optical lenses with a prescription. Then there's a category like this, with kind of a display that isn't a full
field of view AR display. And I think that that's going to be, yeah, on the order of $1,000,
like maybe you get it a little better, maybe it's a little more, but you can call it in that area.
And then I think when you get to the full AR glasses, that'll be somewhat more to start. And
I think people will just want to have the whole portfolio. And then the goal over time will be to get as much of that technology into as affordable and as thin a form factor as possible, so you can have as many styles as possible.
And so you've got these new Oakleys, and your deal with EssilorLuxottica means you can do other smart glasses with all their other brands, right? So I think that implies that there will be future brands that have Meta tech.
We'd love to do it. Yeah.
Yeah. So you see it as, like,
building out a kind of constellation of all these different form factors and price points.
That's the goal. Yeah. Yeah. What about all these other
AI wearables that aren't glasses that are happening? Like, there's the Friend pendant, I'm sure you've seen. There's all these, like, display-less non-glasses devices. Sam and Jony Ive are apparently working on something.
There's a lot of interest in this.
And I'm curious, since AI has really taken off in the last few years, is this something where you see opportunity there in addition to glasses, or is the main focus still glasses?
Well, our main focus is glasses, because I think glasses are the best form factor for this, for all the reasons that we talked about before. I think that anything else that you have to
kind of fiddle with takes your attention away from the physical world around you. I don't think
that there's any other form factor that can see what you see, hear what you hear, talk to you throughout
the day, and generate a UI in your vision. And then there's the whole augmented reality part about
blending the physical and digital world. But I don't know. I mean, people use different electronics.
So, I mean, I'm not going to, I mean, I certainly don't think that it's like in the future, all eight billion people in the world are doing the exact same thing. I mean, some people use their phone more. Some people use a computer more. Some people use an iPad instead of a computer, right? Some people, you know, primarily, you know, watch videos on a TV. Some people watch videos primarily on a phone. So I do think that they're going to be different things. My guess is that glasses will be the most important.
I think something like earbuds is kind of interesting too.
I mean, Apple clearly is like by far the leader on that with AirPods.
I think partially because they did a good job and partially because I think they gave
themselves some kind of unfair advantages with how they bundle it and couple it and have
technology that works with the phone that I guess they're now just starting to open up,
which is great.
But for a while, I think it just made it impossible for anyone else to build anything
like the AirPods.
Watches, I think, are interesting in some ways, too.
So you're not a fan of the pendant thing,
the pendant trend that's kind of starting right now?
I mean, is it a trend?
I don't know.
It's early.
There's a lot of startups doing this stuff.
I think it's an interesting idea.
I don't want to be too dismissive.
My point is that, I actually think, my guess is that different people are going to like different things, but that glasses are going to be the most popular.
There was no new Quest this year at Connect, and I'm curious how you're feeling about the Quest these days, and VR and mixed reality generally as a category. It seems like glasses have really taken off. There's obviously a lot more Quests out in the world. They've sold a lot more. But I'm curious about, you know, there not being a new one this year, and also just how you're feeling about it these days, like the category.
Yeah, no, I mean, I think we're making progress on it. I mean, this year, what we focused on was the Meta Horizon creation tools. So we announced Meta Horizon Studio and Meta Horizon Engine, which are these
basically foundational tools for creating worlds and content using AI. And that's going to go
towards making it so that people can create a lot more content in VR. And all that stuff, I think, should translate over to AR too.
I think a lot of this content you'll be able to have there.
I mean, glasses that are, that are see-through may not be quite as immersive as VR,
but you can deliver a lot of the same kind of holographic experience.
And then I think a lot of these things will also end up showing up on phones, right?
I mean, you know, I think that there's this huge opportunity with AI where, you know, you're browsing your feed on Instagram and Facebook, and it's like each story should be its own world that you can jump into. And you're starting to see some of this with some of the AI models, some of the stuff that Google has put out recently, for example, like interesting glimpses of where that could go. But I think that
there's this real sense that the whole stack for how you create those kinds of immersive
experiences needs to get rethought. It's not just going to be people doing things in the same way
that they've created 3D video games historically. I mean, that's a very, very intensive
process where the tools are, like, very...
Yeah, high barrier to entry.
Yeah. I mean,
so my kids are kind of into programming and into making things. And, you know, we try to,
you know, build different 3D worlds. And I think some of the stuff is just like intractable for them.
Right. I mean, they're still kids, so yeah.
But if you abstract it to, like, a prompt, I mean, then it becomes, I think...
Yeah. So with the, um, Meta Horizon Studio, which I've been playing with with them, and, you know, obviously this isn't primarily designed for, like, an eight-year-old to be able to use, but my bar is, if it's enough that I can kind of make something good with my eight-year-old, then, um, that's pretty cool. You really can create all these interesting things, right? You can define what the world dynamic is, like what kind of world you want it to be. If you want to put stuff in the world, you can do it. If you want to texture things differently, if you want to change the skybox, you can do that. So I think it'll be a very different way of creating things that's fundamentally a lot more accessible, which will then unlock a lot more creativity, and there will just be a lot more interesting worlds and things to do. And that
I think is going to be important, not just for VR and AR, but I think it's going to unlock all these
experiences that billions of people will probably first see on their phones at some point.
So that's what we're doing with the Meta Horizon Studio work.
It's this kind of agentic AI flow where people at different levels of sophistication can go in
and create really interesting worlds and immersive environments.
And I think that's neat.
And then we paired that with Meta Horizon Engine, which is basically this custom graphics rendering engine that we've been creating for two years now. It's this project where we've had to build from the ground up, because, you know,
previously we were using Unity, which is great, but it's not really built for this use case.
I mean, most games, you know, you load a game.
It takes, you know, 20 seconds to kind of get into the game, which kind of makes sense, because you're loading this whole 3D world that you now need to be able to interact with.
But we want the worlds that you can interact with to feel more like jumping between two webpages
or jumping between like two screens within a native app that's like really fast.
So the whole, okay, it has to take 20 seconds to, like, page this whole new world into memory, that was not going to cut it. So we basically built Meta Horizon Engine
from scratch, to be this graphics engine that can support rendering these kinds of worlds
with high concurrency and the avatar system, the photorealistic avatars and all this,
with just a few seconds of load time. So it's more like a website, or just a transition in an app.
And that's the kind of thing that'll make it so that when you're in VR, you can jump between
worlds easily. It's not like some big commitment or some big decision. You can just feel free to explore
because, you know, it's not like you're going to have to wait 20 or 30 seconds for the next
thing to load. You walk through the portal. You don't like it. You walk back through the portal in the other direction. And similarly, for things like within Facebook or Instagram, having the ability
to kind of see a post and jump into a world, that's something that needs to have very low friction
to do. So the Meta Horizon Engine is this kind of core piece of technology. So yeah, I'd say on the metaverse side, this year's announcements at Connect were more about kind of the software foundations than hardware.
But you're still committed to the hardware?
Yeah. I mean, the way that we do
the hardware is, we don't have it planned so that there's a new device every single year.
Sure.
We have multiple device lines. There's sort of the higher-end one where we introduce some new
technology and then we try to get it to be as affordable as possible. So we did Quest 3. Then we did Quest 3S.
But it's not like every year there's one.
It's basically, you know, most years there will be a new one of those two.
And then sometimes there's like an off year where we're pretty much just tuning the software to get ready for the new paradigm.
Got it.
Okay. So Quest is still going.
Yeah.
We're first.
Okay.
Yeah.
I've been saving this.
I've got to ask you about this.
You know there is a tremendous amount of interest in your AI strategy right now.
Unlike anything I've seen in the tech industry, honestly.
Well, AI overall.
AI overall, but I think like what you've done over the summer
with the hiring and the super intelligence, you know,
mission that you put out and all of that.
Yeah.
And we've been talking about AI as it relates to the hardware
throughout this conversation.
And, you know, AI was a part of my demo.
It wasn't, I would say, like, front and center.
And it seems like a lot of the work you're doing now
is to get ready for when it will be.
And I'm curious, you know, when I was here actually
for the demo, I got to see the pod of the new lab and see them at work, and they're in there cranking, like, you can tell. And, um, I would love to know, maybe we can start here, like, when you decided, I need to change things, and why you decided to go about it the way that you did. Because I think that's the thing where people were like, whoa, this is crazy. Yeah, I think if you're on the inside it doesn't feel as crazy, because, you know, the talent market is very small. You know, it's kind of rational if you look at the numbers. But just the strategy.
Like walk me through like when you were like, okay, I want to make a change, this is what
I want to do.
Walk me through that.
Yeah.
I mean, this is an area where I just think AI and superintelligence are going to be
the most important technologies in our lifetime.
I think it's so important that it sort of demands its own hardware platform, which is a big
part of why I'm so excited about glasses, because I think glasses are going to be the best
kind of hardware device category
to provide personal super intelligence to people.
But I think AI is just this incredibly profound thing.
It's going to change how we run the company.
It's going to change how all companies run.
It's going to change how we build products.
It's going to change what products are possible.
Change how creators do their work.
So change the content that's possible, the mix of content,
all these different things.
So I think being on the frontier there is really critical if you want to continue just doing interesting work and pushing the world forward.
I think just like with mobile, you know, if you didn't invent the mobile phone, you could still do interesting work building apps.
But I do think at some level you can do even more interesting work if you can both pair the software with the hardware experience.
Sure. So, um, so we are definitely committed to being at the frontier in building super
intelligence. I think it's going to be the most important technology, like I just said. And,
and because of that, you know, we're very focused on making sure we build a leading effort. So over the last few years, we stood up an effort that was improving very quickly. Llama was a good initial academic project. Llama 2 was a kind of good initial version of that as an open source release. Llama 3 was a big improvement over Llama 2. And then Llama 4 introduced some important improvements over Llama 3 too. But I didn't feel
like we were on the trajectory that we needed to be on to basically be at the frontier and be pushing
the field forward. So, you know, I think every company at some point goes through periods where you're not on the trajectory that you want to be on with something. And these are decisions that you get to make in your life or in building a company, where the real question is not, is there going to be a moment where you feel like you're not on the track that you want to be on? It's what you do in that moment. And so I just decided
that, you know, we should take a step back and build a new lab.
And, yeah, I think part of that was informed by the shape that I thought the effort should take.
We have this real focus on talent density, right?
And the idea is that you really want to have, this is like a group science project, right?
So you want to have the smallest group of people who can fit the whole thing in their heads at once.
And there's not many people who can do that.
No, but you also want the group to be as small as possible.
Right.
Right.
So there are some problems that we're working on around the company where, like, you can
just have more people work on them.
And even if the marginal productivity per person declines, you can just keep on scaling
the net productivity of the effort.
You know, our feed and ads recommendation is an interesting example of this, where we have a lot of people just testing different improvements to the systems. And if one guy's sitting next to you, and that guy's experiments don't work that well, it doesn't necessarily slow you down that much.
But I think building these language models is not that way, right?
It's like, it's a small group effort.
You want the smallest group of people that can keep the whole thing in their head and do the best work that they can.
So each seat on that boat is incredibly precious and in high demand.
You also don't want a lot of layers of hierarchy.
Because when someone gets into management, their technical skills kind of start decaying pretty
quickly. Even if they were an IC, a researcher, a few months ago, now if they're spending all their time helping to manage, then after six months or a year they might be less in the technical details than they were before. So I think that there's this huge premium on just having a relatively small, extremely talent-dense effort that is organized to be quite flat.
And you're very hands-on with this team.
Well, yeah, I mean, in the sense that, I mean, I'm not like an AI scientist.
Yeah.
But the thing that I...
They're sitting near you.
I mean, it's clear that this is like the priority.
Yeah.
So the thing that I'm focused on is, one, getting the very best people in the world to
join the team.
So I've spent a lot of time just meeting all of the top researchers and folks around
the field and getting a sense for who I think would be good here, and who might be at a point in their career where we can give them a better opportunity.
That's one piece.
Another thing that I'm very focused on is making sure that we have significantly higher compute per researcher than any other lab,
which I think we are; we're just way higher on compute per researcher than any other lab today.
And as the founder and CEO, and because we have a strong business model that can support this,
I mean, we make, like, you know, a lot of profit.
A decent amount, yeah.
Yeah, it's a reasonable amount.
You can just call up Jensen and be like, more GPUs, please?
It's not that simple.
It's not that simple.
And I normally text him with my glasses. But no, there's a whole supply chain that goes into it. The GPUs are one part of it, but then you also need to build data centers and get energy and get the other pieces and get the networking. And, yeah, the bottom line is we're very committed to that and doing what we need to do to make sure that we have leading levels of compute. We talked recently about how we're building this Prometheus cluster, which I think is going to be the first kind of single contiguous gigawatt-plus cluster for training that has been built in the world. We're building this Hyperion data center in Louisiana that I think is going to scale to five gigawatts over the coming years. And several others of these, what we call Titan data centers. They all have different Titan code names, and each is going to be, you know, one to multiple gigawatts.
And that's a significant investment.
I think it took a fair amount of conviction.
So I think a bunch of conditions need to be met. Basically, you need to have a business model that can support it. You need to have a CEO who believes in this very deeply, who is just willing to make that kind of investment for it. And then you need to have the technical ability to actually go build the things and bring them online. And I think we're one of, if not the only, company in the world that meets all of those criteria. So, yeah, other people do other interesting things too, but I think that this is going to be very interesting.
The other principle, though, that we have for the lab is, you know, it's split into different
efforts, right?
There's the lab that we call TBD, which it...
That's what I saw.
Yeah, that's the research lab.
TBD was the placeholder name, but then it stuck because it's kind of a good vibe, right?
It's like, all right, it's like a work in progress type vibe.
Then we also have applied research and product in Nat Friedman's group.
And that team is working on a lot of research that goes directly into the products.
So things that may not necessarily directly be on the path to superintelligence, like speech that passes the Turing test and things like that.
But are important for the products, nonetheless.
So we're working on all those things.
And the research effort, the TBD effort is truly a long-term research effort.
So one of the principles that we have for the lab is just no deadlines, right?
So people were asking, okay, when are we going to ship the model? This is a strategy, and it's also the values that we're trying to put into it: I mean, all these researchers are very competitive.
They all want to be at the leading edge.
They know the industry is moving quickly.
They're going to put a ton of pressure on themselves.
Me telling them that something should get done in nine months or six months or whatever isn't going to help them do their job. It's only going to put another artificial constraint on it that makes them sub-optimize the problem.
And I want them to go for kind of the full thing.
It's like we're going for trying to build AI that can improve itself responsibly, where we're basically building these models that bring all these modalities together to deliver the kinds of experiences that we're talking about.
And, yeah, I mean, I think me putting a deadline on that is not going to be helpful. And that's the nature of research, right? It's not engineering. Engineering is when you know how to do something and you need to put together a complex process to build it. Research is when there are several unknown problems. In AI, I don't even think we have a sense of how many unknown problems there are. For something like glasses, we have a sense: okay, there are like ten areas of unknown problems that we need to go solve, like how do we get the right waveguides, how do we get the right laser display. No one's ever done this, but we can try ten different things in each one of those and kind of run it forward. And I don't think anyone can definitively tell you how deep the problem space is in AI. So it's very much research, and that's fascinating. So, yeah, what do you do to pursue that as well as possible? You get the very best team, talent density, make sure that people have the resources that they need, and clear all the other stuff that comes from running a big company out of the way. And that's kind of my job for them.
Yeah. You were talking about the CapEx and the data centers.
You obviously see something on the other side of all that spending that makes it worth it. But I'm wondering, do you subscribe at all to these bubble fears that people are talking about, that we're in this massive overspending, getting-ahead-of-our-skis bubble? And maybe a company like Meta will be okay because you guys do have a core business that makes a lot of money. But how do you think about this bubble talk that has been going on for the last few months especially?
I mean, I think it's quite possible. I think, basically, if you look at most other major infrastructure buildups in history, you know, whether it's railroads or fiber for the internet in the dot-com bubble, these things were all chasing something that ended up being fundamentally very valuable.
In most cases, it ended up being even more valuable than the people who were kind of pushing the bubble thought it was going to be.
But in at least all of these past cases, the infrastructure gets built out, people take on too much debt, and then you hit some blip, whether it's some macroeconomic thing, or maybe you just have like a couple of years where the demand for the product doesn't quite materialize.
And then a lot of the companies end up going out of business.
And then the assets get distressed, and then it's a great opportunity to go buy more. So I think it's obviously impossible to predict what will happen here.
There are compelling arguments for why AI could be an outlier
and basically just, you know,
if the models keep on growing in capability year over year
and demand keeps growing,
then maybe there is no collapse or something.
But I do think that there's definitely a possibility, at least empirically, based on past large infrastructure buildouts and how they led to bubbles, that something like that could happen here.
From Meta's perspective, I think the strategy is actually pretty simple, at least in terms of building out the infrastructure. No one knows when superintelligence is going to be possible. Is it going to be three years? Is it going to be five years? Could it be eight years? Is it never going to happen? I don't think it's never going to happen. I'm more ambitious, or optimistic; I think it's going to be on the sooner side. But let's say you weren't sure if it was going to be three or five years. In a conservative business situation, maybe you'd hedge building out your infrastructure, because you're worried that if you build it out assuming it's going to be three years and it takes five, then you've lost maybe a couple hundred billion dollars or something. I mean, my view is that...
That's a lot of money.
Well, no, I was going to say, in the grand scheme... It is objectively a huge amount of money. Yeah.
Right. I mean, didn't you just tell Trump you were going to spend like $600 billion? I mean, that's...
I did. Yeah, through 2028, which is...
That's a lot of money.
It is. And if we end up misspending a couple of hundred billion dollars, I think that is going to be very unfortunate, obviously. But what I'd say is, I actually think the risk is higher on the other side. If you build too slowly and then superintelligence is possible in three years, but you built it out assuming it would be there in five years, then you're just out of position on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history. So I don't know.
I mean, I don't want to be kind of cavalier about it. I mean, obviously these are very large
amounts of money and we're trying to get it right. But I think the risk, at least for a company
like Meta, is probably in not being aggressive enough rather than being somewhat too aggressive.
But part of that is that we're not at risk of going out of business or something like that. If you're one of these companies like an OpenAI or an Anthropic, where they're raising money as the way that they're funding their buildout, there's obviously this open question of to what extent they are going to be able to keep on raising money. And that's dependent both, to some degree, on their performance and how AI does, but also on all these macroeconomic factors that are out of their control. I mean, the market could get bearish for reasons that have nothing to do with AI. Maybe something bad happens internationally. And then it could just be impossible to fulfill the compute funding, the compute obligations. So it might be a different situation if you're in one of their shoes.
But I think for us, the clear strategy is that it just creates more value for the world if we make pretty aggressive assumptions about when this is going to be possible and take some risk that maybe it takes a little bit longer.
Do you feel like the US is in a better place now to help with this and to help American companies succeed? I think you've done a lot of work with this new administration, and when we were here last year, you were saying you wanted to kind of stay out of it. But it seems like you realized, was it the realization that this is just so important that, I have to play ball with this?
Oh, well, I mean, the thing that I want to stay out of is partisan politics.
Okay.
But, I mean, we will always want to work with and have a good partnership and collaboration with governments, right? And that's going to be especially true in our home country, but it's also true in other countries around the world where we serve large numbers of people.
So, yeah, I'd say yes.
I mean, I think this administration, for a number of reasons, is definitely more forward-leaning on wanting to help build out infrastructure.
And that has, I think, been positive.
And I think the next few years are going to be very important for the AI buildout and the AI infrastructure buildout. And having a government that wants that buildout to happen, both at the federal level and in the states where you work, is fundamentally a helpful thing.
Yeah.
There was a line in your superintelligence memo where you wrote, "Over the last few months we have begun to see glimpses of our AI systems improving themselves."
I was really interested in that line.
What specifically did you see
that made you write that?
Well, I mean, one of the early examples that we saw was a team that was working on Facebook that took a version of Llama 4 and made this autonomous agent that could start to improve parts of the Facebook algorithm. And it basically checked in a number of changes that are the type of thing that a mid-level engineer would get promoted for.
Really?
So, yeah. I think that's very neat. It's like you've basically built an AI that is building AI that makes the product better, that improves the quality that people observe. To be clear, this is still a low percentage of the overall improvements that we're making to Facebook and Instagram, but I think it'll grow over time. So that's one of the things that I was talking about when I said glimpses. I mean, this isn't, okay, the AI is improving itself at an exponentially fast rate or something like that. It's that I think what we're seeing are early examples of AI autonomously improving AI in ways that are having a positive impact on the experience that people get.
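A minimal sketch of the kind of loop being described here, purely illustrative and not Meta's actual tooling: an agent proposes candidate tweaks to a ranking model, a stand-in offline metric scores each candidate, and only changes that beat the current baseline get checked in. The parameter names, the metric, and the proposal step are all assumptions made up for this example.

import random

def offline_metric(weights):
    # Stand-in for an offline evaluation of feed quality; a real system
    # would replay logged traffic or run a holdout experiment instead.
    return -((weights["recency"] - 0.3) ** 2 + (weights["affinity"] - 0.7) ** 2)

def propose_tweak(weights):
    # Stand-in for the agent's proposal step; a real agent might emit a
    # code diff, but here we just perturb one hypothetical parameter.
    key = random.choice(list(weights))
    return {**weights, key: weights[key] + random.gauss(0, 0.05)}

weights = {"recency": 0.5, "affinity": 0.5}
baseline = offline_metric(weights)

for _ in range(200):
    candidate = propose_tweak(weights)
    score = offline_metric(candidate)
    if score > baseline:  # only improvements get "checked in"
        weights, baseline = candidate, score

The point of the sketch is just that accepted changes compound: each new candidate has to beat the already-improved system, which is the "AI improving AI" dynamic in miniature.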
And is that how to think about superintelligence broadly, that when you're there, it's AI that is rapidly improving itself?
That's what it means?
Or is that too simple?
Yeah, I think AI that can improve itself.
And that's beyond human level.
I think there's this dynamic today where all the AIs are trained on data and knowledge that people have produced. So a lot of the systems seem to be kind of very broad and have all the knowledge of humanity. So maybe, in some dimensions, it might feel like it is smarter than any one given person in the breadth of what it knows.
But I still think today the systems are basically gated on human knowledge.
And there is a world beyond that, right? I think you're starting to get into that with some of the thinking models, where they can go out
and solve problems in the future that no person can solve and then can learn from having solved
that problem. And the pace of that improvement to me is somewhat less important than just the process
of it. I'm not a super-fast-takeoff believer, in the sense that I don't think it's
going to be, okay, one day it can improve itself and then the next day it's going to take over
everything. I mean, I think there are way more physical constraints. It takes time to build data centers. And a lot of frontier human knowledge comes from empirical experimentation, right? If you develop some new drug, you want to see, A, if it works and, B, if it's safe for people. How do you do that? You run a test where you give it to a handful of people. Maybe more than a handful, but some statistically significant group of people. And you observe
how that goes for a while to see both whether the positive effects are kind of long-lived
and whether there are any negative effects. Okay, well, if you're trying to run a six-month or a
12-month trial, you can't do that in less than six or 12 months. I mean, maybe you can get a
negative result sooner, but you're not going to be able to validate that you can get the positive
result that you're looking for without having done that test. So I think that's also going to be
true with AI, right? There are going to be some things that maybe a superintelligent system can just intuit or reason from first principles using the knowledge that we
already have. But I think a lot of learning is going to be experimental. And I do just think these
things take time, right? You have to run long-term experiments if you're trying to make long-term
changes in the world. And that I think is going to be true for the AI too. Now, maybe it'll,
on average, run smarter experiments, so per experiment, maybe it'll learn more. I think it will probably be able to figure out some things from first principles. It will definitely be able to figure out a bunch of things from first principles. But I don't know, I'm not in the camp of people who think it's going to be, like, overnight everything changes. I think it's going to be this very steady progression where we're just making our lives better.
All right. Well, it's going to be a wild few years. Yeah, Mark, I appreciate you doing this.
Yeah, happy to. Yeah, congrats on the new show. Thank you. Appreciate it.
Alex, I enjoyed that interview, but I have to ask, how does it feel to have Mark Zuckerberg
in your Neural Band? I hope he's not actually in the band, but I guess we don't know for sure.
But seriously, thanks to Zuck for taking the time to be the first guest on Access. You can read more
about what we talked about in my newsletter, Sources.News. And Ellis, what are your plugs?
Yeah, you can find me on Twitter, which I will continue to call Twitter, at @hamburger, and at meaning.company. Access is produced in partnership with Vox Media. Please follow us. We're a new show. We need your support. We need your follows. Hit that notification button to get new episodes. You can find us on Spotify, Apple Podcasts, and all the other podcast apps that I don't know about. We're also on YouTube in video.
Please check us out there at AccessPod.
You can also find us on all of the socials at AccessPod.
Smash that like button.
Smash it.
All right.
Ellis, that's our first episode.
We'll see everyone next week.
