Big Technology Podcast - Mark Zuckerberg’s Personal Superintelligence, Layoffs and Payoffs, Writing With AI — With M.G. Siegler
Episode Date: August 4, 2025. M.G. Siegler is the author of Spyglass. He joins Big Technology Podcast for the latest of our first Monday of the month discussions about Big Tech strategy and AI. Today we cover Mark Zuckerberg's vision for personal superintelligence and whether it's more of a recruiting play or a real difference in the way the company builds AI. We also cover the massive bets on AI and whether they'll ever pay off, how interlinked big tech is with AI, and whether you should outsource your writing to AI. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Mark Zuckerberg wants to build personal super intelligence, but does the term have meaning beyond marketing hype?
We're seeing 100 million dollar payouts and lots of layoffs in Big Tech. How does that work?
Plus, should you use AI to write? That's coming up right after this.
Welcome to Big Technology podcast first Monday of the month edition where we talk about everything happening in Big Tech and AI with the great
MG Siegler of Spyglass. You could
find his writing at spyglass.org and we have so much to speak with you about today including what
super intelligence actually means, whether people are actually worth 100 million dollars as big tech
companies are laying folks off, and of course whether you should use artificial intelligence
to write. MG it's great to have you back on the show. Welcome
back. Thanks for having me, Alex, as always. And yeah, there is so much going on right now,
so I'm excited to dive into it a bit. Definitely. All right, so we're going to discuss the definition
of superintelligence on this show, or the lack of definition of superintelligence on this show.
I think the first way for us to get into that is just looking at this
document that Mark Zuckerberg has laid out about personal super intelligence. So I'll just start off. He says, over the last few months
we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable.
Developing super intelligence is now in sight. Okay, so before we get into exactly what Mark Zuckerberg wants to build, what is this guy seeing and in fact what are
all the AI labs seeing? Is this just them talking up the potential of the
technology to get bigger funding rounds and better valuations on the public
markets or do you think that in concert they are all seeing that we're at the
beginning of this curve? This memo is very interesting on a few fronts.
I do think sort of everything you're talking about is sort of true, right?
It can be true that we are at a unique time and that these companies are seeing that AI
is starting to, it sounds like a lot of them are framing it that they're now able to sort of work on their
own and are sort of leveling up to this new level of intelligence.
And again, we'll go into the definitions in a bit.
But I also think Zuck's angle in particular here is unique because, of course, he's not
trying to necessarily raise money, though they are, it sounds like, I think, raising debt
and stuff to help fund some of their expansion. But he's not raising from VCs, obviously,
as they've long been a public company, of course.
But he has a different incentive, which is that he's trying to bring in all the
talent in the world that he possibly can in AI.
And to me, when I first saw this memo, it felt sort of like a response to, um,
sort of some of the stories that you hear
that some people are reluctant to go over to meta, right?
And so he needs to lay out a vision,
a cohesive vision for what meta is trying to do
that's both different from sort of what some
of the other players in the space are trying to do,
and also, you know, hopefully inspiring to some people,
because again, I think,
at least in the press,
there's a lot of sort of shade being thrown at Meta that, look, are you going to go there
to build the next, you know, sort of silly social AI element while these other
places are trying to cure diseases and, you know, basically solve the future
of work and do all these other things.
And so Zuckerberg, I think, you know, realized that the one angle that he can definitely play up, given the historical strength of Facebook, of Meta, dating back to the Facebook days, and all the other sort of services and apps that they both built and bought, is obviously around social media.
But he won't sort of use that framing these days; it's more around personalization and doing things for personal reasons,
interpersonal reasons, and so I think that he's viewing
that as his angle.
We can go into a bit whether or not I think that that's
a good tack to take, but I appreciate that he's trying
to do it, he's trying to differentiate, and again,
the high level of the memo itself, it's almost
like a Sam Altman memo, when I was reading it first.
Oh, exactly.
Which is weird.
You even look at the formatting.
And this is a post that was written at
meta.com slash superintelligence.
He didn't even post a thing on Facebook.
And folks, if you haven't seen it,
this is what it looks like.
It legitimately is like black text on a white background.
It looks like the AI researcher websites where they like kind of want to show that they don't
really care about branding even though they really do and they're like in courier and like blue links
and stuff like that. It has exactly that feel and it does have just if you look at the optics of
this and Mark Zuckerberg knows this very well, it is sort of an attempt to position himself as
A, someone who knows what's going on with AI research and B, you know, one of these AI visionaries.
Yeah, this is like the old school version of, you know, when CEOs might write like a Tumblr post
back in the day, right? Exactly. Because they're being cool and edgy and they're meeting their
employee base where they are rather than... He should have just written the thing entirely in lowercase,
which is the way that Sam Altman writes,
and then he really would have been on point.
Or he should have, yeah, done a copy and paste
of having written it in a terminal,
and then basically used a screenshot of it.
Yeah, and I know it's easy to make fun of,
but I actually think this is a smart play.
You kind of have to show to these folks
that you're speaking their language.
Yeah. I mean, again, I think that he's doing what he feels like he needs to.
I think he's been out there obviously talking to all sorts of people,
you know, in terms of who he could potentially bring on board.
Some have come on board, many have not, it sounds like, you know,
from all the various reports on it. And I think he's probably, you know, using those conversations as inputs in his own head of, like, what's resonating, what's not, what do I need to do differently,
you know, how do I need to frame this. And again, I think one of the key things is, like, having a cohesive message
about what Meta is trying to do, and hopefully one that's different from what the other competitors are trying to do. And I think that that's where this sort of netted out.
So let's get into that message.
So he says,
It's clear in the coming years,
AI will improve all our existing systems and enable the creation and discovery of new things that aren't imaginable today.
But it is an open question what we will direct super intelligence towards.
And now here he is.
He says,
"Superintelligence has the potential to begin a new era of personal empowerment
where people will have greater agency to improve the world in the direction they
choose." And he says, "Meta's vision is to bring superintelligence to everyone. This
is distinct from others in the industry who believe superintelligence should be
directed centrally towards automating all valuable work and then humanity will live
on a dole of its output." So he's basically saying, like, listen, there are
others that want to use this technology for automation. We want to give everybody
the benefit of being able to use this technology and there he is setting that,
well, let's say unique vision for now, but I'm actually gonna question it.
But is that the right read on it, MG?
I think so.
Yeah, I mean, it's basically like, look,
there's all of these other AI companies out there,
including Google, by the way, which are saying
that they're closing in on solving,
again, we'll get to the branding,
but whatever you want to call it, AGI or super intelligence.
And the sense is that it's this nebulous term,
which I think everyone would agree with, right,
that it can be used for anything.
So maybe it's a problem because no one really knows then
what it's actually going to be used for.
And in some cases, that's scary to people, to a lot of people, right?
And so I think Zuckerberg is trying to frame it as, like, we're not trying to boil
the ocean, we're not going to use this technology to do every single thing under the sun.
We don't necessarily focus on enterprise like some of the competitors do.
We focus on personal relationships.
And so we're going to get to that technology, this level of technology, and we're going to use it
to empower you to use it for your own personal purposes, right? And so it's trying to be, again,
an empowering message versus some of the others where it maybe sounds a bit, not nefarious,
but a little bit nebulous and scary in that regard because it can, you know, who knows what it will
ultimately be used for and so
Again, the framing I understand. I'm with you, I'm sort of skeptical about,
yeah, that framing of it.
But again, I think from a pure comms perspective, when you're trying to make this message to the market,
I think that at least makes sense.
So now, yeah,
as I said, I have some hesitancy about going fully along with Zuckerberg
saying that they're going to be distinct from the other labs in terms of personal empowerment,
because if you look at what ChatGPT is, I mean, that seems to me like that's the goal. Now,
was ChatGPT a demo, maybe, so that they could sell their API? Yes, but has it overtaken that API
business at OpenAI? I'm pretty sure it has.
So I think that, like, all of these chatbots, if the models improve, if we get to, let's call it superintelligence or even AGI,
every single one of these chatbots is gonna have that property. Do you think?
Yeah, the thing that immediately popped into my mind when I was reading that is
along the lines of what you're saying.
It's basically like, it reminds me of trying to say that, look, we've built the first
personal computer, but it only can be used for personal reasons, right?
It's not going to be used for work.
And it's like, silly, a personal computer, a PC can be used for anything and it should
be used for anything, right?
We use it for work.
We use it for personal endeavors.
And so it's both.
And I think that AI, I mean, in my head, at least,
it's going to play out the same way,
that yeah, we'll have tools like ChatGPT and other services,
and people use them for both work and personal usage.
And that's what's happening right now.
And so Zuckerberg trying to sort of, yeah,
narrow this into just one use case, again,
it's like, it feels
like unnecessarily confining and I don't think it's going to play out that way.
Right.
So I don't think that meta will necessarily differentiate itself that way, although you
can choose the types of features and the types of functionality you want to focus on to make
them stronger.
For instance, Dario told me on Wednesday that Anthropic chose to focus its energy on making coding better,
and that's obviously, you know, worked out for them, or at least it's actually shown in the product.
So Zuckerberg says,
everyone having a personal super intelligence will help you achieve your goals, create what you want to see in the world,
experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be. I think he
leaves out that you'll also, I think in their vision, become a friend to
that superintelligence in Meta products, but I guess that's neither here nor
there. Now, despite the fact that I don't think that they're gonna have a
stranglehold on this potential product route,
it seems to me that, if AI continues apace,
all these things are actually gonna be
one of the main uses of the technology.
The vision isn't exactly off,
I just don't know if they could do it,
but the vision sounds like,
yep, that might be where we're going.
Yeah, and remember the other side of this,
which is like Zuckerberg must be looking at the market and knowing like, what advantages do I have?
And the one major one, you know, there's a few, I think, that
he has, but one of the major ones is the fact that they run services, social networks, that
are used by billions of people.
And so if you believe, you know, that this is going to be, as you're saying, like a sliver of what AI is,
they have a distribution advantage that others don't in terms of being able to leverage the technology for these sort of more social use cases.
And so I think that he's smartly sort of honing in on that, that fact of it.
Yeah, he definitely addresses how this could benefit the Facebook or the Meta business model in a pretty direct way. He says, if trends continue, then you'd
expect people to spend less time in productivity software and more time
creating and connecting. It's like, alright, so if you're able to get your
work done in five hours then maybe you're gonna want to post, you know, your
AI slop on Facebook. Now that's a little snarky, but it seems like that's what he's thinking.
Yeah, and then that's sort of using some of the arguments
that are out there from Sam Altman and others
against them, right, where they're saying,
like, look, in the future,
this technology is going to enable us all
to have a lot more leisure time,
and Zuckerberg's saying,
we know what to do with leisure time, right?
That's one of our specialties,
and so we're ready for that future,
and we're gonna help enable it,
and we're gonna help sort of fill in the gaps. As an aside, I'm sort of
skeptical of like the idea of, you know, not necessarily productivity gains.
I do think that those will happen, but I sort of go to the analogy of like when a
freeway is overcrowded and so they expand it and you know, you hope like, oh,
well then traffic will free up and instead just more traffic happens, right?
And so I feel like when we use technology to free up time,
we're just going to do more work, you know, in different ways with that time.
I don't think that everyone's just going to be sitting around like in WALL-E,
you know, watching the videos on our hovercraft things.
I do think that, you know, that's maybe again playing into what Zuckerberg hopes happens, but I'm not sure it's gonna play out that way
Okay, but then let me ask you this. Obviously, Facebook started on the premise of connecting with your friends on the internet.
That's gone well in some ways, but, like, really not well in others. And I'm not even talking about Meta scandals,
I'm talking about the fact that, like,
there was a clear shift on the social internet
where people stopped caring about what their friends were doing and they just wanted to watch
fun stuff. And we went from the friend feed to the for you feed and we don't really speak to our
friends online. Maybe we do in messaging apps. I mean we do in messaging apps. So I'm curious if
you think, and this is kind of more on the weirder side, but
what does Facebook become if it instead of like using the internet,
instead of enabling us to use the internet to connect with our friends, it like
basically helps us replace our friends with AI friends?
Yeah, I mean in some ways this is a very fundamental debate, right?
And yeah, it dates to sort of the earlier days of social networking.
And I do think also, you know, with young kids about trying to extrapolate how this
plays out, it feels like there's a definite sort of downside scenario that you're alluding
to where it sort of accelerates the trend of loneliness,
if you wanna call it that,
and people sort of being more insular
and spending more time with machines
from obviously computers and phones on down
to not need to spend time with friends
and not need to go outside and not do face to face time
because we have all these new tools that sort of make it more fun in some ways, I guess, to interact with people.
It's like, you know, in his ideal world, and he would never, of course, frame it this way,
but again, if you're just trying to extrapolate out what he's sort of suggesting, it's like, is it better to not go outside and play with your friends and instead do,
you know, like a game with your friends that's built by AI, you know, personalized for the two of you
in real time? And, you know, again, that sounds dystopian. I want to be mindful that, you know,
like, there's various different ways, obviously, that people will use these technologies, and some
of them will be incredible, and some of them will be good. And I do think, it's like,
I'm not of the mind that
social networking and even connectivity is
fully to blame for the loneliness epidemic and whatnot.
I think it's part of it, but I don't think it's, you know,
fully to blame for all that. And I do think it can help in some ways. The
sort of end state of that, though, is
what you were hitting at earlier, where it's like,
it's not necessarily another human friend that you're playing with, but instead an AI friend who's
just been generated and sort of what that means for your own internal psyche.
Yeah, it could end up being that we become less lonely but more insular, which is not, like, a trend that you would anticipate. But we had Elon Borough on the show, she's an AI
researcher and CEO who used to work at Meta in the FAIR organization, and she
brought up this great point where, like, there are social limits to what you
can do with friends. Like, you really can't, like, continue to call a friend at
3 a.m., you know, if you're
walking through a problem or something like that.
You can sort of make unlimited phone calls to AI and that's fine.
So like humans have limits, AI doesn't, is that good?
You know, we don't know yet, but potentially.
So all right.
Yeah, again, I think that there's, you can make the both sides argument to this.
And I do think that there will be good elements to it.
But that nuance is way too hard to try to explain in an open letter, right?
As Mark is doing here.
Yeah, that's right.
He prefers bluntness and he does end with a very blunt mic drop.
"Meta believes strongly in building personal superintelligence that empowers everyone.
We have the resources and the expertise
to build the massive infrastructure required
and the capability and we'll deliver new technology
to billions of people across our products.
I'm excited to focus Meta's efforts
towards building this future."
So obviously like Zuck has had these moments
where he's like said, like video or you know,
VR is the thing we're focused on.
He puts all the company's attention
and himself on the project.
Clearly that is AI.
It does seem like this is the recruiting pitch, by the way.
It's like, we got the money, we have the expertise,
we have the infrastructure, get on board
because whatever else you're doing
is not gonna be around for much longer
after we're done with this project.
And that is sort of the final note that he sends to the public
and maybe to those who've spurned him and maybe those who are still considering
his offers. Yeah, and I wrote about this a little bit when sort of the
news was first coming out after the Scale AI acquisition, about what they
were trying to build with the superintelligence labs. I'm skeptical that it
will work, you know, and we've talked about that, but I think that there's a little bit of the
Lucy pulling the football thing that Meta has long had, both with the public,
in terms of, yeah, pivot to video, and all the news organizations rushed to pivot to
video, and then Meta, Facebook, decides that they want something else, you
know, their algorithms prefer something else when it comes to engagement, and so,
you know, they basically leave the media organizations having to go do something else.
But the same thing is true with both, I think, users and also their employee base, right? Where it's like,
yeah, we're gonna focus on this thing, and
obviously they pivoted the entire company to be around the metaverse. And it's like, we're not hearing too much about that
these days aside from as it relates to AI.
And again, they're sort of doing some hindsight narration
about how, oh, it was always a part of the plan. And, you know,
as Zuckerberg has said in the past, right,
I maybe didn't realize that AI
would take off ahead of the metaverse taking off.
We thought it'd be sort of the inverse of it, but there was always a plan that the two of them were going
to interact together. And so, again, I think that that's the argument made in hindsight. But
for anyone who joined Meta for the metaverse build-out, and this was going to be the future,
like, what are you thinking right now as you have colleagues who are getting offered hundreds of
millions of dollars potentially to build out AI?
I don't think any of the metaverse team is getting those packages right now.
Yeah, you're big sad.
I mean, they changed the name of the company to Metaverse or Meta.
You've been laboring away on the VR goggles and some guy who's been working two years on AI at OpenAI
just got $100 million
to come over.
I mean, I'm sort of being facetious here,
but that wouldn't make me happy.
Although I sort of think you sort of respect the game.
Like if LeBron James comes to your team,
you know he's gonna make you better.
But I don't know.
It's hard to see all these LeBron Jameses coming over
and you're still in the metaverse unit.
So, yeah, go ahead.
The one other thing I would just add to this
that I was just reminded of is we often bring up,
yeah, the metaverse element of it as an easy one
to mention that Zuck was obsessed with
and then sort of quickly change focus.
But if you remember, he didn't change focus right away to AI because it obviously
took a little bit for it to come around.
At first there was the focus, then it became, um, if I remember right in terms of order,
at one point he was highly focused on, um, like encryption and personal, um, one-on-one conversations versus, you know, he wrote this whole memo about going private.
Yeah.
Pivot to privacy. Yeah. Yeah. Pivot to privacy.
Yeah.
And so that was a focus for a few months.
It seems like, because then crypto stuff started happening.
Right.
And they had a whole team that they spun up that was early on it.
And like Kevin Weil was on that team, if you recall, and there were many other
people within Meta at the time who were on that team, who, you know, it seems like they were going to be at the forefront of, you know, a new crypto boom
that of course they were just way too early for, if you want to consider it happening now.
And Libra, I think was, yeah, the name of the product that they were working on.
But again, that was a focus for a while until they just totally got rid of that
because it was not advantageous to them anymore.
That's right. And so Pivot to Privacy was their attempt to capture where the social sharing was going, like I mentioned earlier, that we weren't sharing in the feeds anymore and all sharing was
happening in messaging apps. So that was their way of saying, that's where it's going and we're going
to try to be at the forefront of that. And that, you know, in some ways worked. They have WhatsApp, and that is the leading messaging app in the
world. So I think there is, I mean, they had obviously bought it years beforehand,
but clearly put a lot of product focus there, for better or worse. They obviously
also just introduced advertising after promising they would not. So I
think there is some merit in running around, what seems like from the outside,
like chickens without heads, trying to go trend to trend. There are definite stories of big companies
in the past who've been like, oh, we're too cool for that next new trend, and then, you know, they're
making scanners and copying machines as the world has moved on. So I think that Meta
looks a little ridiculous for the strategy, but there is some wisdom to it.
Yeah. And, you know, I do think that, and this is a big part of Zuckerberg's pitch,
right, because they have this fundamental underlying advertising
infrastructure across their different social properties, they can take a lot of swings,
whereas other companies cannot, right? And if you remember, like,
Google definitely used to have the notion thrown at them that they were a one-trick pony, right?
That they could only ever do search ads and that was the only business.
Now it's still obviously the primary element of the business,
but they've done a pretty good job, I think, diversifying in many ways, right?
They have the cloud business, they have Waymo now, which is up and coming.
They have all different sorts of ways that they can potentially make money.
YouTube, that's a different form of advertising, but still, it's a different product line now that they have
that's fully monetized, which is incredible. And Meta still hasn't found that yet, right?
Like they've been trying with the Metaverse stuff and selling VR goggles and selling different
types of products, but still almost all of it comes from social,
feed advertising right now.
But again, you wanna take these swings
while you have that in the position of power.
And if you believe that that business
is gonna get disrupted in some way, be it by AI,
people using other products,
people spending more time on YouTube or Netflix and whatnot.
You know, you want to get it
done while you can. Take all the swings while you can.
Okay, so this entire conversation we've been talking about Meta's superintelligence effort, personal superintelligence.
And as with many conversations, we have yet to define it.
But that is intentional, because we are going to make time to say:
what is this word? Why do people keep using it? And what could it possibly mean? And we're going to do that right after this.
Hey everyone,
let me tell you about The Hustle Daily Show, a podcast filled with business and
tech news and original stories to keep you in the loop on what's trending. More than 2 million professionals read The Hustle's daily email for its irreverent and
informative takes on business and tech news. Now they have a daily podcast
called the Hustle Daily Show where their team of writers break down the biggest
business headlines in 15 minutes or less and explain why you should care about
them. So search for The Hustle Daily Show in your favorite podcast app, like the
one you're using right now. And we're back here on Big Technology Podcast, MG Siegler first Monday of the
month edition. You can find MG's writing at Spyglass.org. And speaking of the definition
of superintelligence, MG, you actually have a piece on this, The Endless Rebranding of AI.
Basically saying that for a while AI was good
enough, then people went to gen AI, then they started talking about AGI, and now
AGI isn't enough, and now we're talking about super intelligence. What's next?
What does super intelligence mean or is it purely marketing hype? I mean, this is...
So, obviously the term has been around for a bit.
It first really came on my radar,
I think actually when Ilya Sutskever spun out
and made his company called Safe Superintelligence, right?
And presumably he did that, obviously he did that on purpose,
but presumably he did it also to help differentiate
from what OpenAI was doing, right? At the time, all the talk was the march towards AGI.
Famously now, they had it baked into their contract with Microsoft, which is still an ongoing dispute, it sounds like.
And so, you know, they're deciding, like, okay, well, AGI,
nebulously defined as sort of computers reaching human-level intelligence,
some would disagree with that, and then superintelligence is a level above that, right?
It's going beyond what humans are capable of.
Again, there are, depending on what you read, depending on who you talk to, people disagree about this.
Part of the problem is no one ever really defined what AGI itself was. And then, you know,
going back again, as you did, with AI itself, it's like that was the all-encompassing thing,
but then we got all different sorts of, like, subsections of it because
people wanted to be more focused and they thought AI was too nebulous a term.
And then AGI came along, and then AGI was too nebulous a term.
And so now superintelligence, and now superintelligence is too nebulous a term,
because we have personal superintelligence
and we have safe superintelligence, and Microsoft is working on their own variants of
superintelligence that they want to talk about. And so it's getting a little silly. It
feels like it just is obviously all marketing
at this point.
And so I do feel like that, you know,
you gotta at some point pin people down to say like,
what are you actually trying to build
and what is the outcome of that going to be?
Yeah, I think we can agree here.
And this is something that we've been talking about
in the first half.
It's the same damn thing.
It's the same thing. Everyone's building the same thing.
But people would argue with you about that, right? Because they say, no, AGI, again, is sort of human-level intelligence,
and we're close to that, maybe, but we haven't quite gotten there yet with, you know,
all the various math tests and the different, you know, tests that they've built to prove that. And super, and safe,
or sorry, regular superintelligence is just a level beyond that.
There's lots of people who would argue about that
and lots of people would argue
that they're the same exact thing.
I'll stand on the table here and say it's the same thing.
I think it's, if there were different techniques,
I think this is really what it comes down to.
If there were different techniques to build AGI
and super intelligence, I would say, okay, sure,
Facebook's building superintelligence over here and OpenAI is stuck building,
you know, stale old AGI over there.
But they're just training large language models.
Yeah. And it's wild that we're trying to define the sort of offshoots of any of them
when we haven't defined the
first part of it, dating back to the original definitions of these things.
It's a little bit silly and that's obviously why I wrote the post once I saw that Microsoft
was jumping in the ring as well to try to come up with their own individual branding
for superintelligence. And then, of course, you can see it, so
Zuckerberg's new memo that we're talking about, it's titled that, right? But he started dropping this, the first time I heard him mention this
was in an
interview with Jessica Lessin of The Information, where he just started, like, to casually mention
personal superintelligence. And it's like, oh, is he just trying to, like,
say something offhandedly about how they, you know, have a sort of micro focus? But no, he
just kept saying it over and over again, so they clearly were going to try to make this a branding
thing for Meta. Yeah, I mean, I think that it's very exciting when the word super is in there. I
mean, really, all right, when are we going to get ultra intelligence? And I'm ready for ultra. Super ultra mega intelligence. It's gonna just blow away what we
had. Remember when we were talking about AGI? Forget that. We were such losers back then. Now
super mega ultra intelligence. That's where it's at. One interesting thing I saw in your story was
that Microsoft cannot actually work on AGI legally. This is what you write: the pesky OpenAI contract says as much, which sheds light on the talking points that Mustafa Suleiman has delivered over the past year or so, that Microsoft is happy to cede the frontier model work to OpenAI. So maybe this is meaningful: Microsoft can't build AGI because OpenAI is doing it, but it can work on superintelligence because that's not in the contract?
Yeah, and so this, I feel like, is definitely under-talked about. It's been reported before, but I feel like now, because OpenAI and Microsoft are maybe in the midst, in the heat, of a new negotiation over the AGI term and how that's going to play out for their relationship going forward, it has come to light again. Apparently part of their original contract, with the quote-unquote clause in it around AGI, was that Microsoft said they would not themselves go after AGI, and in hindsight that brings up all sorts of interesting points. When you've talked to even Mustafa Suleiman, he's brought up the point of like, yeah, we're happy to let OpenAI work on the cutting-edge foundation models and we'll sit back and work on more custom-tailored ones, when the reality was they're technically not allowed to work at the cutting edge
if it's going to lead to AGI. Now they could say maybe it wouldn't, but everyone else would
probably disagree with that. And I also think, at an even higher level, part of the reason why, I would imagine, you even saw Sam Altman pivot his talking points from AGI to superintelligence at one point, in various memos that he's written himself, was because of the clause in the Microsoft contract.
And it was clearly a point of tension between those two companies.
And you've heard Satya Nadella talk on, I think, a podcast about this; at one point he called it ridiculous, the notion that we were close to AGI, right? Because obviously he's very incentivized for them not to be close to that, because it would potentially sever the business terms with OpenAI. And so it was almost like, was Sam Altman throwing him a bone by starting to talk about things other than AGI and rebranding as superintelligence?
But yeah, the clause is in there originally that Microsoft is not able to work on AGI, and now they're happy to have this rebranding where Mustafa Suleiman can work on all the variants of superintelligence that he wants, because it's not AGI.
It sort of brings us to this point that we're talking about, how Microsoft and OpenAI are linked and have these weird contractual terms.
And you also point out that something
that's not talked about enough is just that big tech owns
so much of this AI moment, even if they go by,
you know, different names.
So this is the list that you put together.
Nvidia owns stakes in OpenAI, xAI, Mistral, Perplexity, Cohere, Scale AI.
Oh, that's a bunch. I didn't realize they had stakes in all those.
It's even more than that, those are just the major ones that I was pulling out. They own so many stakes in everyone.
So Microsoft owns stakes in OpenAI and Mistral. Apple owns stakes in none of the big AI startups, but you know, there's potential for maybe Perplexity. Okay, I'm gonna let go of that dream at this point. Google owns a stake in Anthropic, Amazon owns a stake in Anthropic, Meta owns stakes in Scale AI and Safe Superintelligence now, and Oracle owns a stake in Cohere. You say this is a lot of hedging. Does that mean basically that if this AI moment takes off, Big Tech is just gonna be good, whether they build superintelligence at home at Meta or whether a startup is actually able to do this?
I mean, obviously, to be clear, they want to do it themselves, right?
And they want to capture that full opportunity.
But it is interesting to me that this isn't talked about all that often, except around when new fundraises happen, in particular with Anthropic, right? Because Amazon and Google own such relatively large portions of that company, and famously Anthropic apparently built into their fundraising documents that those companies can only own up to a certain threshold, because they didn't want to be beholden. You know, famously when Dario left OpenAI, there was some talk that he didn't like the Microsoft element getting involved with OpenAI and what that was going to do to the mission. But yeah, all the big tech companies aside from Apple, as mentioned, basically have their stakes in these different companies, and some of them with multiple stakes now, because these companies have had to raise so much money.
And I do think, if you were just to extrapolate out the ultimate success state of AI, say that OpenAI and Anthropic and maybe a couple of others are the next multi-trillion-dollar companies. If Microsoft, again, in the negotiations they're going through right now, ends up owning one-third of OpenAI and it's one day worth $10 trillion, that's meaningful money even to Microsoft.
It's already meaningful money to them because that company is so highly valued, but it's
not more meaningful than a lot of their other sort of core businesses.
But again, this is relative early days.
If you believe this is early days of AI and these players own these giant stakes, it will
end up being a huge, huge element of those companies.
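As a rough illustration of the stake math being discussed, here's a minimal sketch in Python; the one-third ownership and $10 trillion valuation are the hypothetical figures from the conversation, not reported deal terms:

```python
# Back-of-envelope value of a minority stake at a hypothetical valuation.
# Figures are the illustrative ones from the conversation, not reported terms.

def stake_value(ownership_fraction: float, valuation_usd: float) -> float:
    """Value of an equity stake at a given company valuation."""
    return ownership_fraction * valuation_usd

TRILLION = 1e12
value = stake_value(1 / 3, 10 * TRILLION)
print(f"${value / TRILLION:.2f} trillion")  # prints "$3.33 trillion"
```

Even at today's roughly $3 trillion Microsoft, a stake on that order would be a material part of the company, which is the point being made.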
I was reminded of back in the day, when Microsoft invested in Facebook before they went public. People were up in arms about the deal, thinking it was such a bad deal, like Steve Ballmer was doing another boneheaded thing, and it ended up being one of the best deals that he ever struck, certainly. Easy, of course, to say in hindsight, but I think it was a savvy move, because they did it alongside a partnership, I think it involved ad sales and various other things at the time, maybe Bing was involved in it as well. It was a long time ago. But I do sort of view it through that lens. Now, their mistake was selling off that stake in Facebook a little bit after it went public, whereas now that would be worth billions and billions of dollars. And again, if Microsoft, if Google, if Amazon hold on to these AI stakes and these companies end up being worth trillions of dollars, these stakes themselves could be worth trillions of dollars down the road.
Yep. You always gotta get Bing involved. That's important.
Can't leave Bing out of the equation. But you know, it's interesting, because it really underscores how much money, resources, attention.
Like we just spoke about how much money
and attention Zuckerberg is putting into this.
We talked about, of course, these massive investments. When we talk about Google having some of Anthropic and Amazon having some of Anthropic, it's actually, I think, $3 billion and $8 billion respectively. They own something like, you had this in your story, something like 30% or more of the company. Actually, Google and Amazon own more of Anthropic than Microsoft owns of OpenAI. So it's
wild. And I just wonder, if we go back to everyone building the same thing, is there a way that these bets pay off? You've spent a lot of time as an investor. What would it take for these bets to pay off? What do we need to see for this money to be money well spent?
So, just to be clear, Google and Amazon combined, if my math is correct from all these court documents that I went through, and I believe it's roughly correct, those two combined own something just north of 36%, whereas Microsoft, potentially in this scenario, would own around 33% of OpenAI, if and when they can convert to actually have equity ownership in the company.
To answer your question though, what would it take? It takes a business model first
and foremost. Both OpenAI and Anthropic, it seems, are certainly growing well from an ARR perspective. Their models are obviously slightly different, but they're doing a good job in
different markets selling their wares.
But it's certainly not enough to turn either one of them profitable at the moment.
And so they're going to have to do a number of different things.
Either they have to keep growing at the incredible growth rates that they've been growing at.
And that's possible up to a certain extent.
More likely, they're going to have to add other layers, other legs of the stool of their business, to figure out how to make the money work.
I think the main wild card, though, is that much like Google back in the day, it might end up being that these companies, in the ultimate success state, if they're able to reach it, end up creating new models. Not necessarily models we've never heard of before, but much like Google, and even Meta to some extent, they basically created new forms of advertising that ended up being more lucrative than previous forms of advertising, because of search ads, because of intent, right, and then with Meta because of the feed. So these are different forms of an old-school version of monetization. But what does that look like in the future version of AI? And OpenAI has famously said, right, that Sam Altman didn't want to go down the path of advertising. I do think that they will eventually go down that path. But I also think that if they go down that path, and if it's meaningful to them, it will look different than what search advertising has historically looked like. There will be a part of that, because obviously search is a part of ChatGPT right now, but it probably has to look different.
And again, you ask what it takes to make these companies ultimately successful. I think it's going to be multiple legs of their business, so not just selling APIs, or not just selling access and premium versions of their products. They're going to have to have a lot of different things, including probably devices, right? Obviously, we know OpenAI is working on that. But I do think that different flavors of advertising, newfangled versions of old business models, is what leads to their success.
With the investor hat on, with the VC hat on,
how do you try to model that out? I mean, again, you can model out where you think the growth rate
is just of the current business and what that looks like. And I think OpenAI has said in their models that they're gonna get to profitability by 2029. That said, having had some experience now with OpenAI in particular on the investor side, and with other companies, just generic companies, those models with five-year, ten-year horizons never play out the way that they're modeled out to.
Now there can be upside to that.
Maybe they play out better in some cases,
but they never play out exactly like that, right?
It's just to try to paint a picture
of how you can get to a space
where you could, say, be profitable,
even when you're burning billions and billions of dollars.
This is just the extreme version of that with AI.
Right, so I spoke with Ravi Mhatre from Lightspeed for the Dario profile that I wrote, and I said, tell me how you thought about this investment. I'm going to get this just directionally accurate, it's not going to be precise. But he basically said, well, we took a look at the entire pool for knowledge labor, and then asked, could this either augment or replace that? And he said the market is 15 to 20 trillion dollars. He goes, you work backwards and just say, at a $60 billion or $100 billion valuation, could you get a venture-style return? You absolutely could. Sometimes it's about how you size the markets top down. And then he told me that they put a billion-dollar check into Anthropic.
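The top-down sizing described here can be sketched as a quick back-of-envelope calculation; the 1% capture rate and 10x revenue multiple below are purely illustrative assumptions, not Lightspeed's actual model:

```python
# Top-down TAM sketch: if AI captures a slice of a ~$15T knowledge-labor
# market, what return multiple does a given entry valuation imply?
# The capture rate and revenue multiple are illustrative assumptions.

def implied_multiple(market_usd: float, capture_rate: float,
                     revenue_multiple: float, entry_valuation_usd: float) -> float:
    """Hypothetical return: (captured revenue * valuation multiple) / entry price."""
    future_value = market_usd * capture_rate * revenue_multiple
    return future_value / entry_valuation_usd

TRILLION, BILLION = 1e12, 1e9

# Suppose the company captures 1% of a $15T market and trades at 10x revenue.
multiple = implied_multiple(15 * TRILLION, 0.01, 10, 60 * BILLION)
print(f"{multiple:.0f}x on a $60B entry")  # prints "25x on a $60B entry"
```

Even under these conservative-sounding assumptions, a huge market can rationalize a large entry valuation, which is the "work backwards" logic being described.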
So it's just fascinating how unknown these are, and such wild projections to get to the numbers you have to get to.
I'll give you just one example of a last-generation company where this sort of played out, that I was a small part of, which was around an Uber investment. You're looking at the market, and they just had the black car market at that point, and it's like, how do you do what you're talking about, basically, the TAM analysis, the total addressable market? How do you think about that?
And with Uber back in the day, it always had to be like
that there would be something else to it,
and there was the notion, I think the early notion
of what UberX would become where it more democratizes
the model and it's not just about black cars,
but it wasn't fully baked yet, it wasn't rolled out yet.
And so you sort of had to buy into this vision of it.
It ended up obviously working out that way,
but there still were other elements that were key
to what they were doing, including things like Uber Eats,
right, which came along, which you never would have sort of envisioned was necessarily a part of it.
There was some inkling like, yeah, could they be like the next version of, you know, again,
like a democratized FedEx or something like that, right?
Because they have all these drivers on the road and you can move parcels around and move
different things around.
But it's all just theoretical.
And so yeah, you try to do these market analyses and figure it out, but it's all very silly to look back upon at the end of the day, because no one actually knows, including the companies, right? It almost always goes back to the notion of, build a good product; if people love using it, they're gonna continue using it. And that doesn't necessarily mean that you're going to figure out the right model for it,
but if you have enough users and you figure out a way
to monetize those users,
it's going to be a massive opportunity, obviously,
and at least the good news for these AI companies, unlike some of the other, if you wanna call them, VC-subsidized businesses back in the day, is that they have models that are working.
There were so many different companies even just a few years ago where, again, it was VC subsidized.
They had no actual business models. Here it's just a matter of, are they ever going to be able to slow down the spend that they have to do to train these models, in order to get to a scale at which they can actually finally equalize that a bit.
And in that context, how would you analyze the investments in people that they're making? Because think about it this way, something Mhatre told me: Amazon went public at $400 million. Mark Zuckerberg is reportedly handing billion-dollar offers to people.
Yep, yep.
So what would you need to do to get the payoff on that?
This is where I am sympathetic to Zuckerberg's arguments. He's made the argument, and I wrote a post about this, that essentially boils down to: it's all relative, right? Like, okay, if you are Meta and you're spending whatever it is, $75 billion per year on CapEx right now on infrastructure build-out for AI, does it not make sense to spend some subset of that on the people who can actually make this work? And then it sort of becomes this math equation of, well, what makes sense? And you could certainly make the argument that to date, the human level of investment has been way under-indexed versus what the CapEx, the infrastructure, has been.
It's a weird dynamic for reasons that we were talking about before where he's creating this
weird disturbance in the market because it's totally changing what the historical norms
have been in terms of compensation.
So you can have people who are getting paid what the quote unquote normal rate was for
an engineer before, which is say in the hundreds of thousands of dollars at one of these tech
companies.
And now you have people getting paid hundreds of millions of dollars potentially to come
work on this.
And again, trying to be sympathetic to Zuckerberg's position here: if you're needing to catch up in AI and you're spending $75 billion plus a year on infrastructure, the amount that you're spending on people, even with these $100 million contracts, is going to be such a small subset of what you're spending on infrastructure. And if they're a key ingredient to making it work, you've got to do it.
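To make the relative-spend argument concrete, here's a rough sanity check; the headcount, package size, and payout period are hypothetical illustrations, not reported figures:

```python
# Sanity check: even headline compensation packages are a modest slice of a
# ~$75B/year AI capex budget. Headcount, package size, and payout period
# are hypothetical illustrations, not reported figures.

CAPEX_PER_YEAR = 75e9   # rough annual AI infrastructure spend discussed
PACKAGE = 100e6         # a headline $100M compensation package
HIRES = 50              # suppose 50 such researchers
YEARS = 4               # packages paid out over a 4-year period

annual_talent_spend = HIRES * PACKAGE / YEARS
share_of_capex = annual_talent_spend / CAPEX_PER_YEAR
print(f"${annual_talent_spend / 1e9:.2f}B/year, {share_of_capex:.1%} of capex")
# prints "$1.25B/year, 1.7% of capex"
```

Under these assumed numbers, the talent bill stays a small fraction of the infrastructure bill, which is the "it's all relative" point.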
Right. And so we've already poured one out to the, you know, poor engineers who are working on the metaverse.
But, you know, part of the title of this episode is layoffs and payoffs.
And what do you make of the fact that we're seeing this moment in the tech industry where like literally tens of thousands of people are being laid off as these mega offers come in.
It seems like they don't really need to lay off
those people from a financial standpoint.
So why would they cut so many?
I mean, thinking about Microsoft in particular.
Yeah, and Satya Nadella wrote an internal memo about this, which he released publicly, I'm sure, because it was gonna leak. And so they wanted to get ahead of it anyway. He's basically trying to explain: I think the number in total now is 15,000 people laid off from Microsoft over a few rounds this year. And that's at the same time, as we're talking about, that they're spending insane amounts of money on CapEx, and on hiring people like Mustafa Suleiman to come over, acquiring his company.
How do you square that circle? Satya Nadella basically tried to. He called it, I think, an enigma of a time, and said it doesn't make a lot of sense on paper, but that basically we're in this transformational moment with AI, yada yada, what everyone says.
But I do think there are a number of things going on here, one level deeper. One, businesses in general are looking at the landscape and figuring out whether they need all of these people, tech companies in particular, now that almost all of them have around 100,000 or more people on their payroll. And a lot of those people are doing things which, the unfortunate truth is, the internal powers that be, the leadership, probably view as, not that they're bad people, but they're doing stuff that was last year's innovation, right? Or they're doing stuff that leadership might think AI is able to replace, if not now, then sooner rather than later. I also think there's a part of it where, look, they know that Wall Street at some point is gonna get skittish on this CapEx spend, especially if the AI revenue isn't growing in concert with it soon.
And so I do think this is a way to throw a bone to Wall Street and say, look, even though it looks like we're being sort of wild on our spend when it comes to CapEx, we are being prudent financially. We're thinking about how we organize the business overall. And that includes letting go of some of the people that we've had previously, that we feel like we don't need anymore, while we continue to bring on new people that we do feel like we need from the AI perspective, to build that out.
And the weird thing about how this plays out, to me, is that it's almost like a microcosm, right? Of the overall debate being had about, as I titled my piece, sort of the haves and have-nots.
It's a great title. No, you write the have lots and the have nots, which really is beyond just having. As I think you put it at one point, they're basically getting offered more money than anyone gets offered for any position, including the CEOs of these companies.
Maybe there's some star athletes who make something comparable when you consider their
endorsement deals alongside whatever they make in salary, but it's just an incredible
amount of money.
And how do you weigh that against these layoffs? Again, I think of it as a sort of microcosm of the world in which we live, where there are the extremes of society with the ultra-billionaires while people are struggling on the ground on a daily basis, and the microcosm that can exist within a company itself. What does that lead to in terms of internal politics and weird intra-company dynamics? And I think that's part of why Satya Nadella and others have written that basic memo; they're trying to get ahead of that. And also lastly,
I would just say, because we've seen this a few times now, from Andy Jassy and others, they're basically giving the message that, look, you've got to incorporate AI into what you do, or you've got to get off board right now, because this is coming and this is how we're all gonna do business. So do it right now, or else you're gone.
Yep, you go full Tale of Two Cities at the beginning of that piece: it was the best of times, it was the worst of times. I mean, that is the way to start that piece. Okay, so let's end here: AI for writing. It's pretty controversial. I think AI is getting to the point where it's starting to write adequately. Not well, but adequately. I don't use it for writing, but when I read AI-written stuff I no longer have a gag reflex. But you
write, I think, something very very spot-on about what it means for our society, our
lives, and it's that writing is not
just a way to communicate information, it's a way to organize your thoughts,
it's a way to think. And so we focus so much on the output of AI writing but
very little on the input and that's where it's gonna hurt if people start
relying entirely on these AI tools to convey their thoughts. So kind of take us through your thinking there
and talk a little bit about whether you think AI writing
is gonna be a net positive or negative.
Yeah, this came from me just trying to extrapolate out how I would view using AI to write, which, like you, I don't right now. And it's like, would I ever? I was trying to think through, yeah, how good is the technology now? And to your point, it seems like it's getting good enough for a lot of different types of writing.
And I've also been thinking a lot about one thing I hate, which is email, as many people do. I've long written about my dislike of it; famously, a long time ago, 15 years ago, I quit doing email for a long time, and I wrote about it at TechCrunch when I wrote there back in the day. I got a lot of flack for that story, but I think it also resonated with people, because it's like, well, you actually need to do it. And I think we're getting to the point with AI where you probably don't need to do it anymore, right? Because you could basically have AI write your emails for you.
And I do think that eventually the way it plays out is that you have AI reading those emails on the other end, and it ends up being these bots writing to other bots, and then sending you a to-do list of whatever the action items from that need to be.
And so I was thinking about it from that perspective. But then I go to my actual love of writing, what I do and why I love to do it, and I would never use AI to write, say, an article, even about AI, because that would take away the value I get out of actually writing. Which is that writing, like for many people, helps me think, it helps me form my own thoughts. And I do think that's true of a lot of people, even if they don't necessarily realize it. People who write for a living, or write a lot, do realize there's a real element to that, but people who don't probably gloss over that fact.
But as a society, I would imagine it plays out in a way where maybe there's some level of writing, like email, the tedious stuff, that does get automated away. But certainly the creative endeavors, and also even just the memos that help you formulate your own thoughts, those don't go away, because it's just as much about the process of writing as it is about what you put down on paper, what goes on in your head and how you formulate those thoughts.
It does make me wonder.
I see the value, obviously, in writing,
but I also know what's happening right now,
where there are these great screenshots that go around Twitter of professors scolding their students for being late on their assignment and ending it with "ask ChatGPT," like they're copying and pasting it directly out of the chatbot. And those students are going to be like, okay, well, here's the assignment, and then they're going to paste their assignment and it's going to say "ask ChatGPT." So we're just going to be a society, I think, of people shuffling AI-written text back and forth. Which, I mean, you're right, given the amount of tedious stuff that we've been writing in our lives,
that might not be the worst thing,
but I think there is that risk of the fact that like,
yeah, we're not gonna think as deeply about things anymore
without writing.
Look, the last thing I would say about that is, student cheating is nothing new. It's always existed, right? It didn't just begin with large language models; there were calculators and everything before that. And so I think ultimately that will sort of suss itself out in some way.
I do think it's a real issue, obviously, because of how good these LLMs are and how new this
whole wave is right now.
And so it's going to take a learning curve to get to the point where we reach an equilibrium of where it makes sense and where it doesn't. But I do think, at the end of the day, if you're doing something for your own purposes, be it writing, be it many other elements of life, you're only robbing yourself if you're using a tool to do it in an automated fashion. So yeah, for some writing assignments that you don't wanna do, fine. You didn't wanna do them, that's fine. But ultimately there's gonna be something that you do wanna do, or that you get value out of. I think that will work itself out.
I agree completely.
All right, folks, the website is spyglass.org.
MG Siegler joins us the first Monday of every month.
This is part two of what I hope will be a long series.
MG, great to see you again.
Thanks for joining.
Thanks so much, Alex.
Talk soon.
All right, everybody.
Thank you for listening.
Amjad Masad, the CEO of Replit, will be on with us
on Wednesday to talk about vibe coding.
What is it?
Will it work in the long run?
Is it sustainable and what will it change?
So stay tuned for that and we'll see you next time
on Big Technology Podcast.