TBPN Live - Satya Nadella LIVE on TBPN | Alexander Embiricos, Kyle Daigle, Jay Parikh, Jared Palmer, Michael Grinich
Episode Date: October 28, 2025. (17:52) - Satya Nadella, CEO of Microsoft, discusses the evolution of Microsoft's partnership with OpenAI, highlighting their shared commitment to advancing AI research and democratizing AI technologies. He reflects on the initial collaboration, noting that Azure was OpenAI's first cloud provider, and emphasizes the importance of building a system that integrates innovations across the ecosystem into an organizing layer. Nadella also addresses the balance between competition and collaboration, stating that while both companies build applications and there will be competition, partnerships are essential to progress. (53:35) - Alexander Embiricos, a product lead at OpenAI's Codex team, discusses the integration of Codex into GitHub and VS Code, enabling developers to utilize Codex's AI capabilities seamlessly within their existing workflows. He emphasizes Codex's role as an AI software engineering teammate that collaborates across various tools, enhancing productivity by assisting in tasks like code generation and review. Embiricos also highlights the importance of making Codex widely accessible, noting that users with a Copilot Pro Plus account can access Codex without needing a separate ChatGPT subscription. (01:11:41) - Kyle Daigle, Chief Operating Officer at GitHub, joined the company in 2013 and has been instrumental in scaling its ecosystem engineering teams and overseeing key acquisitions. In the conversation, he reflects on GitHub's growth from 140 employees to over 3,000, discusses the integration of AI tools like Copilot to enhance developer productivity, and emphasizes the importance of maintaining an open platform to foster collaboration and innovation.
(01:27:04) - Jay Parikh, Executive Vice President of Core AI at Microsoft, discusses the company's collaborative approach with OpenAI towards achieving Artificial General Intelligence (AGI), emphasizing their shared mission to empower developers and unlock creativity through AI. He highlights the potential for a significant increase in software creation over the next decade, facilitated by AI technologies, and underscores the importance of providing developers with the right tools, guardrails, and observability to harness this potential effectively. Parikh also touches on the evolving nature of software development roles, suggesting that AI advancements will make software creation more accessible to a broader range of individuals with ideas. (01:46:08) - Jared Palmer, currently the VP of Product, CoreAI at Microsoft and SVP of GitHub, has a rich background in AI and developer tools, including his tenure as VP of AI at Vercel and the creation of v0.dev and the AI SDK. In the conversation, he discusses the evolution of AI in developer tools, emphasizing the potential for AI to manage coding tasks and the future role of AI in engineering management. (01:59:35) - Michael Grinich, founder and CEO of WorkOS, a developer platform that enables companies to become enterprise-ready, discusses the importance of repeated messaging in advertising, referencing David Ogilvy's concept of addressing a "moving parade" rather than a "standing army." He emphasizes the need for continuous engagement due to the ever-changing audience and distractions, highlighting GitHub's annual Universe conference as an example of evolving messaging. Grinich also shares his passion for building platforms that empower developers, drawing parallels to Microsoft's role in enabling software development through Windows. 
TBPN.com is made possible by:
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com
Polymarket - https://polymarket.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
fal - https://fal.ai
Privy - https://www.privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com
Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive
Transcript
You're watching TBPN.
Today is Tuesday, October 28th, 2025.
We are live from GitHub Universe, here at Fort Mason in San Francisco.
And we have a ton of exciting interviews today.
We are interviewing the CEO of Microsoft, Satya Nadella.
We're very excited.
We've been on a quest to interview Mag 7 CEOs, and we are very excited to sit down with him today.
And it's a huge day because Microsoft just today announced
that they have entered the next phase of the partnership
between Microsoft and OpenAI.
There of course are dueling blog posts,
one on OpenAI's website,
one on Microsoft's website.
We will go through some of the Microsoft update
to give a little bit of background
before we go into our interview with Satya Nadella.
But first, let me tell you about Ramp.com.
Time is money, save both.
Easy-to-use corporate cards, bill payments,
accounting, and a whole lot more, all in one place.
Let's go.
So this all started back in 2019.
Microsoft and OpenAI, it says here, have a shared vision to advance artificial intelligence
responsibly and make its benefits broadly accessible. What began as an investment in a research
organization has grown into one of the most successful partnerships in our industry. And I think
that it might be one of the greatest deals of all time in business history. It is a remarkable,
remarkable deal. I was digging through some of the other deals where big tech companies
worked with each other or bought stakes in each other and there are some wild ones that people
might not know about. I think it might be worthwhile to go through. Before we do, let me tell you about
Restream. One livestream, 30-plus destinations. Multistream and reach your audience wherever they are. Yeah, there they are.
So back in, what was it, 1997, Microsoft bought $150 million of non-voting Apple stock,
which settled some litigation,
committed that they were going to put
Microsoft Office on the Mac for five years,
and it made Internet Explorer
the default browser on the Mac.
And so...
Sadly before my time.
I guess I was born, but...
They did this deal,
but by 2001, Microsoft had converted
all of the shares into common stock,
netting the company approximately
18 million shares of Apple,
and then by 2003,
they'd exited the position.
Which, I don't know if that's a good deal.
Maybe they should have diamond-handed it, but it's still a wild, wild moment.
Yeah, I think, you know, something I'm excited to talk to Satya about is just how much foresight he had.
Yeah.
Whether he knew, whether he was expecting a base hit or he really felt like it'd be a home run.
Yeah, yeah, yeah.
It is a very fascinating thing.
It's like you're doing a deal with this nonprofit.
Sam Altman's obviously a big character. Even in 2019, Sam Altman was an important figure in tech, of course.
But at the same time, I was running the numbers, and I was like, at least today, Microsoft makes like a billion dollars in revenue every single day.
And so if you think about it, I don't know if you actually think about it this way, but if you just think about it, like, it's your job as the CEO to steward capital.
And a billion dollars sounds like a lot.
Yeah.
But you're making a billion dollars every business day.
Yeah.
Like there's five business days a week, 52 weeks a year.
Like, revenue for Microsoft right now is about a billion dollars a day.
And so do you treat that deal like it's just another day at Microsoft? Or is it something where there's weeks of negotiation, because you do have a sense that this is going to be one of the more important deals? And it's always interesting to go back. When you look at the check size relative to the capital available, it looks like a flyer if you were a VC fund, right? And, you know, I think it's fascinating because there are so many, you know, scaled platform VCs that just have to look at this announcement and see that
Microsoft owns 27% of potentially the most consequential company to come out of the 2010s, right?
Yeah, yeah, yeah.
And they just have to look at that announcement.
I haven't done it, but I've seen there's a little bit of sour grapes from the venture capital community.
Saying, you didn't get enough. We didn't get enough.
Yeah, if you look at the risk reward, the, the amount of capital that the seed investors deployed into the company relative to their ownership today,
it looks, you know — certainly they made a great return on paper. But did they actually
make a great return relative to the risk of, you know, investing in a company that went against
every YC practice there is? Right? Like, there are so many videos of Sarah Martin saying,
don't reinvent the wheel. And they said, let me reinvent the wheel. And ultimately, I mean,
as ChatGPT has exploded, the chaos around the company has almost entirely centered around the corporate structure.
So I wonder — you know, this may be the final company for Sam, right? But I wonder what he would do next time around.
I mean, we have run the experiment, right? Because he has a bunch more companies.
Most of them, I think, are pretty clean C-corps. But then again, Worldcoin has a token and stuff.
Like, there are multiple things going on. But I think probably if we dug into his
BCI company, we would see a cleaner C-corp. Yeah. And I'm excited, in the fullness of time,
when we get the books and the documentaries on both OpenAI and this investment. I can't wait to
see and try to understand what OpenAI was really facing at that moment
when they did this series of deals with Microsoft, right? Because the ultimate deal of, you know,
selling such a large amount of the company
with a rev share attached
which is — the rev share is wild.
Ask, you know, a YC partner:
if a portfolio company
if one of the companies in their group
came to them and said yeah I have this investor
it's a large tech company
they want to invest like they want to buy like
a lot of the company and they want a 20%
rev share for like the next 10 years
they'd be like you need to walk away from that deal
immediately. But Sam and Satya did it, and here they are.
How about two on 20, a SAFE note.
Good old fashioned way.
Good old fashion.
Why are we reinventing the wheel?
There are a couple other interesting cross-industry deals.
Jeff Bezos famously had a big stake in Google.
Wasn't he an angel investor?
There's also the time that Intel, TSMC, and Samsung came together to invest in ASML,
which of course makes the lithography machines. Kind of pulling forward,
Kind of the initial like weird, you know, circular deal that people point to.
But it worked out.
But that one worked out for sure.
And, yeah, it's interesting seeing the evolution of this deal in particular.
Some quick history.
July 22nd, 2019, Microsoft invests $1 billion in OpenAI.
Azure is named the exclusive cloud provider.
Microsoft is named the preferred commercialization partner.
In 2020, Microsoft receives an exclusive GPT-3 license for its products and services.
And the foresight here from Satya is incredible.
Like, at the time, it wasn't obvious.
Yeah.
Like, only a couple years prior, Elon was basically walking away because he said there was
no material progress internally. But to have that level of understanding and
that much conviction to invest a billion dollars when you're years out...
It's more complicated than that, I think, because there are plenty of big tech CEOs who have
taken $1 billion flyers on crazy ideas.
We see that all the time with the, you know, oh, you want to build some new hardware thing
or how much did Apple spend on the car?
Or they probably spent a billion dollars working on that car already.
And, like, you know, the risk-adjusted bet made sense.
But then ultimately they pulled back from that.
And that's happened probably all over the place.
And Satya himself probably has other times when he's
put down a big investment for something that was risky and it didn't pan out.
What is interesting about the OpenAI deal is that I know investors personally who
were looking at the deal before that, and they couldn't get over the complicated structure.
And so they dipped out for that reason.
And so it's not that it's like, oh, wow, we need to give a round of applause for someone who's
helming a trillion dollar company to write a billion dollar check.
Like that's not that crazy.
That happens all the time.
What is crazy is to get past all the lawyers being like, you're doing what? And how is it structured? What are you talking about? There's a nonprofit involved. Why are we doing this?
Run that back to me.
Exactly. Exactly. And so, knowing that we do live in a society where, if value is created, if a new platform emerges, everyone can overcome any chaos.
All the craziness.
The complexity. Yeah. And so in 2021, Microsoft followed on with more investment,
and then the Azure OpenAI Service went generally available in 2023.
Before we move on, let me tell you about Privy: wallet infrastructure for every bank.
Privy makes it easy to build on crypto rails.
Securely spin up white-label wallets, sign transactions,
and integrate on-chain infrastructure, all through one simple API.
And the reason that I say this is so notable, and this conviction really matters, is that
when you look at the other hyperscalers,
how slow they were. Even years after the sort of ChatGPT moment — and granted, everything we've
talked about so far predates ChatGPT — years after this ChatGPT moment, we still
have people that are only now coming around to saying, we're ramping up, we're ramping up
capex, right? And so I just think that Satya was incredibly ahead of the curve here. Yeah.
It's easy to look at the deal through the lens of, oh, well, Copilot was the glimpse of value creation out of what is effectively a nonprofit academic lab.
But it's important to remember, Copilot happened three years after.
I think Copilot launched two or three years...
2020.
Yeah, two or three years fully after that initial $1 billion investment.
So, yeah, remarkable progress.
Let's go back to the Microsoft announcement today.
We can go to some of the key points.
Details on what's evolved.
Do you want to read some?
Yeah, so what has evolved?
Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel.
So I'm assuming they're going to get Joe Rogan, Andrew Huberman, Lex Friedman, and get a panel of podcasters to decide this.
This is a question that I want to
dig into with Satya in just a few minutes. Right? Today, like, nobody — you know, some folks
can't agree on the definition of AGI. It's very much in flux, right? Tyler Cowen
was on our show a few months ago saying that he felt AGI had already been achieved, that we keep
moving the goalposts. Yes. Others believe that we're in this era of spiky intelligence and we
need sort of, you know, more broad intelligence before we can get to true general intelligence.
But going forward, Microsoft's IP rights for both models and products are extended through 2032
and now includes models post-AGI with appropriate safety guardrails.
That feels significant.
Microsoft's IP rights to research defined as the confidential methods used in the development of models and systems
will remain until either the expert panel verifies AGI or through 2030, whichever is first.
Research IP includes, for example, models intended for internal deployment or research only.
Beyond that, research IP does not include model architecture,
model weights, inference code, fine-tuning code, or any IP related to data center hardware
and software, and Microsoft retains these non-research rights. Microsoft IP rights now
exclude OpenAI's consumer hardware. That's notable. They need to start figuring out carve-outs.
And OpenAI can now jointly develop some products with third parties. API products
developed with third parties will be exclusive to Azure. Non-API products may be served on
any cloud provider. So again,
if you're just joining us,
Satya Nadella will be joining us in five minutes to break all of this down live on
TBPN. For right now,
we are setting the table with some analysis and looking through the details of the
story that emerged today.
Yep. So Microsoft can now independently pursue AGI alone or in partnership with third
parties. If Microsoft uses OpenAI's IP to develop AGI prior to AGI being declared,
the models will be subject to compute thresholds.
Those thresholds are significantly larger than the size of systems used to train leading models today.
The revenue share agreement remains until the expert panel verifies AGI,
but the payments will be made over a longer period of time.
OpenAI has contracted to purchase an incremental $250 billion of Azure services,
and Microsoft will no longer have a right of first refusal to be OpenAI's compute partner.
Again, like, that $250 billion number, you know —
it's certainly not the biggest number we've heard, but a quarter of a trillion is nothing to scoff at.
Yep.
And OpenAI can now provide API access to U.S. government national security customers regardless of the cloud provider.
And finally, OpenAI is now able to release open-weight models that meet requisite capability criteria.
Yeah.
So, again, I feel like on a number of these points, it feels like they are kicking the can down the road a little bit again.
Obviously, this was important to complete the conversion from the LLC to the public benefit
corporation, which presumably can go public.
But again, my question and my immediate thought is, how many of these things are going to
be critical to iron out before the IPO?
Is there going to be enough demand that it doesn't matter again in the same way that
certain investors, you know, our friend Josh over at Thrive and others, had incredible
conviction to be, you know, deploying again and again and again into OpenAI's for-profit
subsidiary, even when there was so much uncertainty around the structure, right?
Yeah. A big open question is how Microsoft's internal AI research efforts evolve now that this
is a little bit more concrete. It will be very interesting to see. Yeah — if Microsoft uses OpenAI's IP
to develop AGI. Prior to AGI being declared. We'd love that. OpenAI's
dribbling, dribbling towards the basket.
Satya comes in with...
Who knows? Who knows?
If you're just joining, we'll be live with Satya Nadella in one minute and 17 seconds.
There we go.
According to our timer.
In the meantime, let me tell you about cognition.
They're the makers of Devin, the AI software engineer.
Crush your backlog with your personal AI engineering team.
So...
Yeah, so...
Yes.
Again, there are so many of these points that leave kind of
open questions. They'll need to be effectively renegotiated again and again down the road,
but at least this provides a pathway, and it's no longer the elephant in the room.
Yeah, it does feel like the cap table is getting slightly cleaner.
And you're moving towards something where — I mean, if you look at the history of the Microsoft deal with Apple,
they had a position, they eventually rotated out of that, sold out of that, because there's this question of, you know,
if you're the CEO of Microsoft, if you're Satya Nadella,
should you be a venture capitalist as well?
Like, oftentimes big tech companies do make investments, minority investments.
Yeah.
Sometimes they make whole-company acquisitions, but is that...?
It feels like ultimately this comes down to feeling like potentially one of the
greatest corporate venture investments of all time.
And so I'm not coming up with any that are better.
It's pretty good in terms of not just owning a massive piece of a generational company.
Yeah. And a future, you know, potentially what looks like a future, you know, a hyperscaler.
Yeah.
But also giving your business just this incredible strategic advantage in the race broadly.
So, yeah. To get any more impactful, you need to move over into the whole-company acquisition world.
You have to talk about Instagram. But even that is tough. But it's a very different, very different deal structure.
Yeah. And something that is just down the fairway:
buy the entire company, buy the entire product, as opposed to making this bizarro minority investment
and then growing from there.
Yeah, it's worth noting, too, that OpenAI and Microsoft's Office are already on a collision course,
right?
Like, you can imagine that over time, these products, you know, overlap today.
You can use Copilot for a lot of things that you can use ChatGPT for.
And that's only going to grow.
There's still going to be this
massive tension there.
Yep.
And we'll be covering it a lot.
Well, let me tell you about Figma.com.
Think bigger, build faster.
Figma helps design and development teams
build great products together.
And I believe we're ready
for our first guest of the show,
Satya Nadella, the CEO of Microsoft.
Welcome to the show, Satya.
Great to see you.
Great to see you and the team.
Thank you so much for doing this.
Please, there were a ton of bullet points in the announcement today.
Can you just zoom out and explain it to me like I'm five?
What actually changed?
Because you've been in partnership with Open AI for six years now,
but this feels like an important moment.
What happened?
Yeah, look, first of all, you said it right:
it's an important moment, and the story continues.
Sure.
But the story actually got started even before the OpenAI one.
I've known Sam for a long time.
since his first company, pre-YC days.
All the way back then, wow.
That's right.
The double polo days.
I remember him being at WWDC presenting in the double polo.
It's iconic, but I didn't realize
that you were doing business with him back then.
that you were doing business with him back then.
And it started, I think, in 2016.
In fact, we were.
Azure was the first cloud provider.
That's right.
When Open AI got started.
In fact, I think Elon sent me the email asking for Azure credits.
asking for Azure credits.
So that's how it got started.
And they're saying, hey, I have this nonprofit.
Yeah, come on.
That's right.
And they were obviously into reinforcement learning.
They were doing Doda and all of that stuff.
And then at some point it reached where I think they went off.
I think they went to other clouds.
And so I lost touch for a while.
And then I think in 2019, Sam came and talked about sort of,
hey, we're going to really, we think the scaling stuff works.
I forget now, it's a little hazy
when I read the paper.
In fact, the paper was written by Dario,
Ilya, the scaling laws paper,
and the thing that, you know,
Microsoft has been obsessed with since Bill started
Microsoft Research in '95 is natural language.
It's just, you know, being the thing,
we're an office company,
we're a knowledge work company,
and so we've always thought about text
and AI has applied to text in natural language.
So you could say it was the prepared
mind. When Sam said, hey, we're going to go take a run at this, you ought to be in on it,
that's sort of what led to really coming together on this. Yeah, it was a research lab,
it was a nonprofit. As opposed to if they had stayed on the previous tech-tree path —
they were doing some robotics and some video game stuff, and Dota 2 —
that doesn't jump out as immediately relevant to GitHub. It's interesting you bring that up,
because obviously RL has come back in a big way in relation to sort of
these large language models.
But yeah, it's probably, you know, this is the funky, path-dependent way things happen.
Because I don't think I would have gone in full-on to say, hey, let's go, you know, partner
with these guys, build a computer that scales, if it's not natural language.
I'm glad we started there, and all of it now is improving the quality of these models, for
sure.
How big of a deal was writing a $1 billion check back then?
I mean, it's a big company, Microsoft.
We think its revenue is
around a billion dollars a business day.
Was it one day of work from you?
Or was it, you know, weeks of negotiation, seriously?
Did you build memos?
Like, were you building Excel sheets?
Like, what were you thinking?
Even at Microsoft, you kind of have to get a board approval.
Just to throw a billion dollars out there.
But, you know, I must say it was not that hard to convince anyone that this is an important
area and it's going to be risky.
Like, I mean, in retrospect, who would have thought? I didn't put in
that, you know, billion dollars saying, oh, yeah, this is going to be a 100-bagger.
And I mean, that's not like what was going through our head.
This is a partnership that, I mean, by the way, remember, this was a non-profit, right?
And I think, you know, Bill even said, yeah, you've got to burn this billion dollars, right?
And, yeah, we kind of had a little bit of high-risk tolerance.
And we said, we want to go and give this a shot.
And then, of course, subsequently, you know, here we are at GitHub Universe.
In fact, this is probably the place where that billion became 10 billion, because '21 is when I first saw GitHub Copilot.
And I said, man, this is working.
This was a year before the release.
No, actually, in fact, I was fact-checking my thing.
I think GitHub Copilot launched in '21, ChatGPT in '22.
Sure.
And if I remember right, '23 is the blip, the November blip with OpenAI, and then it has been smooth since the blip.
That's the nicest way you could put that. That's fantastic. Yeah. So that makes sense. Obviously, it's been a wild ride up and down. You started with just natural language: let's predict the next word. Now it's: let's rewrite the entire global economy. How do you think about the territory that you at Microsoft have kind of claimed? And what do you want to hold on to? What's most important to map out, where the edges of Microsoft's territory are?
And then where founders and other business people can build in partnership with you.
Yeah.
So look, I always say Microsoft's a, you know, a platform company and a partner company.
And we define platforms as where the value captured around the platform is higher than by the platform.
That's kind of who we are.
And that's, you know, GitHub is a great place.
So if you think about even at GitHub universe today, it's interesting, right?
As you said, we first started by saying, hey, code completions.
Yeah. Then we said, let's chat. Instead of getting distracted, stay in the flow of coding; you bring the information to the flow. And that chat became the thing. Then we said, agent mode. Yeah. Then we said, hey, let's have autonomous agents. And now we have multiple autonomous agents working across all these different branches, bringing the PRs to me. And so this entire conference is about what we call Agent HQ, and mission control, where you have Codex, you have Claude, you have Grok, every model you want, each working across their own branches. Then you have the IDE, so you can bring up VS Code, where you can do the diff on each branch's output. And then the story goes on. So
therefore, to me, building a system that really brings the innovation across the ecosystem
into some kind of an organizing layer is what platform companies do well. Have you seen anyone here
that you think might be working on AGI? Do you have a personal definition for AGI? Yeah — if you look
at pretty much all the deal points,
it keeps coming back to this moment when AGI will be declared, right?
There'll be a panel of experts.
Maybe that panel is still being decided, but a lot of experts today have differing definitions.
So I wanted to get a better sense of how you imagine that kind of decision-making
process will go when the time comes.
We'll put you guys on the panel.
We've been doing evaluations on AGI, specifically around comedy.
Can it be funny?
Yeah, yeah, yeah.
Comedy evals.
Look, I think, first of all, one of the reasons why, quite frankly, both Sam and I agree on this is it's become a bit of a nonsensical word.
I mean, it's just changing and everybody defines it differently.
And we now know what the issue is, right?
Everybody describes even the intelligence we have, which has been exceptional, as jagged.
Yep, yeah.
Spiky.
Yeah.
Spiky intelligence.
All right.
And so if you sort of say, well, we have spiky intelligence.
And in fact, I think Andrej Karpathy's point in one of the podcasts recently, which is a good one, is that even if you're having exceptional — you know, let's call it exponential — growth in one of the spikes...
Sure.
It's not as if the jags are getting worked out.
That's the nines problem.
Yeah.
That is, each nine is maybe a linear, even sublinear, problem.
Yep.
Right, in terms of rate of progress.
So first step to me is even to get to broad intelligence, forget sort of general intelligence.
We've got to get rid of these jagged problems, and that, I think, is the first place to start.
So if you ask me, I think what may happen is we will achieve more robustness, let's call it that, for different systems, right?
So coding is a good one.
I think the entire goal with GitHub and GitHub mission control and Agent HQ is can I, just like how I use compilers, can I use agents to generate better coding artifacts.
Right? Today, I mean, I sometimes think vibe coding is sort of a slightly
unfortunate term, because it does lead to a lot of slop, right? I mean, it's kind of like,
sure, you code away, and then you lose control of the project, and you've got to put everything
back together. And so, yeah, even traditional knowledge work that's
happening in the office suite — it's not like you want the biggest Excel model, right? You want
the one. But even Excel, it's a classic one. In fact, one of the other things that's happened is
even when I see Microsoft 365 copilot,
man, the amount, just like right now,
the number of repos on GitHub is exploding.
Yeah.
The other thing is everybody's generating PowerPoint
and slide decks and sort of Excel models.
Yeah.
The problem with Excel models is,
you know when intelligence has created an Excel model.
I mean, it is like a thing of beauty:
the assumptions are clear, the formulas are there,
even the formatting and the subtotals, all the stuff.
And you can iterate on it.
Yeah, it tells a story.
It's not like a one shot.
And when I want to change it, I can't go back and zero-shot the entire thing.
So, in fact, the agent mode in Excel, which I like is it understands office.js.
It puts the formulas.
I can then iterate like I iterate on GitHub copilot.
So, if you ask me, first, how do you get rid of this jagged intelligence problem? You build a great knowledge-work system that is multi-agent, multi-model, multi-form-factor.
Get to a great benchmark in an eval where you can trust it at two nines, three nines, four nines.
And until you achieve that, you're not going to be able to move and say, hey,
we have anything like general intelligence.
Yeah, it feels like there was a lot of uncertainty in the tech community around whether all this superintelligence is going to come out of the lab tomorrow and there's going to be this fast takeoff. Now it feels like there's more opportunity, both for Microsoft to add those nines to products that you have, and also for entrepreneurs who are building products, maybe on top of Microsoft. How are you thinking about the entrepreneurial opportunity?
It's a very good point. Because at some level, if you sort of buy the argument I made, there's a lot more invention
to happen. By the way, the other thing that we should also talk about: today, all the conventional wisdom is, oh, intelligence is just simple, straightforward, a log of compute, so throw more compute at it. More energy. Who the heck knows, man? One of these researchers comes out of here and says, you know what, I've got it, and it requires less compute than any of you guys have thought about. Like, that's a game changer.
So, yeah, oh, by the way,
we're all, like, excited about reinforcement learning.
Guess what?
Pre-training is a more efficient form of training because you can amortize it.
So there's a, I think pre-training will have new breakthroughs,
mid-training will have new breakthroughs,
RL will continue to improve.
We will then have to add more innovation to it.
And by the way, this is another part of this partnership, which is, I'm glad, you know, OpenAI is continuing to do great work. Jakub, Mark, others are great, and we'll partner with them and we'll continue to do so. And Mustafa has built a world-class team, right? You know, Karen, Amar, Landau, these are, I mean, we now have three core models, whether it's speech or image or text, and we're going to continually add to them. So we'll ride our own horse as well.
Yeah, how are you thinking about the interplay between OpenAI and what you do internally at Microsoft?
So our simple...
Are there certain things where you can take your foot off the gas, because you're like, actually, OpenAI's got that handled? Or do you want a duopoly, like, actually, we're going to fight it out on everything?
I'm much more, again, my mindset is all platform, man. Like, on Azure: do you run Windows? Yeah. Do you run Linux? Yeah. Do you run SQL Server? Yeah. Do you love Postgres, .NET, Java? Hey, I'm happy. With OpenAI, I would love to have Anthropic, MAI, Grok, anyone. If Google wants to put Gemini on Azure, please do.
So what is that like culturally? What does it mean
for the next Satya Nadella, somebody who's working their way up in Microsoft? Do they need to be, okay, I'm building something internally, but my company isn't going to favor me; I need to fight it out with all my competitors across the board?
We all grew up in that culture. It doesn't mean we won't bring our pieces together.
Sure.
We are going to innovate across these seams.
Yeah.
But as a platform company, you kind of want to support everything.
Sure.
And, like, most people forget: Office was born on the Mac before Windows was. Yeah, if you don't give people choice, developers will churn, right? They'll find other platforms. One of the concepts Bill had when he started Microsoft was, hey, we're a software factory. We love all software categories and we're just going to go create software. And so, to me, we definitely want to have that same attitude to innovation. We definitely need to stitch our stuff together so that our things come together to solve bigger and greater problems, but that doesn't mean we can't create opportunities.
And importantly, the other thing I grew up with: like, for example, you know, building SQL Server with SAP. We've always partnered. Or Intel and Microsoft, right? I mean, we wouldn't have the PC industry without it. You know, it's called the Grove-Gates model.
Yeah.
That's a good model to create value.
Do you think that there's increasing returns?
This is going to sound like a loaded question, but I promise you it's not.
There's increasing returns right now to being a deals guy, or innovating on the deal-structuring side.
And what I mean by that is there's all these difficult problems
to solve with energy and data centers.
And it feels like there's innovation in tech
that we normally think of as like the code or the algorithm
or the design of the system.
But then there's also this difficulty sometimes
to just marshal the resources.
And is that like a new phenomenon?
Has that always been true?
Is there, if somebody's pursuing a career in tech, is becoming a great deals guy, or dealmaker, an important path now?
Yeah, I was just thinking about it, man, which is, yeah, you have this great investment and it has great returns, and no carry: all the value goes to my shareholders. That's awesome. Maybe we should start a venture fund.
You might as well, yeah.
I think the thing you're touching on
is something that actually platform companies
should think about, which is what's the ecosystem
upstream and downstream?
Yeah.
Right? To your point, right now we have to, as an industry... Like, the reality is, let's take power, right? Which is, if you sort of say intelligence is about tokens per dollar per watt, we've got to get efficient on all of them.
Yeah.
In order to get more efficient, you've got to really think about, even in our own industry, the token factory itself getting an order of magnitude better.
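The "tokens per dollar per watt" framing can be turned into a toy calculation. A sketch, with entirely hypothetical throughput, price, and power numbers:

```python
# "Tokens per dollar per watt" as a toy efficiency calculation.
# All numbers below are hypothetical, for illustration only.

def tokens_per_dollar(tokens_per_sec: float, cost_per_hour: float) -> float:
    """Tokens served for each dollar of machine time."""
    return tokens_per_sec * 3600 / cost_per_hour

def tokens_per_joule(tokens_per_sec: float, power_watts: float) -> float:
    """Watts are joules per second, so tokens/sec divided by watts = tokens/joule."""
    return tokens_per_sec / power_watts

# A hypothetical inference server: 10k tokens/sec, $2/hour, 700 W draw.
tps, cost_per_hr, watts = 10_000, 2.0, 700
print(tokens_per_dollar(tps, cost_per_hr))     # 18000000.0
print(round(tokens_per_joule(tps, watts), 1))  # 14.3
```

Efficiency gains on any axis (throughput up, price down, power down) compound into the same metric, which is why Nadella frames it as one number to optimize.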
This is like, again, a Renaissance time for systems architecture.
And so, you know, obviously, Nvidia is doing great work. AMD is doing stuff, Broadcom is doing stuff. All of us are doing great work to just push that. Then the next barrier is going to be, man, can we generate energy faster, can we build faster, can we build the cooling? I mean, I now know more about campus cooling systems than I ever thought I would, right? And these are all choke points.
How much of that do you want to live within Microsoft, versus you just want to be a buyer, and all the different power players out there are building nuclear, wind, solar, and you're just dealing with it at a higher level of infrastructure?
Sure, for the vast majority of this infrastructure now. You know, if you think back on it, our data centers were mostly built by us, and we leased some, because no one was in the business of building at the scale at which we were building. But now I think there's going to be opportunities for us to lease.
Yeah.
And there's going to be significant competition amongst builders, so therefore the lease prices, also.
Yeah.
Do you think you're more ROI-focused than others
that are throwing around big numbers?
I mean, I'm always focused on long-term return.
Well, and we're at a time right now
where there's people that have come out
and effectively said, I don't actually care about ROI.
I just care about winning, right?
And it seems from your...
Yeah, a couple years ago, there was the mood of, like, this might be the last invention. If you always have someone else willing to give you the billion dollars or the $10 billion, you can always be about, I'm out winning.
Sure, yeah.
There was a ton.
But at some point, that party ends and everybody needs to sort of have a plan. In that context, in these platform shifts, to be short-term oriented doesn't help at all, right? Because you kind of have, you know, I always say, be long before it's conventional wisdom. I mean, if you look back, you asked how we put in the billion, and the reality is we put in the 10 billion. That's right. Or the 13 and a half, which was fully committed before it became a thing. And remember, that was all done before ChatGPT became a thing.
And so, yeah, I think there's a general consensus now that it feels very possible to predict, like, a year out, two years out, and then 10 years out is extremely fuzzy. What's your view on that, given that, looking back to the original OpenAI investment and the original partnership, it seems like you've had really good, like, six-year foresight, the ability to invest against a six-year time horizon? How are you thinking about managing over, you know, the next decade?
Yeah, I mean, I think, you know, to me, one of the things about tech is, as a percentage of GDP, it's around 4 or 5%.
Yeah.
And if you ask me, five years from now, 10 years from now, is that percentage going to be higher or lower? I think the answer is pretty straightforward: it's going to be higher. It's just a question of, is it going to be 10 or 15?
So why is that? Because the rest of the pie, the rest of the GDP, would have grown faster. So that's why I always go back to it. At the end of the day, the only rate limiter here is the overall economic growth and the factors of input to it. So tech is an input, and I think AI and everything that it entails is going to be a core driver. And some of it will come from just this intelligence and its sort of continual march
of capability, but it'll also come from, I'll call it, great engineering and product making
around it. Like, when I look at GitHub Copilot today with Agent HQ and what have you,
that's great, because right now I'm inundated with multiple models, and everything is slightly
different, except I have one repo, and I want all of these agents to come work on all of my
repo in different branches. So you need great product making to bring more coherence to the chaos, and that, I think, is going to be the big difference maker.
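The GDP-share arithmetic from the start of this answer can be sketched as a toy model: if tech compounds faster than the rest of the economy, its share drifts up from roughly 5% toward 10%+. The growth rates below are hypothetical:

```python
# Toy model of tech's share of GDP rising when tech grows faster than
# the rest of the economy. Both growth rates are hypothetical.

def tech_share_after(years: int, tech_share: float,
                     tech_growth: float, rest_growth: float) -> float:
    """Tech's share of GDP after compounding both sectors separately."""
    tech = tech_share * (1 + tech_growth) ** years
    rest = (1 - tech_share) * (1 + rest_growth) ** years
    return tech / (tech + rest)

# Start at 5% of GDP; tech compounds at 12%/yr, the rest at 3%/yr:
print(f"{tech_share_after(10, 0.05, 0.12, 0.03):.3f}")   # ~0.108
```

Under these (illustrative) rates, a decade roughly doubles tech's share, which is the "is it going to be 10 or 15" question in miniature.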
I was talking to Eric Glyman at Ramp, who makes the show possible, of course.
And he had a question about how you, like, what advice you would give to someone running a
decacorn, thousand plus employees in this age of spiky intelligence where there is the
possibility that tools are going to get better very rapidly.
And maybe you don't want to scale up too fast and then have to do layoffs or retraining.
Like you run a huge organization.
How do you think about managing human capital in what feels like an uncertain time?
Does it feel more uncertain to you now than it did 10 years ago?
It's a great point. Eric's a great founder. I know him well, and they're doing some unbelievable work. In fact, whenever I've talked to him, I learn from him, even how he's rapidly changing with the agents they have built. So to some degree, whether it's Microsoft or whether it's Ramp, I think the key is learning the new production function.
So when I look back at Microsoft, I feel like, hey, look, platform shifts, we've navigated them. I joined Microsoft in '92, when our existential competitor was Novell, right? And here we are. The bottom line is, over the years, we've navigated many platform shifts.
Yeah.
We've also navigated tough business model shifts, right?
When you suddenly have, you know, a 98, 99% gross margin server business, and you move to the cloud, and you don't even know, man, is there a margin here?
Yet you have to make the shift and figure it out.
This one, interestingly enough, is both a tech shift and a business model shift, because this is the first time you have a marginal cost of software. It's not like the COGS of the SaaS world, but true marginal cost. And three, the way you produce your artifact, your software, is changing. So the product development process is completely getting ripped and replaced. And that matters, whether it's for Ramp or for us.
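The marginal-cost point can be made concrete with a toy gross-margin comparison between classic software and AI inference. All prices and costs below are hypothetical:

```python
# Toy comparison of unit economics: classic licensed software versus
# AI inference with real per-unit compute cost. Numbers are hypothetical.

def gross_margin(price: float, marginal_cost: float) -> float:
    """Gross margin as a fraction of price."""
    return (price - marginal_cost) / price

# Classic server license: near-zero marginal cost per seat.
classic = gross_margin(price=100.0, marginal_cost=2.0)   # 0.98

# AI product: every request burns GPU time, so COGS scales with usage.
ai = gross_margin(price=100.0, marginal_cost=60.0)       # 0.40

print(f"{classic:.2f} {ai:.2f}")
```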
Even the competitive dynamic, too, because you have people that can say, hey, we can build this product in two months that previously would have taken us 12 months. Why don't we enter that category?
And it's kind of like rewiring yourself, right?
Unlearning is the hardest part.
Learning is easy at times.
If you have to unlearn and learn,
it's much harder.
So to me, that I think is what all of us have to do.
I mean, it's funny. I talked to a bunch of student developers right here. It is the first cohort of developers who grew up with GitHub Copilot as standard issue.
A completely different environment.
There was a world before GitHub Copilot? Crazy. Like, I don't want to live in it. It creates a completely different structure.
On the topic of changing business models, shifting your business model: it seems like the console wars are over. Take me through the journey. You're a peacetime CEO now, a peacetime CEO. The war's over. But take me through the evolution of the business model shift on the gaming side of the business. It's one of the most interesting pieces of Microsoft.
I mean, I think you've got to remember. In fact, Flight Simulator, I think, was one of the first products Microsoft built. Our dev tools were first; Flight Simulator was second.
That says so much about the culture.
As soon as you gave the developers
the ability to write code, they were like,
let's make a game. It's amazing.
And so, to me, remember, the biggest gaming business is the Windows business. To us, gaming is on Windows.
Yeah.
And of course, Steam has built a massive marketplace on top of it and done a very successful
job.
So the way we are thinking about gaming is, first of all, we're now the largest publisher after the Activision acquisition. So, therefore, we want to be a fantastic publisher, with a similar approach to what we did with Office:
We're going to be everywhere in every platform.
So whether it's consoles, whether it's the PC, whether it's mobile, whether it's cloud gaming, or the TV, we just want to make sure the games are being enjoyed by gamers everywhere.
Yeah.
Second, we also want to do innovative work on the systems side, on the console and on the PC.
Yeah.
And, you know, it's kind of funny that people think about the console and the PC as two different things. We built the console because we wanted to build a better PC, one which could then perform for gaming.
Yeah.
And so I kind of want to revisit some of that conventional wisdom.
But at the end of the day, console has an experience that is unparalleled.
It delivers performance that's unparalleled.
That pushes, I think, the system forward.
So I'm really looking forward to the next console, the next PC gaming.
But most importantly, on the gaming business model, we have to invent maybe some new interactive media as well.
Because after all, the gaming's competition is not other gaming.
Gaming's competition is short-form video.
And so we as an industry have to continue to innovate: how we produce, what we produce, how we think about distribution.
The economic model, right?
Best way to innovate is to have good margins.
Yeah, because that's the way you can fund.
It's so interesting you say gaming's competition is short-form video. It feels like the entire world's competition is short-form video.
Yeah.
I mean, we've heard this theme a while ago. It just comes up again and again with public SaaS companies that are maybe a little bit more of a point solution, and they have to go through a business model transition, and that can be harder than a tech transition. And we hear about, oh, well, if you want to change your business model, maybe you want to be private. But is there some sort of advantage to being a hyperscaler, a $4 trillion company, where you can go and retool a piece of the business over here, change the business model, and have almost the privilege of, you know, not having shareholders come to you and beat you down about a slight shift to the business model in a subdivision?
I can't deny that, you know, the diversity of business models, the diversity of the portfolio that Microsoft has, has been helpful. But that said, I don't think you can take that and say somehow you can make it if you don't reinvent yourself. So I think what happens in tech, unfortunately, is that when these shifts happen, whether you like it or not, you have to first be relevant. It doesn't matter what the business model was. The business model may be, like, hey, I had whatever, 90% margins. You are going to, at best, have 10%, but you have to jump all in, because even that 90 is going to zero.
And so given the binary nature, you've got to make it to the other side.
But then the category economics matter, because you can't sustain long-term innovation if there are no category economics.
I mean, hyperscale is a great one.
In fact, the best day in the hyperscale business was the day Amazon announced their operating margins.
Oh, that was a big one.
Yeah, you know, because that's when everybody knew: the hyperscale business is an unbelievable business.
It's a commodity, but at scale, nothing is a commodity.
And so to me, that is kind of going to be the key here, even SaaS applications.
Quickly, unpack why that was so good for you again.
Just because the market recognized that we were in the same business, and it was fantastic. That is one. And more importantly, it was much more expansive, right? I mean, think about our server business. It's super profitable.
Yeah.
Except it was one-tenth the size, if I look at it compared to Azure. So we, like, we sold a few servers. But man, we sell a lot of cloud VMs and containers.
Sure.
Who would have thought how expansive the cloud consumption model was going to be, in terms of people being able to sort of...
It's kind of the Jevons paradox that really played out in a massive way.
Yeah.
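The Jevons paradox dynamic mentioned here can be sketched with a constant-elasticity demand curve: when cost per token falls and demand is sufficiently elastic, total consumption (and even total spend) rises. The elasticity and cost numbers are hypothetical:

```python
# Jevons paradox sketch: if cost per unit falls and demand is
# price-elastic (|elasticity| > 1), total spend rises rather than
# falling. The elasticity and cost ratios below are hypothetical.

def consumption_multiplier(cost_ratio: float, elasticity: float) -> float:
    """Constant-elasticity demand: quantity scales as price ** elasticity."""
    return cost_ratio ** elasticity

# Cost per token halves; assume a demand elasticity of -1.5:
quantity = consumption_multiplier(0.5, -1.5)   # ~2.83x more tokens consumed
spend = 0.5 * quantity                         # ~1.41x more total spend
print(f"{quantity:.2f} {spend:.2f}")
```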
So I think on the business.
Well-timed Jevons paradox post, by the way, back during the DeepSeek moment.
Oh, yeah.
It was an important one.
Spot on with that analysis.
I wish we could keep going.
I think we have to wrap up.
We would love for you to hit this gong. Hit it for 27%.
Wow.
That's a good hit.
It's a very strong hit.
Hit it as big as possible.
Oh, there we go.
That is a fantastic signature.
Thank you so much.
Thank you for coming on.
Thank you for having me.
This was really great.
I've always, whenever these big news,
these big tech news things happen,
I always wish I could talk to the person who's making the news.
And now I get to.
And so what a wonderful conversation.
What a moment.
And what a CEO.
Yeah, what a moment in the tech world.
Well, thank you.
If you're new here and you tuned in just because of Satya Nadella, the CEO of Microsoft, live on TBPN, please follow us.
Leave us a comment.
Add us to your RSS feed.
We have a 15-minute version of the show called Diet TBPN.
You can hear both.
All the flavors.
All the flavor.
Only has zero calories.
Zero calories. You'll also hear ads from our sponsors, like Vanta. Automate compliance, manage risk, prove trust continuously. Vanta takes the manual work out of your security and compliance process and replaces it with continuous automation, whether you're pursuing your first framework or managing a complex program.
You could have kept going forever.
I know, absolutely. We both could have.
We should do a gigastream with Satya sometime. Just 12 hours straight.
It would be super easy to get that on his calendar. I'm sure there's a 12-hour block.
For sure. Somewhere out in, like, the 2030s that we could lock in.
But, I mean, there really is so much. I mean, that's the problem with these conglomerate CEOs. They've just got too many business lines.
You know, you could do it.
You could do a whole hour just on Xbox and Activision.
Yeah, we didn't even get to that. The thing that stands out to me is, when you see some of these other hyperscalers or players, their reaction time. Satya makes them look incredibly slow, right?
Oh, yeah.
He's been making these, like, sizable bets. Like, he's been seeing the future, and yet only this year have you had other players, who I won't directly name, deciding, like, okay, I want to get in the game now. It's like, what were you doing when Satya was in the kitchen cooking?
It's always fun.
What a wild day.
What else is on the timeline?
What other news should we bring the folks while we're here?
I'm seeing mostly people talking about 996 Porsches versus 996ing, working hard.
What else is driving the news cycle today?
Of course, OpenAI just did a live stream with Sam Altman and the head of research over there, talking about their side of the deal. All parties kind of aligned to, hey, we've got a clean cap table. Let's move forward.
PBC.
Yeah, which is
what Anthropic is as well, right?
Yep.
Microsoft now owns 27%, or a $135 billion stake, in OpenAI. And OpenAI is contracted to buy $250 billion of Azure services.
That's a lot of Azure.
Should make it easy to
underwrite future CAPEX
on the Azure side.
Earnings is tomorrow.
We'll be back in Los Angeles
at our,
at the TBPN Ultradome.
And we probably won't be live when earnings drop post-close, but we will be bringing you the news on Thursday, of course.
Meta
also reporting earnings
tomorrow, which will be notable. Yes.
We also have Alex, the product lead on Codex, but we're wiring him up, so...
Yeah, one second. Let me go through a post from SemiAnalysis. They have a greentext here that says: be me, Qualcomm. Time to enter the Nvidia AI chip market. Nvidia is making money. How hard can it be? Spend years developing AI200 chip.
Finally ready for big announcement. Make fancy slide deck. Put 768 gigabytes of memory on there. Sounds
big. 160 kilowatt power consumption. Sounds powerful. Add liquid cooling. Sounds cool. ohshit.jpg.
What about flops? Hmm. Decides just to not mention it. Also don't mention price or how many chips
per rack or actual benchmark numbers. Just vibes. Launch presentation.
Qualcomm AI200. It exists and uses electricity. Refuse to elaborate. Stock goes up 15%. That feeling when investors don't know what flops are either. My face when: greater than 10x with no baseline. Ships in 2026. AI250 ships in 2027. Still won't tell you the specs by then, probably. Low TCO, trust me, bro. Confidential computing. The performance is confidential. Unreal. I have more there. But let me first
tell you about Julius. What analysis do you want to run? Chat with your data and get expert
level insights in seconds. No, you know what's funny is that in the 2019 blog post from
OpenAI announcing the deal with Microsoft, they say Microsoft is investing $1 billion in OpenAI
to help us support building AGI. But specifically, we're partnering with Microsoft to develop
a hardware and software platform within Microsoft Azure. And so it feels like, I imagine what they mean by a hardware platform within Azure is just, like, a bunch of Nvidia GPUs at that moment in time.
Yeah.
But it does lead to these natural questions of, like, how deep do you go in the stack? And if you're Sam and you're OpenAI, and you're seeing that you're ultimately limited on cloud capacity, and then chips, and at first dollars, because there was plenty of Azure capacity; it's not like in 2019 they used all of it. But they needed the money, then they needed the data centers, then they needed the chips, and now they need electricity. It's just going deeper and deeper in the stack.
Yeah, the thing that's notable is just how married OpenAI and Microsoft are.
When you look at OpenAI's relationships with Nvidia, AMD, Broadcom, and these other players,
everybody in the chip space is sort of, like, in these complex dynamics, right? We got some backstory on the whole series of events between OpenAI and Nvidia and AMD and how that all came together. And it seems like everybody on the chip side is, I wouldn't say desperately, but, desperately, sort of competing for OpenAI's attention and resources. Meanwhile, Satya is able to just kind of sit back and ride this partnership out.
I wonder where Qualcomm has retraced to, by the way. It's now only up 8.5% over the past five days, so it dropped a little bit after the...
Is that on Public.com? Investing for those that take it seriously. They've got multi-asset investing, industry-leading yields. They're trusted by millions.
We've got to throw out a post here from Spooks, early, early friend of the show.
He's quoting a post from Samuel Hammond, who said,
Melatonin in the U.S. is sold in 5 milligram doses, while the effective dosage range is 0.3 to 3.3 milligrams. Americans essentially overdose
on melatonin by default for no good reason.
Spook says, okay, well, if there's
an easier way to soul speak with my ancestors
in the dream realm, please let me know.
I don't know if this is going to read at all on this particular stream. We don't even have your tweets up. What a banger.
Grokipedia is now live.
Do you think OpenAI will launch a Grokipedia competitor?
I clicked on a Grokipedia... yeah, yes. I clicked on a Grokipedia-like entry and I was like, oh wow, it's like a
pre-baked deep research report on something that I already would have wanted to search for.
It's actually effectively a great product.
I mean, in the same way that ChatGPT has disrupted search, deep research has disrupted Wikipedia.
And there's so many, so many prompts that I run that I'm like,
I shouldn't be like burning up the GPUs for this.
Yes, yes, yeah.
This should be stored in a database somewhere I should be able to access it.
The interesting thing is that, like, over time, OpenAI...
We have Alex joining us from Codex. Alex, welcome to the stream. Let me tell you about Fal while he hops on: the generative media platform for developers.
Nice to meet you. Welcome back to the show.
Thank you.
Congratulations.
Incredible event.
How many have you been to?
Give me a little read on the ground
of like what's the scale.
Is this the biggest ever?
Tell me a little bit about what's going on today.
So we just had an OpenAI Dev Day two weeks ago.
Yeah.
That was awesome.
It was massive.
Actually, I got COVID, like, the day before, so I was not there. So, yeah, this is actually the biggest event like this I've been to this year.
It's tough. With COVID now, you don't get the same level of, you don't get the same level of, like, sympathy.
It's like, okay, so you're saying...
I didn't think it was a thing anymore.
But anyways, here today, we announced a couple things.
We announced that Codex is coming natively to GitHub.
Okay.
And then we announced that today, actually, we're bringing Codex to Copilot in VS Code. Sorry, I'm going to mumble this.
Sure.
Copilot Pro Plus subscribers in VS Code can use Codex as part of this.
Yeah, okay.
Yeah, that's awesome.
Amazing.
Walk me through like the different flows and like the different actual like user journeys
because there's something very interesting about GitHub has the ability to even host pages.
Yeah.
And then Codex allows me to, from my phone, potentially, like, write a web app that then can be deployed. Is that at the point of actually, like, instantiating a web app on the fly that actually lives on the internet, that I can send to a friend?
Is there the beginnings?
Are you starting to see what the next era
of vibe coding might look like?
So totally.
I mean, so mostly codex is used by professional software engineers,
although we have a good amount of people
who aren't as familiar with coding using this as well.
But like, I think the best analogy
is to think of codex kind of like a human teammate, right?
Sure.
If we're working together, I could talk to you in Slack.
I could talk to you in GitHub.
I could text you.
I could come by your desk and we could, like, jam on something on your computer. It's kind of the same you, but you're present in all those tools. That's what we're trying to build Codex into: just, like, an AI software engineering teammate that works with you wherever you like building.
Sure, sure, sure, sure.
So, yeah, use it from your phone and, like, make some updates there. Maybe that turns into a PR you push into GitHub.
Yep.
You know, maybe you use Codex to review your PR in GitHub, and then you land the PR. Like, all those things, no matter where it is, it's just the same Codex agent.
Yeah.
Yeah.
What is the... is there some sort of, like, business model flow-through there? Like, you have to be subscribed to OpenAI, but then you also subscribe over at GitHub, and that's just, like, the default stack for a lot of people?
So this is actually kind of an interesting part of the deal. To use Codex today, the main way that most of our users use us is to have a ChatGPT account.
Of course.
You know, they're on Plus, Pro, or Enterprise, and then they can use Codex.
Yeah.
Now, as part of this deal, what we figured out with GitHub is how to partner so that if you have a Copilot Pro Plus account, you don't need to have a ChatGPT account to get the full power of Codex anyway. And when I say the full power, I mean you get to use our model and our model harness, which is kind of like the code that gives the model the tools it needs to run.
Yeah.
And so, you know, our goal in this is just to make Codex as ubiquitously available as possible. And so, yeah, there's no, like, flow-through there. You just need your Copilot account.
Somewhere out there, there's somebody that just started CS in college, and they're only going to know a life where they're just running Codex in GitHub, just naturally.
Yeah.
Well, I mean, look, so actually, it's interesting, right?
Like, if you think of GitHub,
it, GitHub does a lot of things, right?
But at least, like, where I personally spend the most time in GitHub
is, like, actually collaborating with the other people
contributing to the code base.
Right.
Yeah.
And so, like, I actually think it's quite unlikely
that you would only spend your time doing that type of activity.
You're also going to spend a ton of time
in tools like VS Code or, like, the Codex CLI or IDE extension, because that's where you're doing your work
yourself, right? Like, yeah. Again, like, I think
the human teammate analogy kind of
works pretty well here. It's like most
of us, 90% of the work we're doing is kind of
at our desk. We're not, like, having meetings, hopefully not having meetings, like, 90%
of the time as a software engineer, right?
So probably that, you know, that
person who's going to become an engineer but is currently in college, they'll spend a lot of their time with superpowers, but working at their computer, doing stuff individually, like commanding fleets of agents, right? And then
they'll spend some of their time collaborating with their team,
like obviously quite a lot of their time,
but not all of it.
I don't think the sort of individual productivity is going away.
Yeah.
One kind of follow-up question: how important is the metric of how long Codex is spending working? Is that something that you guys are, like, explicitly tracking and trying to scale, because it just means that it's, like, creating more value?
Two hours.
We're tracking, like, the METR thing. It's about, like, 200 years. I click it and I come back... my ancestors come.
And they watch it do stuff. That's... I want that. Give me the 200-year agent.
So, like, this is interesting. I actually shared at the keynote today that one of the leads here on the Codex team ran Codex for over 60 hours on a single, incredibly hard task. And that's crazy, and we're excited about the capability of that from our perspective.
That blows my mind.
It means the model is actually able to do like very productive work for a long time.
Yeah.
And it means the model and the harness are working together.
Sure.
Manage the context window.
Yeah.
It's more than one context length. However, it's not like we have an eval that's, like, how long did the model work, and let's
maximize that. That's much more sort of a lagging indicator of like the intelligence capability
of the model. What we're really trying to drive is like how smart is the model? Right. And how
easy is it to work with? And then it just turns out that as you make the model smarter and
smarter, it can take on, like, longer and longer tasks. Oh, what do you
think about the user experience if you're setting Codex off to go work for 60 hours?
There's some risk that, you know, it's doing things maybe incorrectly and you come back after
60 hours and you're like, I just, I kind of just blew 60 hours that I could have been doing this
myself.
Sure.
I mean, in that 60 hours, you could watch all of Game of Thrones.
That's possible.
Or you're watching Subway Surfers here.
We got a video.
No, but my question is like the workflow that feels like the most natural, give it, using the teammate analogy, is you just get a ping and it's like, hey, can you double check this before I continue?
Is that a workflow that you're thinking about? Like, you're a developer, you get a push notification, you're at the gym, and you're like, yeah, it looks good. Yeah, so I think there's two things you said that make a lot of sense to me. One is just, like, steerability. That's something we're working on. You want to be able to steer the model: short task, long task, whatever, right? Yeah. The other thing is kind of, like, proactivity, right? Like, again, when you hire someone onto your team, maybe at the beginning you're hanging out, you're collaborating directly. Then you start delegating small tasks. And at some point, you know, you will give them, like, a 60-hour task without, like,
specifically prompting every single detail, right?
Yeah.
And what you expect of, like, a good employee is that they know when to ask you questions, right?
Yeah.
And so it flips from, like, the bottleneck of my productivity is how frequently I'm able
to prompt an LLM to the bottleneck of my, of my productivity is actually kind of like how I
can structure the work.
Yep.
So that, like, independent agents or humans or whatever can, like, go do the work and then
ask me questions when they need to.
It's so important because in, uh, we're so used to now, like, trusting, uh,
a CRUD app, right?
Like, you can trust that you can put data in
and you're going to come back
and it's going to be there, right?
We have that faith.
And it feels like with agents as a product category broadly,
the agent, if you're building agents,
you need to be focusing on, like,
how do I develop trust with the user?
Yeah.
And it's earned, you know, again,
maybe it's like focusing on short-term tasks initially
and having that steerability.
So this, so if you think about it, like right now,
the place where most people are using coding agents
is to write code, right?
Like code gen.
And again, I keep coming back to the human analogy, but like, imagine you had a human
teammate and the only thing they can do is write code.
They can't read user feedback.
They're not in Slack.
If there's an outage, they're not going to see it.
Sure.
Right?
So, like, is that a good human teammate? Even if this is the smartest human teammate in the
world, like, no, right?
So, like, what we need to do to, like, build this trust is we need to, like, extend what
agents can, like, look at across the software development life cycle, right?
So they're, like, present in more of the team conversations and ideation and prioritization
and planning.
They're also, like, more and more capable at the code review stage. And actually, that's one of the recent product releases we shipped, Codex code review, and people are loving that. But more present at code review, more present at the deployment stage and, like, the code maintenance stage, aware of, like, what's going on in, like, your telemetry tools. Yeah. And, like, I think that's actually how you get to the trust. So some of this is, like, increasing model capability, but a lot of this is actually changing the form factor of, like, how these models are harnessed, so that they can, like, interact with more of what you need. So it feels like a couple years ago we were just trying to
predict the next token, and it was like, oh, wow, it can do poetry. Oh, wow, it can write code.
Like, this is amazing and very, like, undirected fundamental research. Now we're in the age
of spiky intelligence. Is there a feedback loop where you're actually trying to take feedback from,
okay, Karpathy says that the agents can't write, you know, nanoGPT, so let's go work on
that. Oh, we've seen it in the data that it's easy to write a, you know, website and Django and
Python, but if you're trying to refactor some obscure Fortran ivory trading system or
something, like, we're falling down on that. Let's actually put a team on this. Like, how does
feedback work now, and, like, what are the humans on the team doing? Or is it all just
zoom out and hope that the emergent property, like, solves it? Like, is it deus ex machina?
You know, this is open AI. So the way that we build is like constantly evolving. Like the codex team
was just like five engineers like a few months ago. And now we're like, I actually don't know.
I think we're like 25.
Okay.
So it is constantly evolving.
But what I can say is that our team is like possibly slightly unhealthily on social media,
just like reading all the feedback.
Sure.
We love it when people send feedback to us.
We're also starting to like try to get like a better understanding of like, okay, like how do
different like model snapshots and stuff compare.
Sure.
And so, yeah, we're starting to build up a more systematic way of doing it.
But I still think it's like quite early days.
Yeah.
And there's still a lot of taste involved.
Yeah.
It does feel like we're entering the era where,
if you see in the data that
maybe there's developers here that are like
I want to use codex for this specific thing
you're falling down on this
you're great at everything else but you're not with this
there is a world where you can actually go in
and address it. Actually,
I want to make sure you don't get the wrong signal
from social media
because, like, there's a certain type of person
who posts product feedback publicly,
and then, for every one person
that does that, there could be a number of people
that just churn and never say anything.
There could be a number of people that are just like super hungry power users.
And yet, you know, getting a push notification and somebody is saying like, you know,
they're DMing you or something that somebody said about Codex.
It's like you need to make sure that it doesn't, you know, consume like 100% of your like worldview
on how the product is actually resonating with users.
Totally.
So, yeah, let me answer that.
And actually, I want to quickly go back to the Fortran example as well.
But on that note, I think like, yeah, a lot of the feedback.
you're going to get on social media is like your power users.
Yep.
Right.
And so power users, I think are really good.
Like, I kind of put that in the like, how do we advance the capabilities?
And like, we are trying to advance capabilities.
So they feed us good feedback.
Like, what are they doing?
Like, what should the product do to make things easier for them?
Yeah.
And then at the same time, I think I like to balance that kind of with like, just like,
what is the first mile of the product?
Like, literally like the first 20 keystrokes.
Like, what are those?
Right.
And I think for me, I just like kind of focus on mostly these two extremes.
Yeah.
So we're constantly looking, like, okay, what is the new user's experience getting into the product?
And frankly, I think there's a ways to go with the product and lots to improve there.
On the sort of, like, the Fortran question, like, one of the
interesting things about building Codex in open source, which we're doing, is that we're seeing, like, larger enterprises with very bespoke needs who are excited about the capability. You know, maybe an engineer was using Codex on the side, wants to bring it to work, and notices, like, oh, in this codebase it's not doing as well as it does in, like, the codebases that OpenAI has, sure, you know, sees more. So, like, what we're actually seeing is, like, certain customers are starting to, like, fork the CLI or work with us to deploy it in a very specific way, where you can inject, like, you know, more instructions for the, like, company-specific language, like, into the context.
So if I'm a big enterprise and I have a million lines of Fortran, for whatever reason, and I, you know, authenticate with Codex, you're not training on my code, which is good for privacy, but maybe bad for performance.
So what you're saying is that there's a world where we could work together to figure out how to actually fine-tune the model or train the model or work.
work together to have the actual product work better on my codebase.
Yeah.
And I think like fine-tuning and training are like definitely levers that exist.
But I think even before that, there's like a ton of work you can do in the harness.
Yeah, in the harness.
Even in like in terms of like, you know, agents dot MD and just like how you tell the model what
it needs to know.
So at every layer of abstraction, there's something we can do.
There's there's opportunity to squeeze out extra performance before we go back to like,
hey, let's pre-train on your data.
Exactly.
Like, I think that's, like, important.
Yeah.
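As a rough illustration of the harness-level work being described, an AGENTS.md file is one place to tell the model what it needs to know before pre-training ever comes up. Every section and detail below is invented for illustration; this is a sketch of the idea, not an official schema:

```markdown
# AGENTS.md (illustrative sketch -- all contents are assumptions)

## Build and test
- Run the full test suite before proposing any change.

## Company-specific conventions
- All services log through the internal structured logger; no bare prints.
- Legacy Fortran modules live under `legacy/` and follow the in-house style guide.

## Glossary
- Expand internal acronyms and system names here so the agent does not guess.
```

A file like this sits in the repo and is injected into the agent's context, which is the "ton of work you can do in the harness" before fine-tuning.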
I think there's a giant capability overhang from models today.
And so, yeah, it's kind of, it's exciting.
We're seeing like a lot of pull from Enterprise now.
Yeah.
And it's exciting because we get to kind of like go deep, right?
And invest a lot of time on our side to figure out how to make it work even with, like,
the current model and the current harness.
The pull from Enterprise, do enterprises care about benchmarks or do they feel like
they've been hacked going back to the social media thing?
I think people were really into benchmarks and then pretty quickly everyone kind of assumed
that they were saturated, they were gameable.
But what are you hearing on the Enterprise side?
I think, yeah, I don't.
I don't hear a ton about benchmarks, to be completely honest.
I mean, maybe folks read them.
But I think it very quickly comes down to usage.
It's like the big SaaS pitch: our CRM is a split second faster than the competitor.
Like, you should use us instead, actually.
Yeah.
So mostly, I think what it comes down to, at least on a lot of things we're seeing,
well, it's actually kind of these two motions.
Sure.
One is like, it's just like they give the tooling to developers and it's like, do you like it?
Yeah.
Right.
And it's just like, what do developers like more?
Luckily, developers love codex.
That's great.
The other side is actually like, hey, like we have this like really big project that we want to do.
It's like a re-platforming, like a migration from one cloud provider to another or something like that.
And that's where we're actually, like, working more closely with enterprises to figure out, like, okay, let's actually set up, like, a meta harness, almost, for, like, Codex to do this work.
This started with, like, some customers. Like, Instacart runs Codex in one of these.
I don't know, they probably don't call it a meta harness.
Sure.
But like in basically a system that runs codex automatically to do stuff that, you know, that they want to do for code maintenance.
And then now we've been like, okay, this is actually a pretty good idea.
Like, we can go help customers who have these larger things that they want to do
to set up this kind of like workflow automation.
So there's kind of the two sides.
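A "system that runs Codex automatically" along the lines described above could look something like this thin wrapper. The `codex exec <prompt>` invocation, the task list, and the context-injection pattern are all illustrative assumptions about the CLI's non-interactive mode, not a documented integration; check against the version you have installed:

```python
import shlex

# Hypothetical "meta-harness": point the Codex CLI at a backlog of code
# maintenance tasks and run it non-interactively over each one.

MAINTENANCE_TASKS = [
    "Upgrade deprecated logging calls to the structured logger",
    "Delete feature flags that have been fully rolled out for 90+ days",
]

def build_codex_command(task: str, extra_context: str = "") -> list[str]:
    """Build the argv for one non-interactive Codex run, optionally
    injecting company-specific context ahead of the task itself."""
    prompt = task if not extra_context else f"{extra_context}\n\nTask: {task}"
    return ["codex", "exec", prompt]

def run_all(tasks: list[str]) -> None:
    for task in tasks:
        cmd = build_codex_command(task)
        print("would run:", shlex.join(cmd))
        # subprocess.run(cmd, check=True)  # enable where the CLI is installed

if __name__ == "__main__":
    run_all(MAINTENANCE_TASKS)
```

A scheduler (cron, CI) calling a script like this is all "workflow automation" needs to be at its simplest; the real work is in the prompts and the review of what comes back.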
Last question. Are you feeling GPU rich or GPU poor right now?
Any requests for Sam and Sarah?
We, we, so the Codex team is getting a ton of support.
You're feeling GPU rich?
I think the Codex team feels supported.
But I think OpenAI, like, we could definitely... like, things are growing, and more GPUs would be more good.
So, yeah, definitely far.
Okay.
So internally, GPU rich.
Yeah, I wouldn't say that.
That's probably overdoing it.
No, every time I'm in ChatGPT now, and I prompt it on something where it should really think, you know, think a little bit.
And it's like, you know, I had a request yesterday for, like, give me a list of, like, 50 companies that meet these criteria.
And it was like, I can't do that.
And I was like, yes, you can.
But then at that time, it was like, yeah, the GPUs were, like, with you guys, I'm sure.
And somewhere out there, there are millions of Codex sessions running.
Yeah, this is the, this is the, the endless product feedback.
But thank you so much for coming.
It's great.
It's great.
It's great.
Thanks for coming on, you guys.
How about it.
We have a few more people joining this show.
If you're tuning in for the first time, please subscribe, follow us on YouTube or add
us to your Spotify.
And before we bring on our next guest: we're on LinkedIn.
Don't forget about LinkedIn, yes.
Honestly, honestly, I know a lot of you who are listening on another platform
do not follow us on LinkedIn.
Head over there.
We are actively hiring someone.
We just hired someone.
We're working on ramping up our LinkedIn presence.
Very excited for that.
We're also excited to tell you about TurboPuffer.
Search every byte.
Serverless vector and full-text search,
built from first principles on object storage.
Fast, 10x cheaper.
Thank you.
Next up, we have Kyle, COO of GitHub.
Very excited.
Very excited.
This is where we are wiring him up.
He'll be coming on in a second.
Next, we'll have Jay.
EVP of Core AI.
I talked to Jay a couple months ago at this point.
I was in the back of a car.
It was kind of hard to bring, like, the level of enthusiasm that I had,
but I'm very excited to talk to him.
Because a lot of the questions that I got when I asked people,
hey, we're going to Microsoft.
We're talking about their relationship with Open AI.
We want to hear what Microsoft's doing inside.
What is the Core AI team?
Where's that driving the business?
So we're excited for that.
And the next we'll have Jared Palmer, VP of Product, Core AI, and the SVP of GitHub.
And then we'll finish it off with Michael, founder and CEO of WorkOS.
Well, that's very exciting.
Excited about that as well.
Then we'll head back to the timeline.
Tomorrow on the show, we'll have to dive more into Grokipedia.
We've got to start using it, checking it out.
While we have time, there's an app launched by a product designer at Meta.
Did you see this?
An AI app that, if you can't afford a vacation... The Verge is saying an AI app will sell you pictures
of one. You upload images of yourself. I saw it. And then it creates vacation photos for you. So
I guess short the tourism industry. This was announced by a founder who just said like I wanted
to feel the feeling of like the warm and fuzzy vacation photos. And so I used AI to generate those.
It was pretty well received originally. But I think the way they're spinning it, the
narrative might be getting away from them. But we have Kyle from GitHub coming into the studio.
Hey, guys. Great to be here. Great theater. Great speakers. Thank you so much.
Thanks for stopping by the show. Yeah, what a day, beautiful event. Good weather, too. I know. It's a little rough
here when the weather is not great, but when it is, it's awesome. Give me a little background on you.
How'd you wind up here? How'd you wind up at Microsoft? Yeah, so I joined GitHub 12 years ago.
Before we had managers.
We call it open allocation now, but it was anarchy.
Nat Friedman joined in 2018 as part of the acquisition.
So, yeah, we were around 140 employees when I joined back then.
Wow.
And do you even think about employee count in GitHub now because it's so merged into Microsoft,
but it's still its own brand?
Yeah, I mean, we have over 3,000 employees that work full-time on GitHub.
Then obviously, we partner with Microsoft.
Microsoft teams do a lot of, you know, the AI model hosting, training, and so on and so forth.
On sort of a meta question, I mean, AI and, you know, AI can do so much, how are you thinking
about scaling that team over the next 10 years? Is it harder to forecast, like, human capital
allocation in the age of AI? Yeah, I mean, part of the problem is that there's places where
AI is, like, incredibly helpful, like in software, I think, for sure. And there's a ton of places
that AI hasn't hit, you know? I mean, there's, like, what we talk about, like, IT, so many of the sort of business operations side, where AI hasn't proven to be as valuable to us yet. But I think over time it'll get there. Events like this take people, yeah, you know, full in, full out. And so there's just an imbalance, a little bit, of where software has been so great, and then the rest of what makes GitHub GitHub. Yes, these people, still. Yeah. Earlier with Satya, before we jumped on, we were catching up with him, and I said, my words, GitHub
Copilot is criminally under-hyped.
And I think the reason for that is, like,
you guys don't need to go out and raise a venture round
every couple months.
And, you know,
you're obviously well-capitalized,
but can you give us a sense of the scale
and kind of the growth of Copilot
over the last couple years?
Yeah, I mean, you know,
Copilot is used by, like, 80% of everyone that joins GitHub right now. It's 36 million, I believe, joined in the last year. In the first week, it's, like, one of the first things they do when they join. So we're definitely still hitting the world's developers with Copilot, yeah, all the time, like, every day. Now, these days, just like, I don't know, 10 years ago, devs are using whatever tool they want, and they're changing so quick. You've got to keep up, you've got to try all these tools. But we keep seeing folks using, you know, Copilot over here and then trying out a new tool, or using Copilot over here and finding this new flow. That's a big part of this, like, reopening up, like GitHub has done over and over. Yeah, let's bring us all together so you can collaborate in that single place, while you're going to pick whatever tool you're going to use. And that's cool.

That's part of being a platform. I was saying when I was on the air, it's like, if you want to be a platform, the more closed off you get, the more you're encouraging other people to go elsewhere, because the tools are changing so quickly. You just need to be able to give people that flexibility. Right, and we had Alex on from Codex, and that's a good example of it.

Yeah, I mean, this AI moment, it feels a little bit like before every app had an API, back in the day, because that wasn't the norm. And now we're in a kind of quasi-walled-garden moment where everyone's making their thing really, really great, you know, their model, their app, their service, whatever. But in order for all those tools and agents to actually be valuable, we have to interconnect them. So my hope is that, just in general, not just for software devs, we can go back to that platform-first approach, just, like, as an industry, because then each of our products will be more valuable for our customers, because we're not going to have to deal with,
because we're not going to have to deal with,
well, how do I actually place the grocery order?
Yeah.
Because there's not an API for that, you know?
What were the kind of key moments for you and understanding that AI would completely change software engineering?
Because when you, you know, we're just going back through like history,
even of Microsoft's investments in OpenAI, obviously because of the announcement today.
And it just feels like Satya had this incredible foresight, you know, in 2019, in the early 2020s that only now a lot of other CEOs are kind of reacting to.
But I want to know for you, you've been here getting up for 12 years, like what were kind of the key moments that were eye opening to you where you thought, I've seen the future we need to just invest heavily, heavily in this.
Yeah, I mean, the first moment was kind of a pure open source moment, right?
Like when it first started happening, and we were talking about transformers and everything,
like you see the ground swell on GitHub from the open source side.
So we started talking about that.
And then when we got access to the first, you know, GPT-3 model, I think, that ultimately became Copilot,
the thing that was so interesting was, we were building it to write docs.
Like, that was.
Oh, yeah.
Copilot was taking that model.
We were going, oh, yeah, what are we going to do?
Like, devs like to write code,
and they hate documentation.
So we're going to generate the docs.
And then what happened?
Wait, code is kind of just the same characters, actually.
Exactly.
And so then we flipped it,
and then we went
from docs to code.
Exactly.
And then I think,
you know,
while it seems very simple now,
like the idea of ghost text,
like more than just like
an auto-completing or whatever,
the first time we used it,
that was truly the moment
where we've said,
oh crap,
because no one had to do something differently.
And I feel like that's the big problem
with some of the AI tools,
you got to go interact with them
in a way that's not normal.
You got to go, okay, I want to go write an email,
write an email for me that that's not how our brains work.
We just start typing.
And when we were able to do that with the IDE,
that very, very quickly kind of shook all of us
because it meant, oh, it won't be the same anymore.
Now with agents and whatnot, like,
because we can verify the code,
we have an advantage in software
versus some other agents where it's harder to verify.
But that all started that first time you, like,
wrote something and then it just,
appeared. And I didn't have to learn anything. It just happened. Yeah. And now we just,
you know, we all take that for granted because it's de facto. Yeah. Do you have a philosophy of
how where inference happens will change over the next few years? Like, in a lot of worlds,
there's, it used to be there's a decision between like fire off an agent, wait 20 minutes,
wait an hour or something, or do it quicker, you know, a couple of seconds. But then there is the
world where a lot of the work that's
going on in software development is so high value
like why not just do all three
inference it locally immediately and then
also in the fast model and then also
kick off an agent for every single task
is that where we go or
is there sort of like some sort of shift
in where inference happens
over time? Yeah I think you know
we clearly are going to have
more and more inference test that should
happen locally. Yeah. That seems
pretty obvious at this point I think
and then I think when we're talking about you know
how much we're going to kick off into, you know, cloud agents and models, et cetera.
I think the thing that's really interesting is that we're pretty close to having that now.
Like we're, you know, we talked about it a lot today.
Other folks have that.
The real problem is, like, the age old, like, garbage and garbage out problem,
which is, like, we talk about abundance and you just fire off five tasks and we pick the winner.
Well, if your input was crappy, then probably all five of those are also kind of crappy.
They're just, like, variations of crap, I guess.
And so I really think it's about when you're having a discussion with a colleague or you're
on a Zoom call, or you're in an issue, or in Linear using a ticket, that is the moment when we can
actually get as much context as possible and ask questions then.
Like, why isn't Copilot in that moment going, I think I know what you're trying to build,
but like, are you sure about that?
Yep.
Why do I have to carry that over, even via a click, and then go, let's plan to build this?
Again, humans don't do that.
You just read it and you get started.
So I think it's, it is a bit about where inference happens, but I think it's how early in I have a problem that needs a solution.
I should be doing inference in the background immediately before I ever invoke something, you know, to go say, now it's time to work.
It started while we were in the shower thinking about the idea.
That's, I think, what we need to get the AI to do more of.
Yeah.
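The "fire off five tasks and pick the winner" pattern, and its garbage-in-garbage-out caveat, can be sketched in a few lines. `generate_patch` and `score_patch` here are hypothetical stand-ins for an agent invocation and a verifier (say, running the test suite), not a real API:

```python
# Fan-out / best-of-n selection: launch several independent attempts at the
# same task, verify each one, and keep the best. Software has the advantage
# that the result is checkable, unlike many other agent domains.

def generate_patch(task: str, seed: int) -> str:
    """Placeholder for one independent agent attempt at the task."""
    return f"patch-{seed}-for-{task}"

def score_patch(patch: str) -> float:
    """Placeholder for a verifier, e.g. fraction of tests passing."""
    return float(len(patch))

def best_of_n(task: str, n: int = 5) -> str:
    """Fan out n attempts and keep the highest-scoring one.

    Garbage in, garbage out: if `task` is a vague spec, all n candidates
    inherit that vagueness -- picking the max doesn't fix the input.
    """
    candidates = [generate_patch(task, seed) for seed in range(n)]
    return max(candidates, key=score_patch)
```

This is why the point above about capturing context early matters: selection only helps when the verifier and the task description are good.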
Thinking about, like, AI at GitHub is the funniest thing, because it's, like, seven different things.
And, like, walk me through your thinking on this. With a lot of companies, I see
that they put the AI below the fold. Yeah. So above the fold is kind of like, I put a search
box and I let you interact with my app, my SaaS, my CRUD app, via natural language. But then
below the fold, like, behind the scenes, I'm running inference
over the data to improve the user experience, but I'm instantiating it with HTML, basically.
But with GitHub, you also have inference that you're selling directly.
You have a whole bunch of stuff.
Are you seeing any exciting developments on like that behind the scenes, like using AI to improve GitHub as a product that isn't actually bubbling up to the user experience directly in the form of just like a text box?
Have you seen developments there?
Yeah.
So, I mean, a big part of what we've been figuring out is we have so much information about your interactions.
You know, pull requests, the ones you got closed.
Like I told the story, the first pull request I ever shared didn't make it.
And like, but now you know that about me.
And so I think the thing that we're figuring out is like we have like new models that allow us to really deeply understand your code.
Sure.
Beyond just like the tabbing and asking a question.
Because then, if we have that, then we have to understand: how does Kyle work in a pull request?
What mistakes does he make every single time?
I'm super fascinated by this.
That's the thing that.
And we were just talking about, with, like, Grokipedia today, where basically it seems like the xAI team went and ran a bunch of deep research reports for all the topics that you'd want to know, and then you just have that pre-cached output.
And I'm so fascinated by this idea of you have a ton of user data, you have a ton of inference.
It's going to be really inference expensive.
But what is some sort of, you know, cron job that you can run over your entire user base, all the data and then just have surfaced results or surfaced action items?
That seems like an interesting, like, underexplored territory to be looking at.
Yeah.
Yeah.
If you think about today, we're saying we're going to bring all these coding agents.
Sure, sure.
Why does each coding agent have its own memory of how I've interacted with it?
Yeah, yeah.
I've been a developer for 20-something years.
We can just go, here you go, take this with you.
Yeah.
You know, you can understand how I work.
So I'm going to get a result that matches what I'm looking for.
Okay, walk me through the game theory around enterprise pre-training for coding agents.
So if I'm Coke and he's Pepsi
And we both have written a bunch of corporate code
And we both have massive GitHub installations with you.
And if we both say, yeah, you get to train on us,
maybe we'll get better models.
Yeah.
But at the same time, we don't want to leak information.
So like what's the current thinking among like big enterprise customers around like
will they jump over and say, yeah, you know what it's worth it?
Or, from an actual, like, ML scientist's view, would they just be like, yeah, I don't need that code anyway?
What's the current thesis?
So we spent a long time trying this.
And the problem is that everyone goes, hey, our code's very different.
It's super unique.
We work a certain way, you don't.
But so many companies actually aren't that different.
Now, there are examples of where that's not true.
Particularly companies with a really long legacy of like COBOL, mainframe code, etc.
We've been kind of discussing with them like, what would it take to get another 100 million lines of COBOL code?
Sure.
Because then that does matter.
That actually moved the needle on the quality of the coding.
100%.
Because the problem is that the practices and principles don't change that much.
And then most of these companies are also trying to modernize.
So they don't want the code to look like their old code.
They want to use their unique IP and look like the thing they want it to look like in the future.
But if I have 100,000 line Django project and he has 100,000 line Django project,
you're not like, oh, if only we had that.
No, no, because we've like done it.
And, like, there's, you know, margin-of-error
improvements, but nothing major.
When we look at, like, the chain of commits, then you can get to some interesting information. Okay. How was
the enterprise built? Exactly.
Exactly. Why was that choice made? There's a little bit there.
You can obviously still instantiate that on the fly
with an enterprise partnership.
There's just always the question about like, is
there something beneficial that all the companies working
together? But that's very helpful.
Thank you so much for hopping on this. This is a lot of fun.
You have an incredible voice for podcasting.
Yeah, come back on. Any time.
Whenever you wrap up, we have Jay Parikh next.
No, no, we are moving on.
Okay, we are going to take you back to the news.
Back to time.
Thank you for tuning in.
We also have to do an ad read for Google AI Studio.
We are behind enemy lines here at a Microsoft event, but we are presented by Google.
Google AI Studio, it's the fastest way from prompt to production with Gemini.
Chat with models, vibe code, monitor usage.
We are obviously very happy to be supported by all of our sponsors
who make crazy events like this possible.
We are obviously able to come up here on short notice
due to our sponsors.
And what else?
Jay looks like he's getting miked up here.
We'll bring Jay Parikh in in a minute.
I brought this up yesterday.
John, you remember I brought it up.
I did not know that interstellar,
Christopher Nolan spent $100,000 to plant 500 acres of real corn
in Alberta.
Yes.
Then sold the corn for a profit after filming.
Yes.
And it remains the most profitable commodities trade in Hollywood history.
Yes.
You acted like this was...
This is old news.
I knew about this years ago.
This is what I think...
If this had happened in the AI era, it'd be like, well, I could just generate the scene with
AI, but I want to...
You're going to miss out on the commodities trade for sure.
Right.
No, I think Christopher Nolan just got lucky
here, honestly. It is
pretty hilarious. I also
wonder, you know, how
apocryphal is this story?
Because it doesn't account
for everything else that went, like,
if the corn was planted
by production assistants,
right? Like, you have to
burden that cost
into the actual
ROI. Yeah, it costs money to plant corn.
Who planted the corn? I'm just saying, like,
who planted the corn? Is this gross,
is this gross profit or net profit? That's what I want to know.
Christopher Nolan. Everyone's talking a big, big game about the Interstellar trade, and it might not have
been as good as you think. Also, who owns the, who owns the rights? Does the, does the value of the
corn accrue to everyone who has points on the back end? Like, does Matthew McConaughey make a couple
dollars off of that corn trade? I don't know. We'll have to get to the bottom. We have our next
guest. Welcome to the stream today. How are you doing? Thank you so much.
Welcome to the stream, Jay Parikh.
Introduce yourself for anyone who's been living under a database or data set.
And explain a little bit about what you're working on today.
I'm Jay Parikh, and I am the EVP of Core AI here at Microsoft.
Okay.
Important job.
Do you have anything to share that updates your job today based on the news with OpenAI?
Or is it just exactly the same?
It's exactly the same.
Yeah.
Really?
Okay.
So are you marching towards AGI?
Is it a race?
Is it a race?
If Microsoft becomes a platform for AGI and OpenAI can compete there and Microsoft's AI,
internal AI team can compete, is there a world where you're racing to AGI against them?
I think we have a process for figuring out what AGI is.
Yeah.
And that is something that both companies will continue to collaborate, work on research.
Yeah, yeah.
And in the meantime.
We have this mission, which is to focus on developers and how we unlock way more creativity
and to build a ton more things.
So I have this idea, which is, or this concept, where you think about all of the potential,
you think about the Hoover Dam, for example, right?
You guys are familiar with the Hoover Dam.
There's 9.3 trillion gallons of water behind that.
It's about one gigawatt, right?
And it's like massive, right, in terms of the amount of energy that it can generate.
Right. So think about all of these large language models, whether they be small ones, big ones, closed, open ones, multimodal, video, audio, text, etc. And you think about how we're going to unlock that intelligence. And in order to unlock that intelligence, we have to write a lot of software, right? And so if you think about the history of Microsoft, Satya commented on this earlier, you know, it's like Microsoft has been around 50 years, right? And you think about all the software that's been written by Microsoft and everybody
in the last 50 years, and I would posit that that's only 1%, or less than 1%, of the software that
will ever be written in history, and that what we're going to see in the next 10 years is just
this, like, prolific expansion that the other reservoir is going to create.
You might even be underestimating it, right?
It's going to be crazy.
Right.
So that is why we're all here today, right?
Which is, like, how do we really drive and use agents,
use this technology with the right guardrails, with the observability, being able to customize
this, personalize it, be able to tap in and bring in open source, being able to bring in your
enterprise-specific knowledge and controls and all of that, and to really just change that
trajectory of creation, of imagination and building, right? And I think actually the notion of
even what we think of as a software developer is going to change, right? Now, to make this way
more approachable by anybody who has an idea, being able to translate that into, you know,
showing something, building an app, getting out there, getting feedback, iterating on it way
faster than we've historically been able to.
In CodeGen, like, I want to get your read on how you're thinking about, like, today
developers are, you know, maybe they have some favorite tools, but they're willing to constantly
be experimenting, trying new things. You guys are in a great position to be able to support that
through partnerships and sit, you know, at a foundational layer with GitHub.
But how are you thinking about what's your view on switching costs today and how that might
evolve as, you know, in five years from now, do you believe developers will continue to
just, you know, want to always be trying the latest thing?
Or do you think they'll, like, eventually switching costs will get to the point where
it doesn't make sense to just constantly be looking over in other places and really makes
more sense to just focus on what you have?
Yeah, I think there's, like, an element
to, you know, developers, builders, around craft.
And I think you're always going to want to find, like, the best tool or the tools that suit
your sense of craft, right?
Whether you're, like, a woodworker, or you're a painter, you know, and there's, like, a big
element of craft.
So I think that there's going to be use cases where, hey, this is the way to do it.
These are the best tools to, say, modernize or upgrade some version of, like, old Java
code that you may have.
And there may be, just, like, this is the one or two tried-and-true ways of doing it, and proper
tools that you use.
Then there's going to be new use cases that we haven't even discovered or seen yet today.
You think about some of these rapid prototyping apps.
And I think, you know, it's great for the ecosystem that we're seeing different startups.
We have different, you know, we have GitHub Spark.
We have different ideas that are all kind of competing and trying different versions of this.
Those things do mature and there may be a smaller, narrow field.
but I think right now with this inflection that we're seeing in terms of building and velocity
of change, that there will always be lots and lots of things to go try out.
And I think that's good for developers, right?
And I think our platform is such that we care a lot about that ecosystem of startups,
other companies that can bring that choice, bring those tools into it.
But then we can help, like, hook those things together from an observability controls,
like just a sensibility perspective.
So if you want to scale this adoption inside of your enterprise, you need those rails, so to speak, right?
There's so much, there's so much, like, practical on the ground, just make the piece of software 5% better with AI today.
There's so much low-hanging fruit.
It's a very exciting time.
At the same time, we're in this, like, I feel like we're taking a breather from all the AI fast takeoff.
And it's exciting because they can go build so much enterprise software, so much value, so many new companies, so many things
built on top of Azure and Microsoft.
But at the same time, it feels like there is a new need for going back to the roots of
academia or these like academic labs or these scientific labs.
Do you have a pitch for, if there's someone out there who thinks that they're going to
write the next "Attention Is All You Need"?
They're going to write the next transformer paper.
And you know what?
In the short term, they're not actually going to help optimize, you know, knowledge retrieval
or code gen for this next couple of years.
But they believe they want to do it.
Do you have a pitch to them where they can come and work at Microsoft and do that level of research?
Yeah, absolutely.
So I think there's lots of different adventures you can pick inside of Microsoft, beyond being focused on, like, builders, developers, right?
Because one of the other fascinating and fun things about the core AI team is we have this super tight collaboration with Microsoft Research.
Yes.
Right.
So Microsoft Research has all of, you know, 30-plus years of history in science research
and programming language research,
compilers, security, and you name it.
Right.
So we actually have a lot of collaboration and joint problem solving,
right, where they can focus more on that open-ended research,
whether it be, hey, here's how I'm going to go optimize this model.
Here's how I'm going to do formal verification of the code that comes out.
Here's what I'm going to do in terms of how to secure this code better.
And so those things are out there.
They're like big, unsolved problems.
They're longer time horizons.
then as those innovations, those inventions happen in research, we can do the tech transfer.
We can do the combined product making together.
And then that accrues into GitHub or in VS code or into Foundry, whatever is the right avenue to bring that stuff to our customers to developers.
Where do you stand on the should you learn to code debate?
Oh, that's a good one.
I think, yes.
I think you should learn everything you can learn about these
systems, because the fundamentals, you know, ultimately, if you can understand, like, how this stuff
shows up and it's instructing a computer, a GPU, a mobile phone, then I think that, and it's
less about maybe even knowing kind of the code, but it's that systems thinking mindset, right?
It's the cultural aspect of it. It's like, hey, I'm creating, I'm prompting them, and guiding
this thing, but here's how the code is going to generate. I understand what these models can and can't
do how to guide them more with a higher efficacy, right? So absolutely, but I think of it more as like
less of a narrow question of, like, hey, should I learn to code or not? It's like, how do I understand
the system, the new system for how we're going to build software, build innovation? There's
understanding the hardware, understanding the software, understanding, for example, e-vals. Yeah.
Super, super, like, important concept. Yeah. Totally underrated, right? In terms of what's
going to happen. You have these offline evals, you have the benchmarks,
and sort of, like, how important that is to get higher-quality outputs of
these things. Because there's the offline evals that we can sit there and we can score and say we got
these evals. Then there's the online, or the lived experience, right? When you put this AI into this
product, you're like, wait, that doesn't quite work the way the evals said it would work, right?
And what matters is measuring it in real ways, right, in terms of...
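The offline-versus-online eval split he describes can be sketched in code. This is a minimal illustration, not any product's API; `run_model`, `GOLDEN_SET`, and `OnlineFeedback` are all hypothetical placeholders.

```python
# A minimal sketch of offline evals (scored against a fixed golden set before
# shipping) versus online feedback (the "lived experience" in the product).
# All names here are illustrative placeholders.

GOLDEN_SET = [
    {"prompt": "2 + 2 = ?", "expected": "4"},
    {"prompt": "Capital of France?", "expected": "Paris"},
]

def run_model(prompt: str) -> str:
    # Stand-in for a real model call.
    canned = {"2 + 2 = ?": "4", "Capital of France?": "Paris"}
    return canned.get(prompt, "")

def offline_eval(model) -> float:
    """Offline: score the model against a fixed golden set before shipping."""
    hits = sum(model(case["prompt"]) == case["expected"] for case in GOLDEN_SET)
    return hits / len(GOLDEN_SET)

class OnlineFeedback:
    """Online: track user satisfaction once the model is in the product."""

    def __init__(self) -> None:
        self.up = 0
        self.down = 0

    def record(self, thumbs_up: bool) -> None:
        # Called from the product when a user rates a response.
        if thumbs_up:
            self.up += 1
        else:
            self.down += 1

    def satisfaction(self) -> float:
        total = self.up + self.down
        return self.up / total if total else 0.0
```

The point of the split: a model can ace `offline_eval` and still score poorly on `OnlineFeedback`, which is the gap he's describing between benchmark numbers and lived experience.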
Sorry, I have one more on that.
We got your answer on should you learn to code.
I want to know, should you learn to deal?
Should you learn to do deals?
Is dealmaking underrated in 2025, in the age of AI?
Being a deals guy, understanding incentives, bringing people together around a table, ironing out a deal.
This is something that's...
It feels like it's growing.
We saw it with the Microsoft Open AI deal.
That was a very unique deal.
That was something that a lot of people said couldn't happen.
There was much more to it than a traditional deal.
There were plenty of reasons not to do it.
And it got done.
And it's probably one of the greatest deals in tech history.
And so is there value in learning how to do deals and becoming a deals guy?
I don't know that that's a 2025 question.
I'd say that is a life skill, to know how to collaborate and how to negotiate and how to
compromise and how to see.
You know, and sometimes like there isn't a deal to be made.
And other times there's a greater output or there's sort of a greater, like a global maxima that you can attain, right?
And that's where even if you look at the news today with our announcements of partnering with OpenAI and with Anthropic,
bringing that all into this platform together, I think is what we can go build and what we're going to discover and how we're going to accelerate our joint learning, I think is important, right?
And that can turn into a deal, but I think it comes out of this, like, hey, there's a greater good, there's a greater
opportunity, there's sort of a greater market, there's a bigger problem to go solve.
Then, yes, you figure out how it's going to work, nuts and bolts. I like it. How do you think about
Jevons paradox in the context of code? During the DeepSeek moment, Satya quickly came out, and I think
he posted the Wikipedia link to Jevons paradox, and it sort of steadied the market broadly.
There were people that just weren't familiar, but I think it was well-timed
from his side. But when it comes to, um, you know, our side, you know, we're a media company,
and we have a developer on our team, and I think that, like, five years ago we wouldn't have
had a developer. And as it's become basically cheaper and faster to create software, we now want to
make software, and we're a company that historically just wouldn't have. So I'm curious how you
think of that in the context, you know, going back to your earlier point, of, like, we might have a hundred,
a thousand, 100,000 times more code.
So what's your view there?
Yeah, I think that's the acceleration we want to see, right?
I think we talked about today, there's 180 million developers on GitHub today, right?
And a new developer is joining GitHub every second.
Right. Somebody said it as a per-second stat.
And I was like, that sounds like miles per hour, but, like, this is just such an abstract concept.
That's how countries talk.
They're like every second.
There's a baby born every second.
Yeah, but think about, you know, the types of personalities
and backgrounds, right?
You can be a product person.
You can be a designer.
You can be a, you can be a marketer, you can be a dealmaker.
Like, all of this stuff, you can join GitHub, you can start building, you can start
checking in code.
You could start mashing up different things.
So I actually think it's a super exciting time to see what the industry is doing, right?
And I think it's hard to predict the future, but I do actually really, really fundamentally
believe, like from a mission perspective in core AI, our job really is to unlock that creativity,
both in the AI power tools that you heard about today, plus the platform, making these things
secure. And really, like, anybody who's got an idea, wherever you are in whatever department
you are in an organization or an individual, you should be able to actualize that. Like, you know,
we have this saying in our team, which is like, you know, more demos, less memos, right? It's like all about
building and showing and iterating. Lots of stuff gets, like... we don't like it, you know. Yeah. But the fact
that I can in 15 minutes go through 15 iterations, versus in the past I might get a quarter of an
iteration done, that I think is going to... Yeah, no matter how good a memo is, like, seeing the
product tells you 10 times more. It sort of gets more creativity from the team, a small group of
people. Now, we have to make sure we also spend time dealing with the fact that
there are gaps in the technologies, right?
They don't, like, work perfectly, right?
So we've got to keep building those guardrails.
We've got to keep building that training.
The models will get better.
The tools have got to get better as well.
And that's where I think the GitHub community working together
with these different partners that we have, the platform,
we just have to keep learning faster and faster and faster.
That's what we're focused on.
So more demos, less memos.
Let's role-play a bit of a deal.
More deals, eventually.
That's the role-play.
We're trying to do a deal.
If I'm a Fortune 500 CEO and I'm coming to you and I'm saying I want to transform my business with AI, I don't want to make mistakes.
What pattern should I avoid?
What mistakes have you seen, broadly, trends that I want to stay away from, so that I can move forward with something that actually drives shareholder value and isn't just rah-rah, I'm doing AI now?
Yeah.
So the first thing I would say is, like, really understand what the top one, two, three outcomes are, like, specifically.
Like, hey, I want to transform my business.
Okay, well, what does that mean?
Yeah.
Okay.
Do you know what that means?
Are you saying, hey, I need to, I'm in an understand phase where I even just need to create some bright lines around what is the ideal or kind of my dreams around the outcomes of what transformation means.
So get into the specifics of what that actually means.
Is it some revenue thing?
Is it some product thing?
Is it some...
Is it just, you know, the difference between whether you're trying to cut costs
or you're trying to grow top line?
Right.
That's number one.
It's just understanding like one...
How are you in the...
I keep asking why.
Yep.
And to try to get more grounded in what those specific things are.
Number two is one of the things that I will always encourage them or talk to them about
is to then don't just talk about these things, right?
It's like, what are you doing to start learning?
Because if you're early in that journey of understanding AI,
there is only so much that you can sort of like read
and talk about and conduct meetings.
You do need to have like this internal adoption, right?
Where people are, and you're encouraging,
you're incentivizing, you're really driving that experimentation,
that curiosity across your
organization, right? So how do you understand what your base level of curiosity and risk
taking is? If you are a more risk-averse company in a slower moving company, then how do you
change that culture, right? So cultural transformation is, it comes up in 90% of my customer
conversations. We'll talk some tech stuff and then they're like, okay, Jay, how do we do this
people stuff? Right. And then there's headcount, headcount planning. They're like,
what's your plan? It may be all out of date. Right. And then the third thing that
I always will encourage folks to do and we'll have a conversation is, like, raise your level of
ambition. Like, wherever you think you are in terms of ambition and that outcome, I promise you,
it's not enough. Because of the technology, the models are growing way faster. They're getting
way smarter than we humanly understand. So whatever ambition you have for this fiscal year or this
half or this quarter, take it up a notch or two and then strive and push and lead to that,
to that point. Yeah. Are you, do you have a bright line internally with,
I feel like there's some organizations where core AI means not generative AI,
but I don't think you use that exact dividing line.
But should there be a dividing line between like machine learning recommendation systems,
how Netflix recommends me the next thing to watch, for example,
like that is an AI system.
What pops up on my news feed is AI, but it's not generative AI.
It's not what we think of when we think of generative image models.
Is it worthwhile in 2025 to have a bright line between those?
teams or those skill sets? Or is everything bleeding together? I think things are definitely
blurring together and there's stuff that's informing, you know, from one set of techniques to
the other and vice versa. I do think that those systems are very, very sophisticated. They're
very, I would say, powerful in terms of like user experience today. There are definitely places
where people are using Gen AI when they shouldn't be, and they should be using, you know,
machine learning techniques that just really work
and are faster, better, cheaper, right?
Or cheaper, yeah, you can imagine a bunch of things.
Those are the things that, you know,
we have to watch for in organizations
where, you know, Gen AI is the hammer
and everything looks like a nail
when we actually have these mature, optimized,
and like really exceptionally bright people
and technology to use those
and not forget about those.
But I do think that, at scale,
the stuff that we've learned in these more,
maybe, you know, more mature, more scale-out machine learning systems,
will feed back into how we make products using Gen AI.
Well, thank you so much for coming on the show.
This is always fun. Great to talk to you.
Take care.
Come back on.
We have Jared Palmer, the Vice President of Product, Core AI,
and the SVP of GitHub.
So you're both a vice president and a senior vice
president?
Yes.
Is this like
a two-for-one?
Title maxing.
I'm just
stacking titles.
Like assistant to the regional branch
manager.
Yes.
Right.
It's like
assistant to the CEO,
assistant CEO.
Technically, it is VP product
Core AI and
SVP of GitHub.
Okay.
That's incredible.
Welcome, welcome to the gig.
Thanks.
Yes.
One day.
30?
13.
Oh, 13, yeah.
Okay, what did you do before?
I was VP of AI at Vercel.
Vercel.
That was right.
We had a demo on the show yesterday.
I created a little thing called V0.
V0.
Yeah.
Resolution, yes.
So,
and been in the game for, I don't know,
a little bit doing that stuff.
So, yeah, it's been fun.
It's been great.
Yeah, so, I mean,
have you had time to actually develop,
like, a vision for what you're building here?
Is it too early to ask?
Or are you still adjusting kind of like,
let me assess the tools in the tool chest over here?
Yeah, it's day 13.
So definitely. But I've been a longtime GitHub user for a very long time.
Like, I think it's been over 10 years since I made my account.
And I imagine you've been thinking about like a broader developer experience
and what this means in the age of AI all through the last.
I mean, the last five years have been like a deafening ring of like AGI and takeoff
and timelines and stuff.
You must have engaged with that, of course.
Yes, yes.
And obviously at Vercel, we thought deeply about developer experience.
I think that's really the vision, is how do we bring it together, with Core AI and the
formation of it. We just had Jay on.
I think by combining Microsoft's assets across the stack, right, I've got VS Code, Visual
Studio, GitHub, and putting these actually all in one place, we make for the ultimate developer
experience. And that's what our goal has to be. And focusing just on that is, I think,
my first and foremost goal. Yeah, do you think that developer label just melts away eventually?
It feels like, you think there will be a dividing line in five years, ten years?
I don't know, five or ten, I guess. I mean, it just feels like there's a world.
I don't know about that, but...
It just feels that way.
You know, like, there was a time when to take a photo, you needed to be a professional
photographer because you needed to understand how to change film in a dark room.
And now everyone has a smartphone camera and everyone's a photographer.
That feels like it's coming.
I don't know.
I just see, like, I can open up an app on my phone, type a prompt, get code.
Sure, it's, like, kind of hard for me to...
I need to link my GitHub account and set up pages to, like, actually deploy it.
But, like, we're only a couple months
from that, I feel like, and then eventually it becomes, like, more prompt-driven, but then
there's still value, I don't know. How does that always play out? I think there's always
going to be a market for people who get stuff done. Yeah. Right. Yeah, just high-agency people.
So builders who, whether it shifts into more product focus, know how to build, just, like,
systems that are big and large. Yeah, yeah. That may be outside the training set. Sure, sure. I think it's
always going to be important. I also think, the way I think about it at least, is some of
the pipes, the tooling, probably aren't changing as fast as the AI is.
Yeah.
What I mean by that is like the way that packages and code is distributed, tested, built.
I don't think that's going to change as fast as maybe the models will.
Yeah.
That makes sense.
So with that like infrastructure in place, I think you're still going to have
human involvement for quite some time.
I think the things that people will build may be more ambitious.
I think that's really exciting.
And our job is to facilitate that and empower developers and think about, you know, what they need.
But, you know, in five years or so, I still think people are going to be building stuff with,
it's still going to be coding in some respects.
It just may look very different.
Where do you want to see model progress?
People talk about the models are going to get better.
Like, they're just going to get better.
Right.
And plan around that.
But like, when you're talking to labs, like, when you're at Versel or when you're now at Microsoft, like, where specifically are you even thinking and kind of pushing them to say, like, hey, like, it needs to be better here.
Yeah.
I mean, that's a great question.
At Vercel, we worked
deeply with the model labs. We obviously were very
focused with a product like V0 on a
specific subset of what
models can do.
Even in the coding realm, Versel was always focused
on front end, right? And specifically
NextJS, so not just one language, but
one specific tech stack.
And so we're always
engaged with, how can we make it better for
NextJS?
Switching gears for a second to GitHub,
obviously we're now multi-language.
We care about everything, but we do care about coding.
That's the primary focus point.
But coding involves so much more than just generating, like, autocomplete code, right?
It's more than auto-complete.
We need models to be great at research, we're great at reasoning.
And I think, and then also delivering mergeable code, right?
That's, I think, something different than just complete my comment.
So we've been focusing a lot there and focusing on quality and something that we look to continue to hill climb on as time goes on.
How much have you studied the open source company, like, scalable business model?
Like what Vercel did with NextJS?
Like, are you familiar?
Oh, yeah.
Can you give me like the crash course if I'm like, I'm a developer, I want to build a business,
I'm going to open source a package that does something and then I want to build a business about it?
Like, what are the pitfalls that I need to avoid?
How do I actually balance?
Like, what are the tradeoffs that I'm making to ask?
actually build a great, like, open source for-profit company because there does seem to
be some tension there, but it's held, that model's held for going back to Red Hat Linux all
the way to Vercel today. Sure. I have a controversial take. Please.
Um, there aren't as many pure open source companies where the core product itself is open source.
Sure. I think the more successful strategy is actually, if you really dig into Vercel,
is that Vercel is not open source. Yeah. But Next.js is open source. Yes. And Next.js
is a complementary satellite product
that drives attention
that is used by Vercel
to make a better product
to get this amazing feedback loop
of internal dog fooding.
But there is a community around the project
which then I think
some, you know,
Vercel has a material amount
of NextJS overall builds,
and developers use Vercel.
But it's not like Vercel
is an open source business.
Yep.
It just has NextJS
is one of its largest pieces
of the open source portfolio,
but it also
now has the AI SDK.
And with Vercel, the idea was to do something
what we used to call framework defined infrastructure.
So framework defined infrastructure.
And the idea was you can build this framework
and with no configuration you can deploy it
and you don't have to think about scaling it.
And so the analogy I would make is like,
imagine you were asked to, I don't know,
cook food for everybody here at Universe.
With Vercel, the idea was like,
oh, what if we gave you the pots and pans
and all you had to focus on was like,
cooking for your, you know, family of four.
And then Vercel would worry about, like, scaling it to everybody here.
Yeah.
And so I think, to your point about, like, open source, my suggestion for
the crash course is: a common pitfall that you should not run into is just assuming
that your free open source users are going to directly translate into paying customers.
Interesting.
I think that's actually really hard because you've set up expectations that you're giving
away a free service.
We have this free, this code, right?
Yeah.
And that all of a sudden they're going to convert and, hey, you X,
a month or you're going to have an enterprise business, which you haven't been really honing in on and
grinding on, and that's just going to happen overnight. I think that's, I think that's,
you need to start from the beginning with both and also set expectations with your, with your user
base that this is paid. This is open source. So if you can find a beautiful symbiosis between those two,
that's where I see like it really being successful. Is there some sort of like barbell strategy where you
should actually go really broad with your open source package? Everyone's using it,
but probably, like, you know, small developers, startups,
solo indie devs are using it. And then if you jump all the way to, like, oh, you notice some big corporations
are using it, so you go with an enterprise plan on day one. It's like, they have
no ground to stand on if they complain. So it's a lot easier than being like,
okay, actually, I'm nerfing the open source thing, and now all the indie devs need to pay me
25 bucks a month. You know, that's way different than going, like, hey, look, a Fortune 100 company was
using this, now we've got a million-dollar contract with them. Is that best practice? It's hard. Sometimes
big contracts early on can really
be devastating. Oh, sure. Because they can
remove your focus from growing that
momentum. And so you have to be careful.
Okay. Obviously, they're great.
But focusing on your core value proposition, your core
customers, and it's really great to get feedback by those
enterprises early on. And many, many projects
I've been involved with, whether it was TurboRepo, whether it was NextJS,
even V0. Like, we didn't launch Enterprise for
almost a year or so.
And I even, it was a big debate between me and Guillermo.
I think we were actually early.
We should have actually delayed it even further.
Really getting that groundswell is so important.
And you can always do enterprise later.
Okay.
But be careful.
When I say you can always do enterprise,
be careful.
Yes, somebody could come in, but just driving that up. Even, like, ChatGPT, by the way,
didn't have Enterprise for a long time.
People were like, I'd pay you for it.
Yeah.
And then one day, a lot of companies realize, like, yeah, we don't pay for Chat
GPT, but our employees all use it.
And then you go to the CISO and you're like, hey, by the way, we have a lot of your
data.
I know exactly.
It is a wild choice to...
How do you think, you know, a lot of...
There's so much excitement around the potential AI and science and law, these other categories,
and obviously adoption is happening, but how do you think adoption will kind of...
How would you imagine adoption will look in those categories?
Because I think AI adoption in software engineering is very natural,
because the people that are building and doing the research
are adopting the product.
And it's a super tight feedback loop.
And you're not really going to see in the same way
in some of these other categories.
So yes.
And I don't know, before I got into software development,
I was actually a banker.
And so.
Let's go.
Yeah, Goldman Sachs.
Thank you very much.
Let's go.
So I did my banker stint.
I did my banker thing.
Right. So I think, I got to tell you, I'll be honest, like, I think, you know, Anthropic, I was talking to Mikey, they announced Claude for Excel. I think that's going to do wonders. I think if you talk to any Goldman Sachs analyst, they'll be very excited to have that deeply integrated. And if you're building models all day. There's whole businesses that are built just on, like, templates. Totally. And it's, like, obviously. What's interesting, though, is if you look at Claude for Excel, I think their core foundation is still the coding agent. And that there's something about
the coding runtime that can be then
augmented to other verticals.
I think that's what you're going to see in next year or so
is these model labs build out these harnesses
and go vertical by vertical,
whether it's banking, health care,
or consulting,
right, they're going to go through that
through knowledge work and they're going to
iterate on that, just like in the hill climb.
Yeah. Yeah, what do you think on
how do you
think about switching costs
now and over time
if you're a products company
and you're leveraging intelligence from a lab,
like do you think the labs will over time
make it harder and harder
to kind of like rip out one model provider
and use another?
Because right now it feels like
there's this land grab happening in enterprise
and this race between Anthropic and Gemini and Open AI.
But like how do you think that evolves?
I think most of the product teams that I talk to,
like, from the companies that are on here all the time,
most of their teams are working with multiple models, and they're constantly evaluating,
with whatever sort of product analytics or test harnesses. They're looking for any edge they can get. They're
so competitive, yeah, at least in, like, the startup space. But switching models
is not easy. That takes time, and especially when there's big re-architectures, like when reasoning
came out, for example. Um, that may require a rewrite of all the prompts and all the edge cases that
you've been massaging, and these models have different characteristics. Um, but I
think most of the high-performance teams are dialing in harnesses for each and every lab, and
they're just so hungry that they'll switch. So are switching costs high, and
the answer is, like, you should use all of them in the beginning? Correct. And there may even be
certain subsystems or certain tool calls where you're going to switch models and mix them
together, and that just is, you know, part of this. If you look at, like, what
Windsurf did with the release of SWE-grep, that specialized model for search, yeah, um, you know,
they're combining, they're mixing and matching.
I think that's the next, you know,
we'll see that throughout the next year.
I don't think it's like,
oh, we're just going to use anthropic models
or we're just going to use opening eye models
or we're just going to use, you know,
whatever model is.
I think you'll see a lot of accommodation.
Thank you so much for coming on the show.
Big fan.
Have a great one.
Come back anytime.
I will.
Before we bring in Michael Grinich from WorkOS,
we got some breaking news.
Did you see the blimp?
Did you see the blimp?
Do you know whose blimp that is?
It's Sergey Brin's blimp.
No way.
Let's go.
He, I mean, this is Mag 7 on Mag 7.
I'm actually going to take, I'm going to take a little bit of credit for that.
I told the Gemini team.
Oh, you told them.
I'll share.
I've wanted a blimp for, like, 20 years.
Okay.
Hey, good to see you.
Great to finally meet you.
You guys see those? Your friends.
David.
David.
Oh, yeah.
I brought you guys both...
Oh, please.
One of our highly coveted, super rare...
Enterprise... Enterprise Ready hats.
Let's go.
All right.
So, Enterprise Ready. What does it mean?
Well, pretty much every software company, eventually when they get product market fit and go up market,
there's a ton of stuff they have to add to their app to go sell the enterprise.
Yes.
So the guys at Microsoft and GitHub, they did this years ago.
But if you're a new company, you have to add all the stuff to your product.
Sure.
And it's things like single sign-on, user provisioning, logs, security.
WorkOS just does all that for you as a developer.
Got it.
Yeah.
Okay.
So you can just focus on the core products.
Yeah.
Yeah.
And the same way you use Stripe for payments or Twilio for messaging.
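As a rough sketch of the kind of integration he's describing (hypothetical code, not the real WorkOS SDK; the domain, IdP URL, and function names are all made up for illustration): an "enterprise-ready" app routes users on a corporate domain to their identity provider for single sign-on instead of showing a password form.

```python
from urllib.parse import urlencode

# Hypothetical registry mapping corporate email domains to their SSO
# identity provider's authorization endpoint (assumed values).
SSO_DOMAINS = {"acme.com": "https://sso.example.com/authorize"}

def login_redirect(email: str, redirect_uri: str) -> str:
    """Return an SSO authorization URL if the user's email domain has an
    IdP configured; otherwise signal that ordinary password login applies."""
    domain = email.split("@")[-1].lower()
    idp = SSO_DOMAINS.get(domain)
    if idp is None:
        return "password-login"
    params = urlencode({"domain": domain, "redirect_uri": redirect_uri})
    return f"{idp}?{params}"
```

The point of a service like the one described is that this routing, plus provisioning and audit logging, is handled for you rather than built per app.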
WorkOS is really that for enterprise readiness.
It feels like it's not something that you could just, like, go through YC and sell to another startup. So who was the first client? How did you get into this? What were you doing before?
I started working on this a long time ago, almost seven years ago.
So, WorkOS.
An overnight success.
Yeah, it's kind of painful for me.
Yeah.
It's like, we're also, like, a pre-AI company. That's what I called us the other day, which also kind of hurts a little bit.
I know.
Covering my ears.
Dinosaur.
We were AI-native before AI existed.
Yeah.
For real.
I saw this problem at another company I started.
Okay.
We had built an email product.
Got a bunch of usage, got a bunch of adoption.
We went up to the enterprise.
Tried to sell to these guys.
And they said, no way we can let this touch our data.
Who's the no? The CTO, the CISO, like...
Usually engineering leaders: a co-founder, the VP of Eng.
Whoever is kind of responsible for the technology.
Is it because they want those features or they need them for legal reasons?
They got to have them.
They usually have deals that are blocked because they don't have these features.
Okay, got it.
So you'll start growing up-market, and there'll be some customer that says,
we'd love to use your product, we'd love to roll it out at Coinbase or Microsoft or something,
but we can't do it unless we have these features.
Has demand just been insane because people are building products so quickly
and then they start, you know, employees, employees at companies start adopting them
kind of personally and then they realize.
Compounding, yeah, sure.
So we had a lot of growth, you know, years ago through kind of the early cloud era SaaS.
Like, Vercel is one of our customers, Carta, Plaid, folks like that.
In the last year and a half or two years, what we found is it's actually perfect for all these AI companies.
Yep.
So today we're powering enterprise auth for OpenAI, Anthropic, Perplexity, Cursor, Sierra, you know, all these guys that are growing fast.
So.
I've got a lot of time for that.
That button, yeah.
Okay.
Walk me through this thesis.
In the YC era, the YC trade was basically: you could be a kid in college, graduate, and move to Mountain View or Silicon Valley. And for $100,000 and some cloud credits from Azure or whoever, you could set up a website and go build the first era of consumer. We got our Airbnbs from there. We got a variety of consumer companies. But in the AI era, it's becoming easier to go enterprise on day one. Is that real?
Is that a reasonable thesis? Do you see any data to that effect? Absolutely. So I think that
previous era, the privilege that those companies had is they could take a while to get to the
enterprise. So if you look at Dropbox, Figma, it was years. It was like three, four, five, six,
seven years before they actually went after Enterprise. What we're seeing today is AI businesses
get pulled up market way faster. Sure. And it's way more competitive. So companies like
Cursor or Perplexity, pretty much in year one, year one or two, they get pulled up-market to the
enterprise. And that's why they need to say yes, because of that competitive dynamic, but also
the tools that they're building, like, the enterprise is just so ready for them. There's another piece of it as well. It's not just that they grow faster up-market, but you think about these AI products: they are touching sensitive data. You have one of
these things, it's only valuable if you get access to all of your stuff. You give it access to
do things on your behalf. So suddenly it becomes this huge security concern. Maybe an old product
like Figma, you could say, to the design team, just don't put any sensitive data in it. But you get
one of these agents or something connected. You need it to access everything. And so they're
scrutinized to a higher degree, plus they grow faster, plus it's earlier in their life cycle. It's a perfect
storm where we come in and help them grow up. Talk to me about domestic versus international.
I imagine a lot of your clients are already international. So does that mean you're
international? Or are you focused on making American companies enterprise ready immediately? And
then maybe you'll go after the European market later. How do you think about that?
So many of our customers are actually right here. Yeah. Like probably at the same universe,
literally right here. We were joking. We could cut up our sales territories by north and south of
Market Street in SF because we have so many businesses that are here that are growing quickly.
What we find is their customers are international.
Of course.
Right.
So they're going and selling to larger organizations elsewhere in the world.
Okay.
So the parts of our product that face our customers' customers, those types of things we, you know, we localized.
We just did a big project to translate everything using AI.
So we launched with 100 languages.
Yep.
So we do that kind of stuff.
But we find that the best product, the best companies that are using WorkOS are these high growth AI businesses that are taking off.
And of course, they're mostly here.
Yeah.
You know, they're mostly here.
Yeah.
What's your philosophy around operating the business?
I don't necessarily recall the last time you guys raised money.
Like, I'm sure people are throwing money at you all the time when they see the logos.
Yeah, we raised our series B almost exactly four years ago, actually.
That was the last financing.
Yeah, that was the last financing, which is like an eon in the SaaS era.
Yeah.
Since then, we've just been heads down.
You've just been building it slowly since then, you know, bit by bit, brick by brick.
You know, slowly, actually, I do have something to announce.
It's pretty exciting.
You know, you had Satya here earlier talking about how they do a billion in revenue every day.
We're very proud to announce that we just crossed $30 million in annualized revenue.
Amazing.
So that's our, that's our big number.
An overnight success.
That's where we're sitting today.
That's great.
We're a bit smaller than Microsoft, but we're coming for you.
No.
We've been compounding since then.
The AI stuff has really been this huge tailwind for us.
And it's so fun to build infrastructure, where we get to see into all these companies. Like, my customers are the fastest-growing, most exciting AI businesses out there.
Do you invest? Do you angel invest?
I do some, yeah.
Yeah, as you're seeing these companies at this, like, crazy inflection point.
I've had VCs start asking me for the data. They want to invest just to get the growth data out of it.
It's a little...
Yeah, it's a little different.
Yeah, we don't share that kind of stuff.
But just through, you know, building for developers
and running events and, I mean, I love GitHub Universe.
This is like Coachella for, like, you know, developer stuff.
meet founders and meet other people building stuff. And so, who are some of the entrepreneurs that you look up to? Or what's a story from a founder or a business person that you keep coming back to, like, oh, that one? Go for it.
Is it just David Senra? Just his story, just how he...
No, I do. I think, by being friends with David, I'm friends with history's greatest entrepreneurs.
He's the most efficient role model, right? You have a specific type of business, but whatever business you're building, you can learn from...
Yeah.
I mean, yeah, there's a lot of other things you could have said,
but that one's pretty good.
I was trying to think of a name.
David's great.
I'm just laughing because the stakes are like, you know, Henry Ford inventing the, you know, the...
What's it called?
The assembly line.
Yeah, yeah, yeah.
The automated assembly line, or, like, you know, well, yeah.
It's the most impactful versus, like, the most valuable for you.
Yeah, yeah, I guess who do you come back to?
I always turn back to marketing. We're very much in a marketing business.
And so I always come back to a quote that I attribute to David Senra,
but is actually from David Ogilvy,
that you are not advertising to a standing army.
You are advertising to a moving parade.
Yeah.
And so the question is, like, why are people...
Proving your point: you heard that for the first time from David?
No, I read Ogilvy on Advertising. I'm familiar with the book.
I read it before David read it.
But he did stick it in my brain, and he advertised it to my moving parade.
And I've always liked that idea that even if you've shown someone an advertisement once, or you've sent them a message or given them a pitch once, there is a moving parade. There are so many distractions, there's attention all over the place, that you need to keep showing up. It's why they don't just do one GitHub Universe and say, yeah, we did it, we're good. They do it every single year, and the message is different every year, too. They're evolving.
Yeah, man, there are so many, so many to choose from. I feel like a little bit of an old soul, in that when I heard Satya talking about, like, the early days of Microsoft and Bill... I love building platforms. Like, I built all this other stuff earlier in my career, and as soon as I started building stuff for developers, other people making stuff, I was like, ah, that's really sick. You see other people make stuff with the thing you made, and then build their own businesses on top of that. And to me, Microsoft is, like, the first big software platform company. You know, Windows enabled so many developers to build and ship these experiences that changed the world. In previous cycles, I think a lot of people forget it, but it was this huge enabler, this huge, like, democratization of access to technology.
Is there a specific sales funnel that
flows through GitHub with your product? We do just a ton of stuff with developers. I think,
You know, I mean, we do everything from, you know, sponsoring podcasts and newsletters and meetups and doing developer events.
We did our own conference last week.
We do a lot of open source stuff.
We run a really popular open-source design system project called Radix.
Oh, that's...
Yeah.
So that's on GitHub.
But I think my GitHub account is probably one of the earliest, like, personal identities online.
I had an account.
You know, I've had it since before I was in college.
So I'm thrilled to be here.
Yeah.
Yeah, that's very cool.
Are you speaking at all today, or...?
I am.
Giving a talk tomorrow, all about AI and identity for agents. So this is a new thing. WorkOS is
kind of like an identity security company. You help people with sign-in and auth. And there's
this big question right now of how we're going to secure agents. Yeah. You know, if we have
seven billion people on the planet, we'll probably have trillions of agents running around and doing
stuff for us, connecting to different systems. And security is even more important. You can think of
an agent kind of like a crazy, hyperactive intern. It's going to have access to all of your systems.
So there's this question of how do you authenticate them,
how do you build security around it, permissions, approval.
And getting prompt injected.
Right, right. Yeah.
There's this old quote, you know: to err is human, but to screw up 10,000 times per second, you need a computer.
Agents are kind of like that, right?
They make it really easy to do stuff really quickly, but also make mistakes.
So my talk is all about that, and some ideas that we have around security for agents.
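One idea in this space, sketched in hypothetical code (the action names and policy are made up, not anything WorkOS has announced): gate destructive actions behind explicit human approval, so an agent that can act 10,000 times per second can't also make irreversible mistakes that fast.

```python
# Hypothetical human-approval gate for agent tool calls.
# Read-only actions run immediately; destructive ones are parked for review.

RISKY_ACTIONS = {"delete", "drop_table", "transfer_funds"}  # assumed policy list

def run_tool_call(action: str, args: dict, approved: bool = False) -> dict:
    """Execute a tool call, or queue it pending human approval if risky."""
    if action in RISKY_ACTIONS and not approved:
        return {"status": "pending_approval", "action": action, "args": args}
    return {"status": "executed", "action": action, "args": args}
```

In this sketch, the same call replayed with `approved=True` (after a human signs off) goes through; everything else executes at full agent speed.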
Yeah. Do you think agent security bifurcates along consumer and business-to-business access? Do you think there's a discrete enterprise versus consumer layer? We talked to the CEO of 1Password, for example, and it does feel like, for the password to my Yelp account, I might not be using, like, WorkOS for that in the future.
There's definitely going to be some blurring, you know. Talking about it with, like, AI companies, the way I'm thinking about it is, like, a small business might have 200 agents
that are out in the world, maybe some are selling.
maybe some are doing customer support. And it's like, when should a CX agent be able to share account data? When can a sales agent provide pricing? I mean, there are so many different things. And, you know, CEOs, they're working with teams, right? They have processes in place for individual people. And so I think it's a really important problem area. It's completely changing the way people think about security. I think if you go to any of these, like, security-focused conferences, it's the topic on everyone's mind.
Black Hat. Yeah, exactly.
Within, exactly, yeah, because within companies, previously you've had these kind of silos of
information or control. You have permissioning systems that are pretty static. With agents,
you know, you might have 200 today and zero tomorrow. You might spin them up and down depending
on a task, depending on a project. And so that permissioning model is like completely changing.
And it's really exciting. You know, we're right in the middle of it. Like us working with all these
different AI businesses, they themselves are building their own agentic workflows,
whether it's, you know, stuff like Codex or Claude Code, or what Cursor's building with their background agents.
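A minimal sketch of the permissioning shift he describes, in hypothetical code (the class, scopes, and agent IDs are invented for illustration): instead of the static roles used for human employees, agents spun up for a task get short-lived grants scoped to that task, which simply expire when the task ends.

```python
import time

class AgentGrant:
    """Short-lived, task-scoped credential for an agent (illustrative only)."""

    def __init__(self, agent_id: str, scopes: list[str], ttl_seconds: float):
        self.agent_id = agent_id
        self.scopes = set(scopes)
        self.expires_at = time.time() + ttl_seconds

    def allows(self, scope: str) -> bool:
        # Valid only while the task is running, and only for the
        # actions the grant was explicitly issued for.
        return time.time() < self.expires_at and scope in self.scopes

# A customer-support agent spun up for a 15-minute task:
grant = AgentGrant("cx-agent-7", ["tickets:read", "tickets:reply"], ttl_seconds=900)
```

The design choice this illustrates: "200 agents today, zero tomorrow" stops being a provisioning problem when authorization is ephemeral by default rather than revoked by hand.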
Has there been an AI horror story in security yet?
Oh, yeah.
That people could talk about?
I mean, there are ones, like... yeah, yeah.
It feels notable that there hasn't been, like, a specific day on the internet where everyone was like,
Oh, we went down because of AI.
Yeah.
That hasn't happened yet.
Not, not yet.
I mean, probably just...
Maybe the latest AWS outage? Like, I don't think...
No one pinned that on AI.
No one pinned that on generative AI or stochastic systems. Yeah. There was one a few
months ago where Jason Lemkin, you know, from SaaStr, he was vibe-coding an app on Replit. Oh, I saw.
Remember that? And he was like, it was pure prompting, right? It's not writing any code. He's like just
talking to the thing. And he asked the agent to do something and it deleted the full production
database. Yes. And then he was like, what the hell? And then the agent lied about it. It was like, no, I didn't do that, you know.
Yeah, it was covering it up. At some point I think the agent did just say, yeah, my bad.
But I do think they were able to roll it back. And reading the rest of the thread, just to close it out: they've built a lot of stuff since then as guardrails. But that just shows you, early on, people are pushing these systems to their limit. And they can have catastrophic effects if you don't put up those guardrails.
Yeah. So that's what the talk's about. We're doing a lot of innovation and research here,
but it's, it's going to take a while to get right. Yeah. Yeah. Well, awesome. Amazing to finally have
you on the show.
Thanks so much.
Been a long-time fan.
Thank you so much.
Great to have you in person.
Come back on again.
I will.
Take care.
See ya.
We'll talk to you soon.
There are a lot of posts here in the timeline that I want to share that we can't.
We can't.
We can't.
We can't post them today.
We're being held back, so I want to just come back to them tomorrow.
Okay.
We're held back.
Tomorrow, you have our word.
We're doing lots of timeline.
If you're new here, leave us a subscription.
Follow us.
It's free.
A free subscription.
Everywhere.
Yeah, sign up for our newsletter.
TBPN.com.
We bring you the news in text form.
Yeah.
Well, it has been a fantastic day here in San Francisco.
Thank you to everyone.
It's stunning out.
It is.
I'm going to go try to find the owner of the blimp.
We've got to find the owner of it.
It's not a Gemini blimp.
I'm telling you.
It's Sergei's Blimp.
It's not a Gemini project.
It's not a Google project.
I thought you said it was branded.
No, no. Unbranded.
He's individually funding a blimp company.
It's so funny because I sat down with Logan and the Gemini team
where we were just talking about marketing ideas.
And I was like, the obvious thing that you should do is get a blimp.
Wrap it with Gemini branding and just fly it around San Francisco.
Founder mode.
And we were talking.
It's like, okay, finding a blimp. I was doing some research. There are, like, six active blimps. I was like, man, this is going to be hard, to find a blimp I can get to us that can be wrapped. And of course, Google. Incredible foresight from Sergey to create a beautiful billboard in the sky.
That's just waiting for branding.
Uh, well, a super fun day.
It's a surreal moment.
A lot of fun.
A great time talking to one of the greatest living CEOs.
And, um,
thank you to everyone on the Microsoft team.
Thank you to everyone on the GitHub team
who helped organize this.
Thank you.
to our sponsors.
Profound, Linear, numeralhq.com, sales tax on autopilot.
Then we've got Fin.
Fin.ai.
The number one AI agent for customer service.
Attio, of course, customer relationship magic.
Eight Sleep.
Didn't sleep on my Eight Sleep last night.
Can't wait to get back to it tonight.
Of course, we mentioned public.com.
Also, AdQuick.com.
GetBezel.com.
Your Bezel concierge is available now.
So, who was it going to be? Jared Palmer.
Jared Palmer had a nice GMT-Master. It was looking good.
And of course, Wander. Find your happy place. Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24/7 concierge service. It's a vacation home, but better. Thank you, folks. We will be back in Hollywood tomorrow. The podcasting will continue, 11 a.m. sharp Pacific. Cheers. See you then. Goodbye. Bye.
