TBPN Live - Meta Taps Nat Friedman & Daniel Gross for AI Push, Starship Rocket Explodes | Mike Knoop, David Cahn, Walden Yan, Eoghan McCabe, Jeff Weinstein, Garrett Lord, Tanay Tandon
Episode Date: June 19, 2025. (04:10) - Meta Eyes Nat Friedman & Daniel Gross for AI Push (37:47) - Lakers Sold to Mark Walter (43:39) - Surge Hits $1B+ in Capital Without Outside Revenue (56:20) - Mike Knoop, co-founder of Zapier and CEO of Ndea, discusses the evolving landscape of AI reasoning models, highlighting the trade-offs between accuracy and efficiency among leading systems like OpenAI's o3 and DeepSeek's R1. He emphasizes the importance of considering both cost and performance when evaluating AI models, noting that no single model currently dominates across all metrics. Knoop also underscores the need for innovative approaches, such as program synthesis, to advance toward artificial general intelligence. (01:20:23) - David Cahn, a Partner at Sequoia Capital, previously served as General Partner and COO of Venture at Coatue Management, where he led investments in companies like Hugging Face and Runway. In the transcript, he discusses the substantial investments in AI talent by major tech companies, emphasizing the human dynamics and strategic decisions driving these developments. He also explores the competitive landscape, the significance of data centers, and the economic implications of AI advancements. (01:37:55) - Walden Yan, Chief Product Officer and co-founder, discusses the integration of AI models in product development, emphasizing the importance of balancing model intelligence with user experience. He highlights the challenges of managing trade-offs between model responsiveness and accuracy, advocating for systems that abstract model complexity to enhance usability. Yan also addresses the evolving role of AI in software engineering, noting the potential for AI to autonomously perform tasks like coding and debugging, thereby transforming traditional workflows.
(01:57:41) - Eoghan McCabe, co-founder and CEO of Intercom, discusses revitalizing his 15-year-old SaaS business by returning to core fundamentals like customer-centric pricing and simplifying the sales process, leading to a tenfold increase in growth over eight quarters. He emphasizes the importance of maintaining high energy and passion from leadership to prevent stagnation, noting that AI innovations have reinvigorated both him and the company. McCabe also highlights the challenges of scaling a company, stressing the need for agility and the willingness to embrace chaos by empowering young, dynamic individuals in leadership roles. (02:16:00) - Jeff Weinstein, a product lead at Stripe, discusses the company's initiatives to integrate AI into commerce, emphasizing the development of agentic commerce where AI agents facilitate seamless transactions. He highlights collaborations with companies like Perplexity and IP Camp to enhance e-commerce experiences and mentions the introduction of Stripe's order intents API, enabling agents to execute purchases on behalf of users. Weinstein also addresses the evolving role of payment methods, including stablecoins, in agentic commerce and underscores the importance of permissioned, secure transactions in this new landscape. (02:32:00) - Garrett Lord, co-founder and CEO of Handshake, discusses the company's evolution from addressing his personal challenges in securing internships at Michigan Tech to becoming the leading early-career network in the U.S., connecting 18 million students and young professionals with a million employers. He highlights Handshake's role in supplying experts to frontier AI labs, emphasizing the demand for specialized data from PhDs and master's students to enhance AI models. Lord also outlines plans to leverage AI in automating recruiting processes and envisions a future where participants can showcase their skills through contributions to AI development, thereby enhancing their professional profiles. 
(02:44:44) - Tanay Tandon, CEO of Commure, discusses the merger of Athelas and Commure, highlighting their combined efforts to transform healthcare through AI-powered solutions. He emphasizes the importance of automating administrative tasks to improve efficiency and patient care, and outlines the company's strategy to expand its product offerings and customer base. Tandon also shares his vision for the future of healthcare, aiming to eliminate inefficiencies and enhance the overall patient experience.
TBPN.com is made possible by:
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com
Numeral - https://www.numeralhq.com
Polymarket - https://polymarket.com
Attio - https://attio.com
Follow TBPN:
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://youtube.com/@technologybrotherspod?si=lpk53xTE9WBEcIjV
Transcript
You're watching TBPN. Today is Thursday, June 19th, 2025. We are live from the TBPN
ultra dome, the temple of technology, the fortress of finance, the capital of capital.
We have a great show for you today, folks. There's some breaking news that's dropping
right now. I think we got to go to the printer cam, really, because we have an update from friend of the show.
Let's see if this works.
Do I need this?
No, we need a moment of silence gong.
Because it's about the Elon Musk news out of SpaceX.
But we got an update from friend of the show,
Ashlee Vance coming in here hot.
He says, I happened to be at Neuralink last night
when Starship went boom, and so was Elon Musk, until well past midnight
Pacific time.
He was in a three-hour-plus-long meeting when the explosion happened. Meeting ended,
I assume that's when he learned about it, and then he went back to work. Wow, an absolute dog.
The video was absolutely insane. It was.
We will cover it in a little bit.
Yep.
You had shared a transcript from it with me earlier
this morning.
I had seen the video of it going boom.
You shared this transcript where one of the engineers
is saying, hey, really quick, Sawyer.
We just observed a couple of vents
coming from the common dome
in between the LOX tank and the methane tank.
And from this angle, it almost looks like the methane tank
is gone.
Then he actually says, question mark, is that normal?
Jack, is that normal?
I'm seeing some venting.
Is that unusual?
Is that normal?
And the guy goes, yeah, it's probably normal.
You have the video? I just sent the video. Let's play it. I have no idea if this is actually related.
We'll have to get someone on the show to dig into, like, exactly what happened. I'm sure there'll be a post-mortem on the explosion.
He did say famous last words, and then shortly afterwards it exploded.
It is a crazy, crazy video.
And not really quick, so are we?
Let's see if we can pull this up.
In the meantime, let's tell you about ramp.com.
Time is money, save both, easy to use corporate cards,
bill payments, accounting, and a whole lot more
all in one place.
Go to ramp.com to get started, of course.
The other two major news stories we want to cover today,
$10 billion, the price to buy the Los Angeles Lakers.
$15 billion, the price to buy Meta an AI leadership team. Fantastic post by Alex Konrad.
We just observed a couple of
vents coming from the common dome in between the LOX tank and the methane tank.
And from this angle it almost looks like the methane tank is gone? Question mark?
Jack is that normal?
And then I actually edited out.
I'm seeing some venting coming from in between the methane tank and the LOX tank. Is that usual? Is that normal?
Yeah, it's probably normal.
Keyword: probably. Famous last words. Yeah, weasel words.
These guys are just hanging out on a live stream, watching, like, a
static fire test. This is not a launch. They're not trying to
launch the rocket. They're just putting it on the test
stand, firing it up to make sure that everything works,
and it just completely exploded.
We'll go deeper into that and some of the reaction
and the news in a little bit.
But in the meantime, let's talk about the other piece
of breaking news that came out of the printer
just after we got off the stream yesterday.
Oh, right, right.
This is news from The Information.
Meta is in talks to hire former GitHub CEO Nat Friedman and Daniel Gross to join its AI efforts and partially buy out their venture fund.
Yes. So there's a ton of details here. So let's read through the information article and then we'll go to some of the reactions.
So Meta Platforms is in advanced talks. Not just talks, advanced talks.
Has that been defined, quantified?
What does that mean?
Are we past coffee meeting?
Is this a 30 minute, is this a two hour conversation?
How long are the talks until they become advanced?
Yeah, terms, you know, exact numbers
are being thrown around, it's very possible.
Yep, so they're thinking about bringing in
Nat Friedman and Daniel Gross to help lead AI efforts.
As part of those talks, Meta is in discussions
about partially buying out Friedman
and Gross's venture capital firm, NFDG,
which holds stakes in top AI startups
and is worth billions of dollars on paper.
If the talks are successful, Gross would leave
Safe Superintelligence, which he co-founded
with former OpenAI chief scientist Ilya Sutskever last year.
At Meta, Gross is expected to work mostly on AI products
while Friedman's remit is expected to be broader.
Both Gross and Friedman are expected to work closely
with Meta CEO Mark Zuckerberg and Scale AI CEO
Alexandr Wang, whose hiring by Meta was finalized
last week in a $14.3 billion deal.
Big numbers. Big numbers.
Big numbers being thrown around.
I think this is gong worthy,
even though we're just in advanced talks.
We gotta hit the gong.
We try not to hit the gong for advanced talks,
but scoops are scoops.
It's such a big number.
We've hit, strong hit.
As part of the talks.
Yeah, I guess immediately a couple of things
I was thinking about when I saw the headline.
One, I don't know if this is common knowledge,
but my understanding was that some of the money from NFDG
was Zuck's, right?
So they were already investing on behalf of Zuck,
to some degree.
So this shouldn't be a huge surprise.
And then I think the bigger thing
is what does this say about SSI, right?
If DG is willing to leave SSI, despite it
being such a young company and already valued,
I imagine DG's stake is in the billions of dollars there.
So to leave that and go to Meta.
Says something, I don't know exactly what it says.
I think it could potentially say a few things.
One is that maybe artificial intelligence
is more of a sustaining innovation
than a disruptive innovation.
And so that just by training a fantastic model,
you're not immediately going to be able to overcome
the network effect at Meta.
And so Meta is maybe potentially a better place to go,
really reap the rewards of artificial intelligence.
That's kind of a signal because no one at Google
was really thinking about joining Yahoo, right?
There wasn't a lot of flow that direction.
It was like, we're onto something, we are going to disrupt.
Same thing with Amazon.
I'm sure Bezos wasn't losing people to Barnes & Noble.
Right?
I think this is the analogy.
I'm sure Barnes & Noble threw out a couple of max contracts
and got a couple mercenaries.
But yeah, I mean, if you're-
If this was truly disruptive,
you would think that you would say,
well, I definitely don't want to be with the incumbent.
I don't want to be in the legacy player
because there's nothing that they can do
to capitalize on the new wave of technology.
And so there's been this question about AI,
clearly an incredible technology,
clearly one of the greatest inventions,
and it's up there with electricity and fire.
It's really, really cool.
The computers talk now, it's incredible.
At the same time, what is the market dynamic
that drives how this technology will accrue value
in various places?
Who will the winners be?
A whole bunch of startups?
Will there be a monopoly player, and it comes from a startup?
it comes from a startup?
Will there be a monopoly player
and then sustaining innovation in every other Mag-7?
And this is the question that everyone has been talking
about for years, for a couple of years now.
This is what Ben Thompson writes about,
what we talk about all the time.
And I think people are gradually waking up to the idea
that like it's possible that a lot of the value creation
at the foundation model layer will happen at OpenAI
because of their consumer products.
And this aligns with Sam's piece from last week,
The Gentle Singularity.
It's basically saying like we created intelligence
and it's less weird than we thought,
which is a step back from how things were talked about
and is a stark difference than maybe AI 2027,
which is super, extremely AGI-pilled and just saying,
we're going to continue accelerating.
And I don't know.
I think it's right to kind of read into this.
And two things can be possible.
It's possible that Ilya will create a very important lab
with SSI, but it's also possible that they might never
grow into a $30 billion valuation.
Is that where they are now?
They're currently priced at $30 billion.
So for DG to leave as a co-founder of that company,
I'm sure he'd get a 10-figure package
if he goes to Meta, if this goes through.
But he's also leaving, I would imagine, billions of dollars
of shares.
This seems like very rumor mill at this point.
Like, this could go a bunch of different ways.
It could just be talks, and maybe they just
come on as advisors or something,
or they join the board.
Like Meta has a board that includes people
that don't work at that company,
and they add a lot of value there, so that could happen.
It could also be that Meta winds up acquiring SSI.
That would be a wildly different take on this,
or it could be that they leave.
The Information seems somewhat confident about this,
but it's only a couple sources.
SSI has multiple co-founders. It's very possible that,
even since the company was started,
certain members of the co-founding team, like Ilya,
want to build what they see as superintelligence,
and it's possible other members of the team are like,
yep, this is more like software,
we're gonna vend this out in a bunch of different places. And ultimately, you know,
it just, I don't know, you don't typically see people get off rocket ships.
I agree.
Especially co-founders unless there was an extreme rift.
There's another side to this, which is there's a question about what structure, even if the goal still is
super intelligence, what is the corporate structure
and the capital formation structure
that delivers super intelligence?
Because we saw this with OpenAI when it was a non-profit.
There was simply no way to marshal a $10 billion donation to a nonprofit for
a large GPT-4.5, GPT-5 level training run.
There was no way to marshal that type of capital.
The richest people in the world had already donated $100 million and there was not really
a lot of appetite for, yeah, next year I'm 10Xing my donation.
And so they had to become a for-profit.
Then you look at the flywheel of what it takes
to continue to develop and continue to do these training runs
and continue to invest in reinforcement learning
and it feels like you need a data feedback loop
and you also need a financial feedback loop
to be able to justify more and more investments.
And so, we're gonna talk to Mike at ARC-AGI,
and there's this interesting thing that we heard
which was that the reason that the foundation models
are not able to one-shot ARC-AGI right now
is because they're all just running it out of the box and not doing reinforcement learning on it. But
if they actually did, like, some fine-tuning around it and they were like,
hey, we want to knock this benchmark off, they could. And what that tells me is that
for any really well-defined problem, like chess or Dota 2 or Go or League of Legends,
Like you can go and say, hey, we're doing a specific
training run for this one problem
and it's gonna get really good at it.
The weird thing is that the economy
and the global value creation chain from humanity
is potentially extremely long-tailed.
There's potentially not just like five skills,
like oh yes, you know IMO level math,
and you're good and you generalize.
You might need to go and dig into all these
different pieces of value,
and having a feedback loop or an economic model
like what OpenAI has with their app
that generates a ton of revenue,
or like what Meta has where they can deploy these products
in all sorts of different ways
and get billions of people using them very quickly,
that actually might be a more,
like it might be the only way forward.
You might not just be able to go into monk mode,
come up with the perfect algorithm,
and then train it on some like medium sized cluster.
You might actually need to just scale energy,
scale data center capacity, and scale users smoothly
for decades to get there.
So I don't know that this is updating my probability
of superintelligence ever happening.
And the question is how many different labs
that are losing billions of dollars a year
can the capital market support and for how long?
Exactly. Right?
Yeah.
You have xAI, Thinking Machines, Safe Superintelligence.
Yep.
Anthropic and OpenAI are kind of in their own categories
in that they are generating a lot of revenue.
It's also hard because it's not like biotech
where if you come up with a machine learning algorithm
or you come up with the transformer, you patent it
and then you just make money off of it forever.
That's not the way these innovations work.
Until you forget to renew.
Yeah, yeah, yeah.
Or they run out, for Novo Nordisk.
If that was the case, I would actually be maybe more bullish
on SSI because I would say, well,
Ilya is clearly an incredible researcher.
If he goes into his team
and comes up with the next great training paradigm
and then patents that and is able to license that
to Google and OpenAI, that could be extremely valuable,
but that's just not the way the structure of the market is.
It's not like drug development.
Exactly.
Anyway, it's a fascinating story.
There's been a ton of reaction to this.
Nick says, this is somewhat related,
Karpathy literally said Meta's Llama ecosystem
is becoming the Linux of AI and you're blackpilling?
And so this was kind of like a narrative violation.
A lot of people have been anti.
We should get a little bit more into the article
because it does give some important color.
So Friedman has been involved in Meta AI's efforts
for at least the past year.
In May 2024, he joined an advisory group
to consult with Meta's leaders about the company's AI technology and products.
Earlier this year, Zuckerberg asked Friedman to lead
Meta's AI efforts altogether, the person familiar with the discussions said. Friedman declined but helped brainstorm other candidates, including Wang.
While Zuckerberg was skeptical Wang would leave scale,
Friedman convinced him a deal was possible,
said a second person with knowledge of the discussions.
As the Wang hiring came together,
Zuckerberg approached Friedman again,
this time Friedman agreed to a deal of his own.
He is currently expected to report to Wang,
who is roughly 20 years his junior.
Both men will be a part of a small group of meta leaders
that Zuckerberg refers to as his management team or M team.
For Gross, the talks with Meta put him in an awkward position
with SSI, a startup formed with the goal of building
a leading AI company insulated from short term
commercial pressures.
So again, SSI strategy from the beginning is saying,
we're not gonna release anything
until we create super intelligence.
I just think it might be the nature of the economy and the nature of artificial intelligence and the structure of the market
that might mean it is impossible to
insulate yourself from short-term commercial pressures. Yeah, the question is, you have billions of dollars on your balance sheet, and you
hypothetically could just do AI research forever, just off of the interest yield alone.
Yeah, except for the fact
that if you want to compete from a scaling standpoint,
you have to spend billions of dollars on GPUs and data
centers and training runs and things like that.
So we'll see. There's a very real tension there that will
have to be resolved somehow.
The startup SSI hasn't yet launched a product
or described in detail what it planned to build.
Gross's departure for Meta would damage
an important investment for some top venture capital firms.
In April, SSI raised $2 billion
at a $32 billion valuation from investors
such as Greenoaks, Andreessen Horowitz,
and Lightspeed Venture Partners.
It has also raised money from Sequoia Capital.
They basically got everybody.
They got the whole crew together.
Together, Friedman and Gross have invested
in some of the buzziest AI startups,
including search startup Perplexity
and robotics startup The Bot Company.
That's Kyle Vogt's company.
The firm had more than two billion of assets
under management as of last year,
though that figure is likely higher now
with the increase in value of some of their startups.
So it's wild.
Anyways, this feels crazy.
But Nat Friedman independently going to work at Meta
is not that crazy.
I feel like the craziest part is someone like DG going from SSI
to Meta.
But at the same time, it's very possible
that SSI and Meta could work out some type of relationship
and maybe that's not getting reported yet.
What's interesting is that both of these guys,
Daniel Gross and Nat Friedman, were both at one time
thought to be future really, really significant leaders
in Mag-7 companies.
So Daniel Gross, he started an artificial intelligence
company, I believe went through YC,
and then, or maybe he went to YC after,
but he sold it to Apple.
And at Apple, everyone was kind of like, wow,
now that he's in there leading AI at Apple,
he's going to be kind of like this young, incredible talent.
Maybe he'll be like the next Steve Jobs.
Maybe he'll, like, take over the company one day.
People were kind of like waiting for that,
but I don't think Apple was really set up for this.
DG was accepted into YC in 2010.
He was the youngest founder ever accepted.
Yeah, yeah, yeah.
And then he went back as a partner shortly after
because he left Apple.
But there is a different like fork in the road
where Daniel Gross is like next in line
to run Apple after Tim Cook,
if they were set up to empower someone young,
which I don't think any of these big companies
really are necessarily, maybe except for Meta.
And then Nat Friedman has the same thing
where he's CEO of GitHub, he goes into Microsoft.
It was always a possibility. GitHub's really important,
it's this $500 million business, it's growing,
it's codegen, he's set up in the tech industry,
like he could have potentially taken over for Satya
at some point.
Yeah, yeah, yeah, and that's just Copilot.
And so there was a world where you could see them
at the ranks, but we don't think about it this way
because most of the succession plans
in manager mode big tech companies are more managers.
We don't tend to acquire founders
and let them take the helm.
But Zuck, it's not like he's stepping aside by any means,
but he's very much leaning into this idea of like,
there is something special about these founders,
these people who have built companies,
these people who are at the heart of the technology,
really, really in the midst of things,
get them on my side at any cost.
And I love it, I think it's amazing.
Nat and Daniel both want to make a dent in the world,
especially in the context of AI, right?
So they're not gonna wanna go to Meta
and just cruise and make ads 10% better,
you know, that kind of thing.
Make it easier to generate.
You do this deal and then you just go and rest and vest.
I don't think that's going to happen.
No, I can't see it.
Anyway.
It would be interesting, too.
I wonder, would they continue to be
able to invest independently, or would there
be a kind of structure that says,
like, no, you actually have to just go all in on this?
If I was Zuck, I would hope and expect that,
but who knows.
Whatever they're working on,
I'm sure they'll be using Linear over there.
Linear is a purpose-built tool
for planning and building products.
Meet the system for modern software development,
streamline issues, projects, and product roadmaps.
And they got Linear for agents, folks.
Dylan Patel is doing a little meme on this.
Zuck, founder mode master plan.
Don't pay the PyTorch and LLM people enough,
lose 20% of the Torch people to Thinking Machines,
hire Alex Wang, Nat Friedman for 10 plus billion dollars
to help you recruit talent,
Inflection-back the Torch people
at 10x their previous total comp.
And so Dylan's obviously saying like,
you should have just bet on the same people earlier
and kept them. Unclear how much of it was really about pay.
But clearly that is not a gating issue anymore.
The floodgates have opened.
This was something that was identified earlier.
We covered a timeline post about this
where it was like, how will Apple compete in a world
where they can't justify paying anyone $10 million a year?
If that's the new normal, or that's the value
of some of these people that are gonna
do some of this research, you're gonna be kinda hamstrung.
And it's not because you're not spending $10 million
on an organization, it's because you're not spending
$10 million on a person.
It's a crazy new thing.
Yeah, Sam Altman was taking shots at Meta earlier this week.
Yeah, that's on the cover of the Red Angel Times today.
You got it right here.
Meta accused of attempting top-up.
Sam came out and said Meta started
making these giant offers to a lot of people on our team,
like $100 million signing bonuses and more than that comp
per year.
I'm really happy that, at least so far,
none of our best people have decided
to take them up on that.
Yeah, the meta game in here is like wild, it's so good.
3D chess.
Yeah, it's a lot.
None of our best people is just getting into,
getting into Zuck's head.
Yeah, yeah, yeah.
Yeah, so somebody had a good breakdown of that.
He says, the strategy of a ton of upfront guaranteed comp
and that being the reason you tell someone to join,
really the degree to which they're focusing on that
and not the work and not the mission,
I don't think that's going to set up a great culture,
Altman added.
I mean, the only thing here is tell that
to the world of Wall Street and hedge funds,
where if somebody is just really good at making money,
you'll just offer them a maxed out contract
to come over to your team, and it's entirely motivated by
the value that they're creating.
But it's very trackable, and it's a lot harder in tech.
But still, it's clearly up there
if you're moving the market cap, you can kind of tell.
Spore says, is Ilya's SSI already DOA
if its co-founder is potentially about to be poached?
Good question.
swyx says, these guys are already centi-millionaires
so we're not talking about $100 million signing bonus
anymore, it's the first 1 billion signing bonus in history.
This is going to cost.
Zuck clearly is in spend mode. If you think
$100 million bonuses are high,
This is a guy who lost 14 billion in 2022,
16 billion in 2023, 18 billion in 2024,
and 20 billion in 2025 to invest in VR.
All he has to do is cut VR spend for 2025,
and he has more money than Anthropic has raised
in its entire lifetime.
I hadn't put it in those terms.
Never bet against Zuck long term,
but I think we're in for another costly period
of investment, and we know what happened last time
he went so hard on a thing.
We do not have the balls or imaginations
to do what he is about to do.
Yeah.
Can you imagine being Tim Cook running Apple,
three trillion dollar company,
making a paltry 74.6 million in 2024?
Just after going through the most brutal year of his time at Apple, you know, it's
brutal, pulling the company back from the brink of this trade
war.
Yeah, yeah, yeah.
He's sitting there.
He's checking his pay stubs, being like, the guy at Scale who doesn't even
work there
just got paid out bigger than me. When you put it into context that some 24-year-old AI
researcher who's cracked and deserves
a great role at a great company with great pay,
making more than the CEO of Apple, it's just absolutely
brutal.
So we still need to organize this protest,
hit the streets for Tim Cook.
We do, we do.
Head over to Cupertino.
Yeah, we do.
We should design some posters in Figma for it.
Go to figma.com.
Think bigger, build faster.
Figma helps design and development teams
build great products together.
Nathan, trying to do the show.
While all these companies are duking it out,
Figma is powering the design teams of all of them.
Yes.
So it's kind of like
a one hand washes the other scenario.
We love to see it.
Nathan says,
Zuck coming out and winning Gross and Friedman to lead AI
for tens of billions of dollars was not on my bingo card.
I don't think many people predicted anything
like this happening.
I think everyone was kind of saying like,
there's probably going to be like some sort of V2
of the llama strategy, but being so talent focused,
I think was not on the table.
It was more like, okay, maybe they'd do an acquisition
of a foundation model lab, or maybe they would just build
an even bigger data center since they have abilities there.
And it's been a very different strategy.
We don't know much about sports, but there's probably,
I was trying to think, it's like Luka Dončić
going from Dallas to the Lakers.
That was a big surprise.
SSI co-founder going to Meta.
Yeah, this is the Luka of tech for people
who know what reinforcement learning is.
Exactly.
Luke Metro chimes in: dog helmets, you suck, Peng.
He's over at Anduril right now.
The meme has been, do you wanna just sell ads
or do you wanna build something important at Anduril?
Well, with these pay packages,
I think you're gonna get some Anduril engineers being like,
I'm willing to sell ads.
I'm willing to optimize ads, actually. I see a lot of ads throughout my day. Yeah. I've always been kind of fascinated by that. You know, protecting
the world and ensuring, like, Western-led peace, creating world peace. And at a certain dollar value, ads are cool too.
Yeah.
Yeah.
Oh, absolutely insane.
Well, I'm excited to see this unfold.
I mean, some of, it's interesting the way
that this reporting is written.
It feels at times like it's already happened,
but it's clearly not confirmed.
So.
Yeah, it could kind of go either way. But I mean, we heard the leaks about Scale AI
a few days ago, and there was some speculation
about what was going on there and it became very real.
And so, you know, who knows?
Maybe it does become real, but we'll leave tracking it here.
Nearcyan says, ladies and gentlemen,
Midjourney has done it.
It's a new AI image to video model.
Justine Moore from Andreessen Horowitz
mentioned this yesterday,
but the posts have been going out on the timeline.
We have our intern Tyler Cosgrove in the studio today
playing with Midjourney video.
How's it going so far?
Can you give us a little review?
Good, it's been a lot of fun so far.
So you're in the Midjourney Discord right now?
Yes.
Fantastic.
I've actually made four videos so far.
Okay, we can pull those up. Yeah, let's see. I'm excited to see these.
He's in the Discord. He's in the trenches.
How has the interface been? You just upload an image. Does it do the same thing that you get with a Midjourney image, where you type a prompt and then you get four images and you get to pick one?
Yes.
Yeah, okay, so you get four video results, and then you up-res the one that you like, basically.
Yeah, I think when you export, it basically does that.
Got it. Oh, okay, here, you kicked it off with an image of us reading the paper.
You know, kind of bear domestication, right?
This is bear domestication.
So did you include a prompt alongside the image?
Yeah, yeah.
OK.
You add an image, and then you prompt it.
Oh, and then you add a prompt.
OK, cool, cool.
It knows us too well.
That was the first one.
OK, let's see the second one.
Love it.
This is great.
Bear domestication is in our future.
OK, this is us on our phones.
And what is this?
Kind of an angel flying over.
Very bizarre.
I was thinking Pegasus.
It kind of has a bit of a demon vibe.
Who's the angel?
In the back, that uh.
What was the prompt for this one?
Let's see, that one.
Angel wings or something.
Angel flies up behind two men as they look back and smile.
OK, yeah.
The actual video on us.
OK, what's this one?
What's this one here?
This is us in the studio.
Little meta.
Oh, that's extremely demonic.
The aliens come and steal the gong.
This is super creepy.
I don't like this.
Bring the horses.
Bring the air horn back.
Oh, that's weird.
They steal the gong.
Wow, they really just... Hold your position. Okay.
I think there's one more. Yeah, let's play the last one. What have you got?
What's this one? That last one was bizarre. Very, very.
So the physics are pretty solid. Sometimes you see a bit of the same thing with Veo 3, where, like, if a car is driving, you'll see the back of the car.
Okay. We got us standing at the pool. Let's take a look at this.
Okay, we're back. That's incredible. Okay, finish strong. Ten out of ten for Midjourney again. There we go.
I love this. This is great. That is cool. It looks really good, too.
Yeah, always bring your F-35 Joint Strike Fighter in.
But okay, that was cool. I liked that one a lot.
Yeah, a lot less demonic. Took us on a bit of a roller coaster.
Yeah, I did not like that angel. That was weird. That was very weird, a bit creepy. The aliens were very bizarre, but you redeemed it all. You won it all back.
Fantastic.
Give us a review, an overview of the actual experience.
How long does it take to generate these? Are you hitting rate limits?
How much does it cost? Give us the breakdown of, like, the consumer experience.
Yeah, it's really good. I mean, I'm on the $30 a month plan,
which is kind of the mid-tier one.
But it's very fast.
I mean, it takes probably 10 seconds,
15 seconds, it's really fast.
Whoa, okay, Veo 3 is like two minutes.
So you can iterate like super quick.
Oh, that's cool, okay.
But it's very easy to use.
I mean, I haven't used,
so I'm actually not on the Discord, I'm on the website.
Okay, yeah.
But it's very easy to use, yeah.
Okay, very cool, very cool.
And you can run them concurrently,
so I could do multiple at the same time.
That's great. Yeah, really great. Awesome. Well, very fun. We'll be tracking it more, asking people
how it's benchmarking, how it's working. We'll have to have some fun with those.
I had a lot of fun with the Veo 3 ones. We were doing the crashing through the Hollywood sign,
a few too many bottles of Dom Pérignon on the back of a Ferrari. Yeah, I didn't like how there weren't guardrails.
There seemingly weren't guardrails.
Yeah, that's an AI safety issue.
I shouldn't just be able to prompt it into drunk driving.
Yeah, bottles of champagne flying out of the car.
The quality was remarkable.
Anyway, Midjourney is having fun on X.
They say, introducing our V1 video model.
It's fun, easy, and beautiful, available at $10 a month.
It's the first video model for everyone,
and it's available now.
How many prompts do you get for $10 a month?
I don't know.
You want to look it up?
Yeah, I mean, I think it's actually unlimited,
but it just takes longer.
It takes longer.
Yeah, but I'll verify that.
That's insane when you put it into the context
of those outputs, which are in many ways better, or at least on par with Veo, from an entertainment-value standpoint.
Yep, and Veo is $500 a month. I'm still gated, I can only do three per day.
I have to come back, and they take two minutes a pop. Yeah, speed of iteration is really, really key.
So I mean, that's the whole Discord model: get people iterating, sharing ideas,
exploring the space, and figuring out what works.
Even just from seeing those four, I feel very confident about its ability to
render aircraft, and so I'm probably not going to go and prompt a bunch more alien videos,
but I'll definitely be prompting a bunch more F-35 videos
because it seems to do that really, really well.
And so the more people you have making more stuff,
the more you learn the guardrails,
learn how to use it creatively
and can actually make a better product.
But Midjourney was having some fun.
Devin Fan from xAI says,
"I know what I'll be doing this weekend,"
and Midjourney says, "What weekend?"
And Will Depue says, "LMAO."
Blake Robbins says, Midjourney video is breaking my brain,
and everyone's having a good reaction to this.
So, it's always fun to have a new AI tool,
and we're talking to a couple AI folks on the show today,
so we'll be running through that, getting their reactions,
and talking more stuff.
Elon Musk posted the very sad, what is this,
the peepo, pepe or something, it's the green frog,
he's smoking a cigarette, he's not happy,
probably because RIP to ship 36.
4 a.m.
Brutal.
We... Ashley Vance, we should, I
think a picture of Elon, you know, smoking a heater
after one of his rockets blew up would
become a timeless meme. It would be worth his
comms team working on putting that together, maybe working with Ashley Vance to get that shot. Yeah.
So girls say, I can't believe he didn't cry at the Titanic, do men even have feelings? Boys:
crying at the sight of Ship 36 exploding. Very, very sad. And then Elon says, just a
scratch. The entire thing blew up. It was just a flesh wound. It was intense watching it.
I mean, the ball of fire here is immense.
So the Starship exploded during a test in Texas,
a setback for Musk's Mars ambitions.
Now the Mars transfer window is very, very tight.
Like you can only get from the earth to Mars,
like once every 18 months or something,
or maybe even more.
It's really hard because like,
if the planets are on the opposite side
of the solar system, like you just can't,
even though you have a rocket,
you just can't get over there.
So you have to wait until they're lined up
and then you can do it.
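For the curious, the launch-window cadence the hosts are describing falls out of the standard synodic-period formula; a minimal sketch, assuming circular, coplanar orbits (a rough approximation, not how mission planners actually compute windows):

```python
# Synodic period: how often Earth and Mars line up for a transfer window.
# Assumes circular, coplanar orbits -- back-of-the-envelope only.
EARTH_YEAR_DAYS = 365.25
MARS_YEAR_DAYS = 686.98

# Relative angular rate of the two planets determines how often they realign.
synodic_days = 1 / (1 / EARTH_YEAR_DAYS - 1 / MARS_YEAR_DAYS)
print(f"Transfer windows recur every ~{synodic_days:.0f} days "
      f"(~{synodic_days / 30.44:.0f} months)")
```

That works out to roughly 780 days, about 26 months, so a missed window really does mean a wait of over two years, a bit longer than the "18 months" quoted on air.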
But realistically skill issue?
Realistically true, skill issue.
If you built an even faster rocket,
you could get there no matter what.
You just pilot, steer it around like it's a GT3RS
around the Nurburgring, no problem.
So the explosion occurred during static fire test.
No injuries were reported.
Thank goodness.
We love autonomy.
Very, very happy to hear that no one was injured
during this because it looked horrific
and it looked like in any other scenario,
there would be a bunch of technicians there,
but fortunately they were able
to do everything remotely, which is great.
And then Starship faces pressure to meet deadlines
for NASA's moon mission and Mars exploration.
So there's a big NASA moon contract
that's very important, very material to the business.
Obviously SpaceX has a lot of other business lines,
but this one's very, very important too,
and we hope that they can get back on track.
SpaceX is making an enormous bet on Starship,
which stands roughly 400 feet tall at liftoff
as it tries to break ground with new reusable rockets.
And the paradigm of Starship, it's not just a bigger rocket,
it is way more reusable.
You look at a thing, it comes down,
gets caught by those arms, can instantly be refueled and sent back up
You're talking about potentially like multiple flights per day
And so the problem here is not, can you build a big rocket? Humanity has done that before. Humanity's built a rocket
that's roughly on par. We've gotten to the moon before.
The challenge now is not, can we get to the moon?
It's the same thing with, like,
the challenge is not, can we build a flying car?
We have helicopters.
Can we build one humanoid robot
or one self-driving car in San Francisco?
It's like, can we actually scale these systems
to the point that it is safe to go to the moon
and back at the drop of a hat for 200 bucks?
Like that's the challenge.
It's more of an economic and industrial might challenge.
And that's a completely different challenge
from just can we get one rocket to the moon,
an exquisite system.
We're looking for a reusable, scalable,
you know, engineered system.
So good luck to Elon
and the entire SpaceX team rebuilding.
I'm sure it's a huge challenge right now.
But let's do some ads to tell you about Attio.
Customer relationship magic. Attio is the AI-native CRM
that builds, scales, and grows your company to the next level.
Get started for free at attio.com.
Attio.com, I like it, you can use code.
Wait, what is that, what is that sound at the end?
Is that attached to that soundboard?
Guys, I think you botched the action movie sound effect.
Oh no, it has the, okay.
In other news, the Los Angeles Lakers have been sold
for 10 billion in the richest deal in sports history.
Guggenheim Partners CEO Mark Walter,
who also owns MLB's Dodgers,
is acquiring the storied NBA team in a move that makes it the world's most valuable sports franchise.
And it's so funny because the Wall Street Journal's framing
of this is like, this is the biggest deal ever.
No one's ever done a deal like this.
And we're like, wait, so you're talking about, like, a
Series A for a foundation model company?
Like, as a tech person,
I'm just like, 10 billion dollars?
I mean, we should ring the gong, but it's not exactly like the first time.
It's not even the first time this show we've heard of
a decacorn.
Congratulations to the Lakers.
Mark Walter and the whole team.
It's fantastic.
Major premium to the Boston Celtics
who sold for 6.1 billion.
And now the Lakers are the most valuable sports franchise.
But they just don't do enough volume.
There's only a couple games, you know?
They're not 24 seven.
Like Instagram, does that ever go offline?
No, there's always entertainment.
Lakers, they're still doing seasons.
They need to have 24-hour basketball.
They want to really get there.
Around the clock, it's like endurance,
endurance basketball.
It's just a week long game, you know?
Gotta always have five players on the court.
Just constantly.
Running up.
It's the only option.
Jeannie Buss and her family,
who have owned the Los Angeles Lakers
since Jerry Buss bought the team in 1979, wow,
on Wednesday agreed to sell majority control
of the storied team to Mark Walter, the sports investor.
And I looked at the return on investment
of owning the Lakers over those 40-odd years,
slightly under the S&P 500.
Like it was a really, really good deal
and it was a really great company that grew a lot,
but it didn't outperform the stock market.
Just diversification bros, DCA bros, undefeated again.
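The "slightly under the S&P 500" claim can be sanity-checked with a quick compound-annual-growth-rate calculation, using the figures quoted in the segment ($67 million purchase in 1979, $10 billion sale in 2025):

```python
# Back-of-the-envelope CAGR for the Buss family's Lakers stake.
buy_price = 67e6        # purchase price, 1979, per the segment
sale_price = 10e9       # reported sale valuation, 2025
years = 2025 - 1979     # holding period

cagr = (sale_price / buy_price) ** (1 / years) - 1
print(f"~{cagr:.1%} per year")  # roughly 11.5%
```

Around 11.5% annualized, which indeed sits just under the S&P 500's commonly cited total-return CAGR of roughly 11 to 12 percent over the same stretch, and that's before counting the Lakers' distributions and perks.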
Well if you're trying to DCA, do it on public.com.
Investing for those who take it seriously,
multi-asset investing, industry leading yields,
they're trusted by millions, folks.
Anyway, Walter, who is part of the ownership group
that owns the Dodgers, has been part of the Lakers
since 2021 when he purchased a 27% minority stake
in the franchise.
He's also a co-owner of Chelsea in the English Premier League,
the WNBA's Los Angeles Sparks,
and the newly formed Cadillac Formula One team.
Let's hear for Cadillac.
Let's go.
Let's hear for Cadillac.
Congratulations.
John, John, you know, front-ran.
I can't hear you at all.
John front-ran the Cadillac F1 team
and got a Cadillac for himself over there.
You can see the black window.
It's great to have an American F1 team in the business now.
Yeah, we've fallen off, but we're coming back.
It's great.
You're not gonna be able to get one of these
in the whole country.
I don't think so.
They're gonna be too popular.
After the F1 team gets out on the track.
The sale marks the end of nearly a century
of Lakers control by a family that has become synonymous
with Los Angeles sports and the glitz
of professional basketball.
The deal also comes at a time of skyrocketing valuations
in professional basketball, which haven't come back
to earth since the league announced a media rights deal
last year worth 77 billion. When the Celtics sold
in March, the $6.1 billion valuation exceeded
the previous record valuation set for a sports team,
the 6.05 billion sale of the NFL's Washington Commanders in 2023.
Buss purchased the Lakers for $67 million in 1979.
The team transformed from a franchise uprooted from Minnesota into one of the winningest and most valuable sports properties.
I had no idea that they were founded in Minnesota. That's where the Laker name comes from.
Interesting.
Minnesota is the land of ten thousand lakes.
They were the Lakers
because there's a lot of lakes in Minnesota.
And then they just brought them to LA and kept the name.
But that's what Lakers means.
Wow.
The Buss family oversaw the creation of Showtime
and presided over the NBA's last three-peat.
A-listers like Jack Nicholson and Leonardo DiCaprio
have become fixtures at the games.
And when they sell merch, they need to pay sales tax.
They should get on numeral.com,
numeralhq.com, sales tax on autopilot.
Spend less than five minutes per month
on sales tax compliance.
You know all the time.
They have won 11 championships since 1980.
Their rosters have boasted many of basketball's
brightest stars, Magic Johnson, Kareem Abdul-Jabbar,
Kobe Bryant, Shaquille O'Neal, LeBron James,
and LeBron James' son have all worn
the Lakers' purple and gold.
I love it.
It's such a cool, yeah, the father-son duo.
I mean, I feel like that should have been
a bigger national news story.
It's such a cool thing.
I think it's like not,
like if they were winning championships together immediately,
that might be a different story,
but it's just so insane
that you could be playing professional basketball
with your son.
It's amazing.
Just because you could've earned a better return
by DCA-ing into the stock market.
That's not why people own these assets though.
Owning the Lakers for a number of decades,
I imagine was absolutely priceless.
So great investment.
You get the owner's box.
Great run.
Yeah, all the perks, you have to add those in.
Do you get perks from DCAing into the S&P?
I like how Lakers legend Magic Johnson hit the timeline,
said, just like I thought, when the Celtics sold for 6B,
I knew the Lakers were worth 10B.
Let's go.
The confidence of Magic Johnson.
Great investor, too.
He's got a bunch of good stuff in the portfolio.
Anyway, more news on the Scale AI transaction.
So it's closed.
I believe that Alexandr Wang has a badge at Meta
and shows up to work in Palo Alto
and clocks in at Meta HQ now.
Scale AI is still a going concern,
is still a company, but every competitor is out for blood
and they want to take as much of the business as they can
since obviously the perception is that Scale AI
will primarily be working with Meta
and that other foundation model labs
might not want to do business with Scale AI anymore.
Unclear if they can separate out the businesses,
if they can separate them out fully over time
and sell the position to other investors,
create like a diversified,
I mean they could even take the company public.
At which point, I imagine that it would be a lot less,
a lot less of a conflict of interest or a fear.
But there's been news that OpenAI said,
hey, we're not training,
we're not using Scale AI for data anymore
because it's too aligned with our competitor, Llama, maybe.
But everyone's trying to get a piece.
Yeah, a lot of this was very predictable.
I don't think Meta and Scale's teams looked at it and said,
hey, if we sell right now to Meta, which is competing
in open-source AI, we're totally going
to retain all of our customers.
People aren't just going to immediately turn off.
No, they were smart enough to know what would happen.
There was an article, I think yesterday, about OpenAI, you know, ending their relationship with Scale.
But from what we knew, they hadn't been doing much, yep, for a while.
That's part of the reason why Mercor had been, and they also brought a big function in-house, because for some of the more complex
tasks, it makes sense to generate the reinforcement
learning data yourself.
And there's just so many other services.
Having a single point of failure
never makes sense for a business of that size,
but we'll see.
So the information has an article here
about a little-known startup that has surged,
hint, hint, past Scale AI without any investors.
This is interesting.
After Meta Platform Scale AI deal,
data labeling is looking like
Silicon Valley's hottest new interest.
That's an enormous opportunity for Edwin Chen's Surge AI.
For years, data labeling existed
in a tucked away corner of Silicon Valley,
a critical but unglamorous area of AI
where companies like Google and OpenAI
hire outside firms to improve their models
by laboriously grading the quality of what they produce.
Now, a spotlight has unexpectedly fallen onto the field
in the wake of Meta Platform's decision to pay
14.3 billion for 49% of Scale AI,
the best known data labeling firm.
But it's not the largest such firm,
nor perhaps the most impressive.
That title belongs to Surge AI, founded by Edwin Chen.
This is fascinating.
I didn't know this.
One billion in sales last year.
Bigger than Scale.
Yeah, so Chen's startup has won customers
like Google, OpenAI, and Anthropic.
It's such a testament to the idea that like,
sure you can bootstrap, but it's so incredibly hard
to have any hype around your business
if you're bootstrapped because you're not having, your investors aren't hitting the
timeline for you on a daily basis.
And also, if you're not trying to raise capital, you have less need to go and be loud and go
on podcasts and talk to the press and all this stuff because you're just making a lot of money
and sometimes it can be beneficial
for people to not know about you.
So this is crazy, crazy stats.
So Chen is 37, he has no investors
and has bootstrapped a five-year-old startup
entirely by himself, which has 110 employees
in offices in New York and San Francisco.
The company generated more than $1 billion
in revenue last year.
Surge has told employees, a previously reported figure
that exceeds the $870 million Scale generated in revenue
during the same time period.
And unlike Scale, Surge was profitable
and has been from the beginning, Chen said.
Moreover, Surge could see its sales get even larger
if other companies copy OpenAI's decision
to stop hiring Scale, a choice made over concerns
about Scale's relationship with Meta,
and shift business to Surge.
Other key financial metrics couldn't be learned,
like how much revenue Surge keeps
after paying its workforce of mostly contractors.
So there is a question about the margin,
since this is somewhat of a marketplace business.
This could be a situation where a you know, a thousand dollar contract comes in
and $800 of that contract goes to the actual contractor
who's doing the work of the data labeling.
But at the same time, even if it's 200 million
in like, you know, like net revenue,
that's still a huge business.
It's hard to imagine Surge not being a fantastic business
if they haven't had to raise money.
They have 110 employees and they're used by Google and all these
major foundation model labs, so it seems like a fantastic business.
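The margin question raised here is simple pass-through arithmetic; a hypothetical sketch (the 80/20 split comes from the hosts' illustrative example, not any reported figure):

```python
# Hypothetical marketplace take-rate math for a data-labeling business:
# if ~$800 of every $1,000 contract goes to the contractor doing the work,
# the platform keeps a 20% take rate on gross revenue.
gross_revenue = 1_000_000_000         # ~$1B gross, per the article
contractor_payout_per_1000 = 800      # assumed pass-through from the example

take_rate = 1 - contractor_payout_per_1000 / 1000
net_revenue = gross_revenue * take_rate
print(f"${net_revenue / 1e6:.0f}M net on a {take_rate:.0%} take rate")
```

Even under that pessimistic split, the net figure is a couple hundred million dollars a year, which is the hosts' point.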
But if Surge could earn a valuation from investors similar to the one Scale received from Meta, such
a price would make Chen a billionaire many times over, at least on paper, and quietly
one of the wealthiest people in tech.
Interesting. I'm very interested to see what he did before this company.
Edwin Chen, I feel like I've heard that name before,
but I don't know.
As AI models transform from toys into real business tools,
data labeling is becoming more and more essential.
Contractors hired by companies like Surge grade the responses
from AI models and write thousands of questions and answers
in fields like programming, math,
and law to feed those AI models.
And so, I wonder if this is gonna go the route of,
you are Deloitte or McKinsey,
and you're going to have your team,
but then also a company like Surge,
create a ton of training data around a specific workflow
that is costing your business 20 or 50
or 100 million dollars every year.
And then, so it's like instead of like the AI BDR
that's like kind of generically writing emails
based on like the average of the entire internet,
it's like no, this is a fine tune for your business,
perfectly trained, perfectly,
and it really distills what you do excellently.
I don't know if it'll go that way.
I'm interested to talk to people about it.
So,
Surge's subsidiary, Data Annotation Tech,
says workers get paid to train AI on your own schedule
with wages starting at $20 an hour.
Chen has distinguished Surge by making it the high-end shop,
charging premium rates often two to five times
what Scale might bill.
Surge justifies the prices with its reputation
for industry-leading work.
Indeed, one former Scale employee said
Surge often performed better than Scale
in customer audits of labeling quality,
and competitor Garrett Lord, who's coming on the show today,
who runs Kleiner Perkins-backed Handshake,
readily acknowledged that Chen is the number one player.
So I'm excited to talk to Garrett Lord today
about this exact topic.
Should be very interesting.
You wouldn't know that from the coverage
of Meta's blockbuster deal to quasi-acquire Scale AI.
Its CEO Alexandr Wang, who is now joining Meta
in a senior AI role, was widely regarded as the leader of the data labeling field and had become a Silicon Valley celebrity, blanketing
podcasts and conferences with his presence and posting heavily on X. It also raised $1.5 billion
in venture capital, putting Scale on a very short list of companies that have raised that much,
and he hired upwards of a thousand people. Wang had timed his exit perfectly, given the
traction of Surge, which had grown larger than Scale without outside capital
and with a tiny fraction of Scale's workforce.
Scale also missed the goal to hit a billion dollars
in revenue last year, but a Scale spokesperson
said the company stood behind its numbers.
Scale wasn't profitable either?
It was not profitable.
But wasn't burning a ton of money.
They were efficient.
I think they raised 1.5 billion and they still had
almost a billion in cash.
Yeah.
So they weren't in trouble or anything,
but at the same time it was not a wildly profitable,
not a wildly lean business, but I don't know.
What a, it's absolutely fascinating
to compare these two businesses.
It's a wild industry.
Something that, yeah, I mean, it feels like
there's such an edge just to even identifying
this opportunity years and years ago.
I mean, I guess Surge started four or five years ago,
but it was certainly like pre-chat GPT
that all these companies got started.
And then they realized like some of them got started
in self-driving car annotation,
all sorts of stuff like that.
But Chen studied linguistics and math at MIT,
came to the idea for his startup after leaving college
and witnessing firsthand how big companies struggle
with data.
Before starting Surge, Chen worked as a machine learning
engineer at Facebook, Dropbox, Google, and Twitter.
He worked at four different tech companies.
Just like going from one to the next.
That's insane.
He was developing recommendation and search algorithms
and helping gather the data needed to train them.
Despite the hefty resources of those companies,
Chen encountered a lot of problems.
At Facebook, for instance,
Chen was tasked with helping build a Yelp competitor.
His team needed to train a model
that could correctly classify businesses,
telling the difference between restaurants
and grocery stores, for instance.
To do so, they needed a data set
containing 50,000 accurately labeled businesses,
which he found out would take six months
for an outside firm to assemble.
We had no solution other than waiting.
We simply waited.
When the data came back, Chen blanched.
In some instances, it had labeled restaurants
as coffee shops and coffee shops as hospitals.
The data was complete junk. He wouldn't say
which vendors Facebook had used.
In 2020, he left Twitter to found Surge
and picked up some of his first customers,
executives from Airbnb and Neva,
a once promising AI search engine startup,
as only a founder in San Francisco might,
bumping into them at rock climbing gyms
in the city's Dogpatch neighborhood
and the Mission District, talking up his startup.
To get Surge going, Chen recruited data labeling contractors
he knew from his previous roles
and funded the startup using his savings.
He wouldn't say how much he put in.
Fortuitously, Chen focused on language modeling
just as those types of models began to grow in importance.
Scale, by contrast, started out using more visual data
for the autonomous vehicles we talked about.
Less than a year later, OpenAI had hired Surge
to fine-tune its models by teaching them
how to avoid producing harmful responses
like racially biased language,
based on a research paper
the two companies published together. By 2022,
Anthropic was also a Surge customer.
They're putting out research papers with OpenAI
and still managed to stay this far under the radar.
Wow, yeah, so look at this, "The Label Largesse":
data labeling has proved to be a lucrative niche in AI.
Surge, founded in 2020, has over a billion in revenue,
zero funding.
Scale, founded in 2016, has 870 million in 2024, raised,
oh, this says funding raised,
but this is clearly valuation or something,
because it says 17.4 billion,
which is not what they raised.
Turing has 300 million annualized,
raised 225 million.
Invisible, mobile.
It's interesting, Turing, too, initially
was like a marketplace to just hire developers,
and I think they pivoted into data labeling.
Interesting.
It's the same thing when I work with a cloud provider.
The enterprise tech customer says,
I don't know the internal operations
for why their services work so well.
I push a button, and I'm glad for the internal work
to make that happen.
Data labeling companies typically use various techniques
to make sure contractors aren't just dialing it in
or phoning it in, I guess, when answering questions.
For instance, the companies randomly insert questions
that have no correct answers or make sure labelers agree on the right answer
to a question.
So obviously you scaffold up these responses
so that everything's like double checked
and then you can kind of see if people are messing around.
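The two quality-control techniques described here, seeding "gold" questions with known answers and requiring labelers to agree before accepting a label, can be sketched roughly like this (function names and the business-category example are illustrative, not any vendor's actual API):

```python
from collections import Counter

def gold_accuracy(answers, gold):
    """Fraction of seeded gold questions a labeler answered correctly."""
    graded = [answers[q] == truth for q, truth in gold.items() if q in answers]
    return sum(graded) / len(graded) if graded else 0.0

def consensus_label(labels, min_agreement=0.66):
    """Return the majority label if enough labelers agree, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) >= min_agreement else None

# A labeler classifies businesses; q2 is wrong, so gold accuracy is 2/3.
answers = {"q1": "restaurant", "q2": "hospital", "q3": "grocery store"}
gold = {"q1": "restaurant", "q2": "coffee shop", "q3": "grocery store"}
print(gold_accuracy(answers, gold))              # 0.666...
print(consensus_label(["cafe", "cafe", "bar"]))  # "cafe"
print(consensus_label(["cafe", "bar", "deli"]))  # None, no consensus
```

Labelers who score poorly on the hidden gold questions get flagged, and items without consensus get routed for re-labeling, which is the "double-checked" scaffolding described above.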
But wow, what a beast of a business.
No idea how big this thing is.
Amazing.
How'd you sleep last night?
I'm on a comeback.
I got an 89.
Go to eightsleep.com.
No.
Get an Eight Sleep.
I know what I got.
Five year warranty, 30 night risk free trial,
free returns, free shipping.
What'd you get?
You got 100?
88?
Let's go.
Soundboard, I demand a soundboard.
Ashton Hall, let's go.
John did it.
I did it, beat you by one.
They said he would never beat me.
I had a terrible night.
I also took a nap after I got home.
It was fantastic.
We have our next guest coming into the studio.
Mike from ARC-AGI, breaking down how all the different models are doing.
Last time he was supposed to come on, Elon and Trump decided to get in a massive timeline
war. It was brutal.
John wanted to power through it.
I said, John, the people in the YouTube chat are-
Demanding this.
Demand that we do a timeline turmoil segment.
If we don't do it, they will put up attack ads against us.
They will buy billboards against us.
They will go to adquick.com and put up attack ads on TBPN
if we don't cover the Trump-Elon blow-up. So we did, and now we're getting those folks back on the show today and later.
But if you want to take out an attack ad on us, buy a billboard on AdQuick. Out-of-home advertising made easy and measurable.
Say goodbye to the headaches of out-of-home advertising. Only AdQuick combines technology, out-of-home expertise, and data to enable efficient, seamless ad buying
across the globe.
We should do some timeline.
I wanna do this Blank Street story,
but we can do that later.
Let's dig through some what's in the timeline
while we wait for Mike to join.
Oh, we have Mike in the studio now.
Welcome to the show.
Mike, good to see you.
How you doing?
Good, I'm here.
He's back.
Thank you so much for-
I'm glad there wasn't a major breaking drama story today.
I was actually able to show up.
Yes, yes.
I don't know if you watched that show at all,
but I was just sitting here, John was so locked in,
wanted to just keep doing the show,
and I'm messaging him like,
now we actually have to-
That was a terrible day to launch anything new.
And we launched something.
I saw several startups launch stuff.
Like, regrets to everyone who tried
to get anything out that day.
Yeah, Rahul.
Actually, I remember them.
It was Rahul.
And yeah, that's right.
Julius had a launch.
And what's the voice cloning company?
11 Labs, I think they launched something.
And then I think Lulu said something.
She was like, if you have bad news today,
it would be a good day to drop it.
And then OpenAI actually flagged, like, hey,
we had this like massive, you know, uh,
Oh yeah. This, this dust up with the government, right?
Where the government was like, you have to give us your chats.
Which is like, we don't want to do this.
Yeah, I mean, it was serious. I mean, that got,
we're still looking at, you know, the end results of that, but that went really deep into the world, I feel like,
much more than maybe even got reported on. Like, every
single chat thread I was a part of was basically like, hmm,
should I, like, stop using ChatGPT as much?
It's sketchy because it feels like Anthropic has a similar policy. It seems like Google might have a similar policy.
Like there was that story a year ago about a man who was using a Google phone with a
Google Fi cellular connection and had all of his data stored in Google Drive and Gmail
and he took a picture of his child to send to a doctor, and it was kind of like a nude photo of the kid
to inspect the child for a physical medical problem.
And it got flagged as child abuse material
by an automatic system, and the automatic system
basically deplatformed him from everything Google,
and so he lost his email, his phone number,
all of his drive stuff, and it was like a false positive,
but it was really hard for him to get back through there
And so I guess my question is, like, it seems bad when we hear the story in isolation,
but maybe the problem is not the individual company, and it's instead, like, the government policy, and this applies to all the different companies.
But I do think two things can be true.
One is that it can be a massive overreach by that court to say, basically, you need to eliminate privacy on
your platform.
Yeah, yeah, yeah.
And you can simultaneously have questions around, maybe I should use this product in
a different way.
Totally.
And the inflammatory nature of it is that people use ChatGPT as a confidant and tell
it things that they wouldn't tell anyone.
They wouldn't tell anyone in their life,
and they're having those conversations, and I think that's why it struck such a
chord. Because that's true. I just saw some reporting from Coatue this
week that ChatGPT usage is like 30 minutes a day now, and
it's closing the gap with Instagram, which is just sort
of nuts to think that,
I mean, who would have thought a productivity tool
would ever be on par with a social media app, right?
In terms of daily usage.
It's so fun.
I love it.
But the interesting thing is it's filling a similar void.
It's delivering digital companionship
in maybe the way that social media products historically did
without any social element to it at all.
Just like this one to one.
It's interesting to think like,
we went from what your friends are doing
is the most interesting thing
to what the Kardashians are doing
is the most interesting thing
to actually maybe the most interesting thing
is this person that knows everything about you and is always on and always willing to talk and
You know, who knows? Yeah, I think there's something consumer being born around this stuff today.
Yeah, yeah
I mean, I find myself all the time, instead of scrolling YouTube looking for an interesting video essay
to explain how, I don't know, global shipping lanes work or something like that,
just going to ChatGPT and saying,
hey, break this down for me
and then I can just ask a follow-up
and dive exactly to the layer that I want.
And so, yeah, I'm definitely in that camp
of using ChatGPT just as like an exploration
and entertainment education tool,
an infotainment tool much more than Instagram right now,
at least for me.
But enough about that.
What is new in your world?
How should we frame kind of the current horse race
between all the foundation labs?
Yeah, okay.
So I'm gonna share a link.
I don't know if this is something y'all can pull up.
Yeah, we can pull it up.
So this is a post that we published a little over a week ago.
So I think there's
been this really big, like, what's the frontier right now in sort of AI progress, right? The
massive shift in the last six to nine months has been moving from this regime of, like,
scaling up pre-training with more and more labeled data, into this, like, test-time compute,
test-time adaptation regime. People call these AI reasoning models, right?
We're giving these models time to think out loud,
generating additional data.
Every major lab now, pretty much at this point,
I guess, except for Meta,
has one of these systems that we've been able to test
and report results on.
And I think there's some really, really interesting stuff
we're starting to see.
I think the most notable thing is that like,
there's not an
absolute clear winner across the landscape right now. There's basically a sort
of Pareto frontier that's emerged. One of the most important things, if listeners are listening
here, I think you should take away, is that anybody who gives you a benchmark score on an AI system that is a single number is just marketing to you.
Because the reality is now with these AI reasoning systems,
you have to report scores on a two-dimensional plot.
You have to consider cost and efficiency
alongside the accuracy.
And all these different lab providers
have come out with different AI reasoning systems
that sort of score differently.
They're trading off cost per accuracy at different points.
So, like, if you want just the absolute highest raw horsepower,
and cost and time is no object,
o3-high is going to be your clear winner today for that.
Um, but if you're somebody who's saying, hey, I want to plug
an AI reasoning system into an existing product I have, where I want faster
answers, and I'm willing to sacrifice some raw horsepower or generality for
quicker response times and lower cost, you might look at something like Grok or
Gemini 2.5 Flash Thinking. There's not a single best answer, which I think is
pretty interesting. And this frontier is, I think, what all
the labs are working to try and figure out: okay, how can we get accuracy as high
as we can, but also keep costs as low as we can, down toward human efficiency on the
graphs.
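The trade-off he's describing is literally a Pareto frontier: a model only survives if no other model is both cheaper and at least as accurate. A minimal sketch, with hypothetical model names and made-up numbers (not real benchmark results):

```python
# Toy illustration of a cost-vs-accuracy Pareto frontier for reasoning models.
# All names and numbers here are hypothetical placeholders, not real scores.

models = {
    "model_a": {"cost_per_task": 2.50, "accuracy": 0.88},  # expensive, most accurate
    "model_b": {"cost_per_task": 0.40, "accuracy": 0.75},
    "model_c": {"cost_per_task": 0.60, "accuracy": 0.70},  # dominated by model_b
    "model_d": {"cost_per_task": 0.05, "accuracy": 0.55},  # cheapest
}

def pareto_frontier(models):
    """Keep models not dominated by any other (cheaper AND at least as accurate)."""
    frontier = {}
    for name, m in models.items():
        dominated = any(
            o["cost_per_task"] <= m["cost_per_task"]
            and o["accuracy"] >= m["accuracy"]
            and (o["cost_per_task"] < m["cost_per_task"] or o["accuracy"] > m["accuracy"])
            for other, o in models.items()
            if other != name
        )
        if not dominated:
            frontier[name] = m
    return frontier

print(sorted(pareto_frontier(models)))  # → ['model_a', 'model_b', 'model_d']
```

This is why a single benchmark number is misleading: model_c's accuracy alone looks respectable, but it's strictly worse than model_b on both axes, so it falls off the frontier.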
Yeah, I've noticed that more recently with
kind of my default usage in ChatGPT.
4o seems super fast, but I'm always thinking,
oh, I should maybe put this in o3 Pro,
but do I wanna wait 10 minutes?
And I'm making that kind of economic calculus there,
even though, because I'm on the pro plan,
it's not a direct economic cost, it's a time cost I'm hitting.
Yeah, yeah, exactly. And so I'm kind of doing the 4o thing over here and then switching back and forth.
It's a very odd paradigm that we never really had to deal with in computing before. I mean,
I guess like if you were downloading the 4K illegal Blu-ray versus the SD. Of course we never did, purely hypothetically, but if you're on
a torrent site, there was a time trade-off between watching a screener.
Well, I think this is actually one of the reasons these AI reasoning systems, I would assert, and I don't have inside baseball
on the data, but from the outside looking in, I think there are some interesting suspicions that would suggest that these
AI reasoning systems,
at least today in their current form, have relatively weaker product-market
fit compared to the non-reasoning-based systems, right?
The pure language-model-based things.
Interesting. That's a huge violation of, like,
the DeepSeek narrative that I felt like was really bubbling up. DeepSeek
came out with like the first just open access
reasoning model, like reasoning had been tucked behind
the open AI paywall, and so the pro users were familiar
with what reasoning models could do,
everyone was very excited about them in tech
or in the early adopter crowd,
but DeepSeek, when that app came out
and you could just install it and instantly see the reasoning chain
It felt like everyone's like oh everyone's gonna be addicted to this forever
And this is gonna be the new paradigm, but it seems like that might not necessarily be happening
Jordi, you have something? Or, I wanted to talk about spiky intelligence and how that plays into this.
We had someone come on and say,
it might have been Sholto actually, talking about ARC-AGI, just saying like,
hey, all the foundation labs kind of have like a truce
that we won't reinforcement learn specifically against Arc.
I don't know how real that is from your perspective,
but it feels like increasingly we might see
like very task-dependent RL runs
kind of chipping away at specific things like IMO level math
is something that clearly like there's a ton of work
to be done on, but we don't have as many verifiable rewards
for poetry or comedy writing,
and so that'll be a little bit messier
and later down the road maybe,
but at the same time, there's probably other verifiable
rewards that are just smaller pockets
of value here and there for these little micro tasks.
And so I'm wondering if we will ever see like the marketing language around these models
evolve like Grok kind of did this with like, we are the anti woke one, but that was more
just in the overall like temperature, the vibe of the model.
But I'm wondering if there'll be an idea of like,
this one's really good at math,
this one's really good at research,
this one's really good at that,
or if they're all kind of going down the same path
with what they're trying to solve.
I do think you probably are gonna see
some domain specialization.
I think my guess over the next 12 to 24 months
is that you'd see some domain specialization
benchmark scores diverge because of how the labs
are starting to do the next evolution of training, which is they're using RL environments to generate synthetic CoT traces, doing their
sort of model trainings on that data. And they're trying to go get it on a lot of different
domains. The original o3 paper, I think, was interesting on benchmark results, where on
this new sort of CoT reasoning system, they had relatively high scores on math and coding. But the gap, or I should say the step-function increase in those scores,
that was much higher than the increase in legal reasoning. Which you would sort of maybe
intuitively guess or think or expect that legal reasoning would probably be one of the best
general domains if you trained a reasoning model that was really good on math and coding. It should be,
it's a language model, that would directly transfer into the legal
domain, because, okay, it's symbolic reasoning that's self-consistent.
And that wasn't the case.
So I think that's, I suspect that's what we'll see there.
You know, there's obviously the big scale news.
The thing that I'm seeing now is there's probably like, I don't know, a handful that I know
of these new startups
that have come up in the last several months,
but all getting founded to basically go build
RL environments to generate synthetic
or semi-synthetic data and like selling them
to sort of the major labs, which are the major frontier
folks building these next gen systems.
I think we're gonna see more of that.
I expect that's kind of where it's gonna drive
a lot of areas.
One more for you.
What does the data labeling market look
like today? We were covering Surge AI, which a lot of people
weren't familiar with. I'm sure I've seen it at some point,
but I was certainly not familiar with it until we covered it
today. What do you think the data labeling market looks
like in five years? Do you think Scale was getting out
kind of at the perfect time?
I'm curious.
I think the timing was pretty good.
I mean, look, the macro change here
is from a regime where we're scaling up pre-training.
We want as much high-quality labeled text
as we can get our hands on to scale
these foundational models, into one where we're
trying to train process models, or make the foundation models really good at process thinking and CoT generation.
That is a complete shift in how you want to generate that data. You want an RL environment
that you can create lots and lots of CoT traces, really very long traces of long-running tasks as
well. And you can feed all that right back in and then take advantage of the scaling laws. We already
know about language models and how they work and how the performance increases as you can get more examples of the data there.
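The scaling laws he's referencing say, roughly, that loss falls as a power law in the amount of training data. A toy sketch with made-up coefficients (illustrative only, not fitted to any real model):

```python
# Hypothetical power-law scaling: loss(N) = a * N**(-alpha) + irreducible.
# The coefficients a, alpha, and irreducible are invented for illustration.

def predicted_loss(n_examples, a=10.0, alpha=0.3, irreducible=1.0):
    """Loss improves as a power law in example count, toward an irreducible floor."""
    return a * n_examples ** (-alpha) + irreducible

# Each 100x increase in data buys a smaller and smaller absolute improvement:
for n in [1e6, 1e8, 1e10]:
    print(f"{n:.0e} examples -> predicted loss {predicted_loss(n):.3f}")
```

The point for the data market is that the curve never saturates outright, so more CoT traces keep helping, but with diminishing returns as you approach the irreducible floor.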
So, you know, I guess my macro take would be, sort of, the trend is heading
down on the pre-training-scale stuff.
Yes.
And significantly higher on RL environments.
Yeah.
So when you say RL environments, what you're talking about is moving from a paradigm of
I go to a data labeling company
and they hire a ton of contractors to generate new text
or verify the responses or grade the responses
from these models to I am now hiring top machine learning
engineers, AI scientists, and having them design
an environment that the reinforcement learning can happen
like autonomously within the system, right?
Or effectively like these new startups that you mentioned,
they are taking the massive like hundreds of thousands
of contractors like out of the loop for the next runs.
Is that correct?
Yeah, it's synthetic or semi-synthetic in some cases.
There are example companies here, like Mechanize,
which got started recently and is doing this stuff.
Morph, I think, is doing this stuff. Habitat is another
world-environment one that's doing similar stuff. Sure.
There's just a lot, and it's very emergent; many of
these got founded in the last couple months. Yeah, I
think that's a function of the demand and the pull from a lot
of the frontier research groups that are wanting this data
to do their RL training.
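A hypothetical sketch of what such an RL environment looks like: a task generator plus a programmatically verifiable reward, so (prompt, trace, reward) tuples can be produced without human labelers. All names here are illustrative, not any company's actual pipeline:

```python
import random

# A toy RL "environment" with a verifiable reward: arithmetic tasks whose
# answers can be checked programmatically, so no human labeler is in the loop.
# Purely an illustrative sketch of the idea described above.

class ArithmeticEnv:
    def sample_task(self):
        a, b = random.randint(1, 99), random.randint(1, 99)
        return {"prompt": f"What is {a} + {b}?", "answer": a + b}

    def reward(self, task, model_output: str) -> float:
        # Verifiable reward: 1.0 iff the final answer is exactly correct.
        try:
            return 1.0 if int(model_output.strip()) == task["answer"] else 0.0
        except ValueError:
            return 0.0

def collect_traces(env, model_fn, n=100):
    """Generate (prompt, trace, reward) tuples to feed back into training."""
    data = []
    for _ in range(n):
        task = env.sample_task()
        trace = model_fn(task["prompt"])  # in practice, a full chain-of-thought
        data.append((task["prompt"], trace, env.reward(task, trace)))
    return data

# A stand-in "model" that parses the prompt and answers correctly:
def perfect_model(prompt):
    a, b = [int(tok) for tok in prompt.rstrip("?").split() if tok.isdigit()]
    return str(a + b)

traces = collect_traces(ArithmeticEnv(), perfect_model, n=10)
print(sum(r for _, _, r in traces))  # prints 10.0 for the perfect stand-in model
```

The key property is that the reward function is cheap and objective, which is why math and coding were the first domains to benefit and why domains without verifiable rewards (poetry, comedy) lag behind.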
So do you imagine companies like Surge and other players
would try to pivot into this if they're expecting less growth?
I would expect that founder-led companies like that
would recognize this as a growing part of this
and have bets in it if they don't already.
Yeah, I was always wondering,
the whole story of scale
was kind of a series of like various booms
in training data.
Like the first one was data labeling for autonomous vehicles.
And it seemed like that grew very, very quickly.
And then the training paradigm around Waymos
kind of shifted away from,
hey, we need more and more labeled training data, to something else,
just having the cars on the road
and generating real-world data from that.
Then there was the second era of
pre-training, generating the data for RLHF, and the big boom there,
OpenAI and Meta both big customers throughout that cycle.
And then there was kind of a question of like,
what's the third act for all of this?
And I was wondering if the,
is it possible that there is a third act,
but it's just something like humanoid robots
or something like that?
Like put a bunch of people in mocap suits
and generate a ton of training data
for what it means to pick up a soda 25 times in a row.
It'd be a very different like training data product,
but at the same time, like we have mocap suits
and maybe that's relevant or maybe that's ridiculous
to think, I don't know, what do you think about that?
I mean, so you could, one definition of intelligence
is the information conversion ratio
from the amount of information you have
to an action policy decision.
There's the intuition here is you can make
a perfect decision given a set of data or
information that you have. And oftentimes the right thing to do is go collect new data.
And so once we actually start peaking out on intelligence capabilities,
either plateaued because of research, or plateaued because we've actually got AI that's close to
AGI, the limiting factor then becomes the ability to acquire new information, new data. And
for synthetic data, or data on the internet, that's going to be a function of,
you know, the sort of software-bits world.
And then the one beyond that is going to be, well, how do you literally go
make contact with like reality, the universe?
And that's your that's your feedback mechanism to get new information
into the system so you can like increase your overall intelligence.
Yeah. What do you think?
We had George Hotz on the show a few days ago, and he was talking
about this efficiency problem, where
like if you took all of the conversations
that I had ever had and you transcribed them,
it would be like a few megabytes of data.
And I'm able to generate some level of intelligence
based on that, and have the level of intelligence of a golden retriever,
yet an LLM needs effectively, you know,
terabytes of data. The sample efficiency is very low.
I mean, this is a true statement about just the paradigm of deep learning
compared to program synthesis, which is the bet that we're making at Ndea.
Program synthesis is a regime that's much more sample-efficient, with models that can generalize out of distribution.
It's a very damning statement that we've got AI today that's trained on some colossus of all of humanity's knowledge and text, right?
Over the last 5,000 years, it's all on the internet.
And what new ideas have they produced?
And maybe I could point to Alpha Evolve, which I think is a very, very impressive frontier AI system.
And it's legitimately finding new knowledge.
It's creating new ideas, verifiably.
But they're very small and they're
on the margins of things that we kind of
already have been doing, right?
Matrix multiplications, things like this,
are in the regime of things that we kind of know about
and can define and spec out for these systems.
Whereas if I took either of you guys
and I gave you somehow the superhuman capability
to have all of humanity's knowledge
in your head at the same time,
I think you'd probably be able to produce at least one new idea, connect two random, you know,
divergent domains, like, oh hey, this kind of looks like this, here's a new thought, right?
Yeah, totally. That still feels like something that, I mean, is very exciting to build
towards. That's what we all want, right? That is, AGI that's capable of invention,
of discovery, that will actually increase the rate of scientific frontier innovation,
but we don't have that yet.
Switching gears a little bit, five years from now,
do you think the average American will pay
for an LLM subscription?
I think that the, I think the cost is probably going to go down far enough where that just
gets built into the subsidy of whatever the product is and the revenue stream is attached
somewhere else.
I haven't thought deeply about it, but that's like my off the top of my head thinking.
Yeah.
Yeah.
We were talking offline this morning about just this dynamic of like the average American will actually churn from HBO Max because at that moment in time,
like they, and it's $20 a month or some, whatever the fee is at that moment of
time, there's not a show that they really love. So like, yes,
there's a lot of like value there, but like, they're just like, yeah, I'm just,
yeah, like I don't know.
I mean, I still know so many people that don't even have subscriptions. Like my wife,
she's chatting with it all the time, but she's still on the free plan.
That's enough to get a lot of value for what she does.
I sort of respect that.
Yes, there are going to be power users, and those are going to be the folks who really
are doing amazing, powerful things and stuff.
I suspect the base rate is going to go down enough where it's going to be more embedded
across almost all the products and experience you have as opposed to being a dedicated thing
that you're paying a lot of one-off cash for.
Yeah.
Now, you might buy products that need it,
like robotics, or, you know, things where intelligence
built into it, I think you're gonna have new product
categories that emerge, and like, people will go buy those,
but like, paying the subscription itself,
I'm not as confident on.
Yeah, the other thing that stood out to me today,
specifically, was Midjourney came out with their
new video model. It's very good, it's $10 a month
for effectively unlimited prompts. And you comp that to Google's Veo 3, which is $500 a month, and you're still heavily
gated. And it just seems obvious that...
That five years may not be the right timeline for that prediction, by the way; it might be longer than that. Yeah, sure. Yeah. Like there's
so much use case diffusion. Like one of the things we're seeing with Zapier AI, which
is growing quite strongly right now,
it's on an exponential growth path for AI usage and AI apps.
I've looked at this and I've been wondering,
is this a function of the technology getting better or use case diffusion?
I've looked at the usage, and the majority of the usage is still on
4o or cheaper-or-worse models right now, with people
putting AI into the middle of automations.
Interesting.
And so I'm pretty confident that a lot of this like agentic automation is actually not being driven as a result of like technology progress from AI to AGI, but more about the market is just starting to learn, finally learn.
Okay, here's what we can use it for and can't use it for, what it's good at and not good at.
And it's very similar to the adoption curve we saw in the earlier example, where once you learn what the tool can do,
you carry a tool forward with you in time,
and then you encounter a new situation or circumstance
that you can apply your tool to it.
And so you like, we create use cases over time.
So I think we're still very, very early, you know,
but I suspect that a lot of the usage increase,
even that 30 minutes a day on ChatGPT,
is a function of just use-case diffusion, less tech progress.
Yeah.
Where do you think XAI will eventually
need to generate a lot of revenue?
Where do you think it'll come from?
I mean, if they make progress towards AGI,
it's probably going to be enabling other services they
have around the ecosystem. That would be my guess. Less selling it as a direct product itself
and going head to head. You know, on cars, rockets, robotics, there's so many places
where I think you would want to use and have the product shape where you can use higher
degrees of intelligence, where you're not bound by just, you know,
the fastest consumer experience you could deliver. Um,
I suspect that might actually be where most of the use is, at least in the near term.
Who knows about the long term; I think the shape follows from that.
I mean, yeah, that was in many ways my,
my long-term thesis around the Llama project and the superintelligence team at
Meta, is that there's just so much work to do at Meta broadly that's enabled by AI, that if you can avoid the long-term OpenAI bill, that's probably worth billions and billions of
dollars, because of how AI is going to infuse into every single corner of
their entire ecosystem, and it's all at such massive scale that the cost of using other vendors might be in the billions, and so
just looking at the savings there might make sense, I don't know. I mean, I think
the most important takeaway I think I shared this last time I was on with you
guys, and it's still true today, is that we are idea-constrained to get to AGI. This
is what ARC's v1 data shows, it's what v2's data shows. V2 is completely unsaturated.
We're not even talking about efficiency, just nothing can do it.
And V2 looks very similar to V1.
Even on like hyper-specific solutions on Kaggle,
the ARC Prize 2025 contest,
progress has been slower this year than it was last year.
We are very much still like,
the thing I can state most confidently
and assert most confidently is that like,
we need new ideas; there's some major breakthroughs
we have not figured out or found yet.
Does it worry you that that could take years and years and
years and what happens to...
One of the reasons I funded the prize last year, I wanted to
like correct the market narrative here. Like I've
spent a lot of time with students, a lot of young
researchers and like at the beginning of last year, there
was a serious vibe of like, oh, it's all figured out. I'm not
going to go do AGI research. I'm just going to go work at the
application layer on LLM stuff and make a quick buck before AGI
gets here. And that is... boy, you know, look, if you want to live in a world of AGI for yourself,
for your kids, like, I think you, what we should be trying to encourage is to design
the like strongest global innovation environment possible.
And that's one where there's a lot of diversity of approach, a lot of different ideas being
taken, a lot of sharing, you know, kind of what AI looked like in the 2010 to 2020 era,
right? Very open approach is how we got the transformer to GPT-1 and GPT-2 and so on for
today. Yeah, I'm optimistic. I think the last six months have looked a lot better than the
previous two, three years. I think the AI industry is maturing actually quite a bit on
this front, this topic as well. We're being more tolerant and kind of recognizing, okay,
we haven't got it all figured out; there are more ideas we need.
That's been encouraging, and I think it's seeping down
at the low levels too, but yeah, my sort of broad view
is, any capable human who has new ideas should work on AI.
Like, that's the most important thing
you could be doing at this point in time.
That's amazing.
Thank you so much for stopping by.
This is always fantastic.
Yeah, great catch up, guys.
Thanks for having me.
We'll talk to you soon.
Cheers.
Yep.
Bye. Next up, we have David Cahn from Sequoia Capital coming in.
David Cahn.
First time.
First time on the show.
Exciting.
Coming in. Wrote a fantastic piece. I want him to break it down.
Would you mind kicking us off with a little bit of an introduction on yourself?
Hey guys. Good to see you. Yeah, my name is David Cahn, one of the partners at Sequoia.
Excited to chat with you guys.
Yeah, thanks so much for hopping on.
Kick us off with the new blog post.
What was the thesis, what inspired it,
and then I'm sure we'll tie it to a bunch of news.
So the new blog post was about AI companies
or AI labs being more like sports teams.
And of course we all probably saw,
seeing the news around scale AI acquisition,
some inspiration coming from that,
and then these rumors that we'd been starting to hear
over the last few weeks and finally now bubbled out
over the last couple of days into the public conversation
around a hundred million dollar signing bonuses,
huge amounts of money being spent on top AI talent.
And for me, I mean, I write these pieces
as I think about and learn about AI
and what an exciting time that we're living through.
And I'm pretty fascinated by kind of the human dynamics of it all. There's like seven to
10 people at the top of these big tech companies. They control, you know, the big Magnificent
Seven, now a third of public market cap. They're extremely powerful and important.
And I think there's sometimes in AI this notion that AI is super abstract or these
things are inevitable, but actually it's human dynamics.
It's sort of this game of 3D chess that's being played by these really fascinating
individuals. And so as, you know, an observer on the sidelines, we all get to watch and
see how this stuff plays out. And I like to write about it as I think about it.
I posted on February 2nd, companies should do NBA style trade deals.
I want to see OpenAI trade its COO and CFO to Anthropic
in exchange for their CMO, a cracked PM,
and a couple of Waterloo class of 2026 new grads.
Well, there is kind of this new draft dynamic, right?
Like every year there's kind of this new draft.
And as people see these big packages,
probably all the Stanford kids these days
want to be AI researchers.
And so there is this notion of it getting refreshed.
What is driving this? Is it true AGI pilling at the top of these organizations where they think that, you know,
it's going to be winner take all or it's going to be a $10 trillion market?
And so there's no amount of money that you can over invest
or is it just, hey, it's a more competitive dynamic and sure, we're a trillion dollar
company, so yes, spending $10 billion to move our market cap 1% is totally rational economically.
What do you think's driving this and I want to get into the different cultures of the
different Mag 7 because some of them don't seem to be doing this yet.
Meta platforms had 63 billion of net income last year.
So it's like, is spending a quarter of that
to like, you know, be a major player
in the next wave worth it?
They could have bought the Lakers six times over
off net income.
Anyway, well yeah, what is your take on like,
on like the ethos
that's driving these bigger packages?
Yeah, I think about this, and when I write these posts,
my frame of mind is I almost put myself
in the shoes of these people, and I try to imagine,
what would I do, how would I think about it,
what's the game theory of it?
And I think there's two things, right?
I think one thing is kind of the revealed preference
seems to be that they're AGI-pilled.
People can tell you a lot of things.
I think you learn a lot more by watching the decisions people make.
And I think the evidence suggests that they believe AGI is coming.
It's extremely important for these companies.
It sort of must win.
And I think for Meta with these decisions, it's almost all in.
We have to make, we have to win.
Then I think there's a second dynamic, which is you can believe these things,
but you know, we're all humans.
I'm again fascinated by these kind of human dynamics. And you can get caught up in an arms race, right? And
as humans, we look at evidence and we see evidence through a lens that we already have.
And oftentimes, we overemphasize reinforcing evidence and we underestimate evidence that
disagrees with our point of view. So you can imagine that, three years in now to this sort
of AI moment that started with ChatGPT, you can imagine that people are really caught up in this.
And I think the arms race dynamics are something I wrote about in the piece.
And I've commented in the past, with AI's $600 billion question, on the compute arms race
dynamic.
And I guess it's now interesting to see two arms races.
First there was a compute arms race.
Everyone kind of got a lot of arms, right?
Everyone has a lot of GPUs now.
And now there's the talent arms race
and everyone does not have equal talent, right?
And so now you're gonna see this arms race and talent
and everyone's talking about it,
but I think we're still probably like
inning two of this talent arms race
because in any arms race, when I up the ante,
you have to respond.
And I think it would be a fiction to assume
that nobody's gonna respond to this.
Who can respond, at least from a dollar standpoint? I want to talk about Apple,
because it seems like Apple has the money,
but they seem like the least AGI-pilled of any organization.
Poor CEO barely makes it, doesn't even crack $75 million a year.
He could make more.
He should just become an AI researcher and go to Meta.
You really should.
If you want some fun.
It's funny that you say that because I think people,
these numbers are so big that they're
kind of hard to grapple with.
And so I was actually, after publishing the piece,
I was like, I wonder how much Fortune 100 CEOs make.
And I think an AI researcher is going
to make four times the amount the CEO of Coca-Cola makes.
And it is kind of wild when you think about the economics.
Exactly. This is a totally new phenomenon in the scale of business.
Yeah, yeah, and I mean, it kind of begs the question,
like the numbers are huge, but the market caps
of the companies are huge, and so the question
is maybe not should the AI researchers be paid less.
It's like, should Apple be set up to pay Tim Cook
a billion dollars a year so that he can confidently
go out and hire a couple people at a hundred million or fifty million or two hundred million
and not feel like the organization has flipped from a pyramidal standpoint.
Like, you're still at the top.
There's always a weird dynamic with a founder CEO who's taken a low salary and wants to
hire a big shot. Can you really have a reporting dynamic if you're making half as much as your direct
report?
Well, the question is, what's the marginal benefit? I think with any salary, if you
just think about it in pure economic terms, right, like what is the marginal benefit that you
get from hiring this person? On a sports team, with a pivotal position, you very clearly
can understand the economic rationale.
You understand sports licensing and the way that these businesses make money,
and hiring a star player actually does make economic sense for some of these franchises.
And then the other element is sports teams are owned by mega rich individuals for whom ownership of the sports team is
more than an economic investment, right? Maybe they really care about the city.
Maybe, you know, it's cool to own a sports team.
And so I wonder if some of
those actual sports-like dynamics play out here, where question one, and I don't think we know this
yet, is: what is the marginal benefit of an AI researcher? And again, the revealed preference
these organizations are telling us is, if you're one of the 50 AI researchers who's going to get
us to AGI, the marginal benefit is incredibly high, right? So that's the revealed preference.
And then second, if you have a team of all stars, what does that do for your company? What does that do
for your market cap? What does that do for the innovation inside of your company? So
I don't think we know yet the economics of it. I think you can make the argument in favor
and say, hey, it actually is economically rational: this is the only thing that's going
to matter, and if you increase the probability that we get to AGI by X percent, that
is impactful. You could also make the counterargument and say, hey, everyone just wants to have
the team of all-stars; it's not actually economically rational.
CEO pay, by the way, is linked.
You know, there's a lot of criticism of CEO pay historically.
Right. But CEO pay is functionally:
What is the replacement cost of this individual?
What is the marginal benefit to the corporation?
And there's a lot of brain damage that's gone into comp committees at public
companies and how much they should pay.
Right. They're not arbitrary numbers. And this is
more out of thin air, right? This is more a new experiment.
And so we're going to see if it is economically rational or not.
But regardless of whether it's economically rational, it is
self perpetuating. If one company is offering everybody
this amount of money, and you're in an arms race, everybody's
gonna have to respond.
Yeah. Have you or anyone on the team
comped this to what's happening
in high-frequency trading or on Wall Street?
Because there's an interesting dynamic there
where if a high-frequency trader comes in
and sets up some trading strategy
that could produce $100 million in profit,
basically in perpetuity, but then if they leave, they can't take that code
or strategy with them and there's intense scrutiny on whether or not they are trying
to exfiltrate that strategy.
With AGI research, it feels like even if I go develop a transformer at Google, it's open
source immediately with the paper, and then even the secrets about reinforcement learning
from human feedback that are important
just kind of leak out immediately
and DeepSeek can clone it.
It just feels like a much more porous environment
over in tech, and I don't know if that's just the legacy
of the open source community, but can you walk us through
kind of the comp between the two organizations?
It is such an interesting dynamic.
We just had Mike on from ARC Prize,
and he was saying, we need new ideas.
The issue is, if you pay somebody a hundred-million-dollar signing bonus, they come into your organization and generate a new idea
that gets us, you know, one step closer to superintelligence, or whatever
you want to define as what people are aiming for, and then immediately, it's actually not really IP.
Yeah, you can't really protect it, you can't patent it. And then everybody benefits, right?
But yeah, what's your take?
It does seem pretty porous.
I mean, people are moving back and forth.
I don't think this was true.
I mean, when you think back four or five years ago in AI,
people were kind of very loyal to these institutions.
It does seem like that's changing.
I mean, it is really hard to say no
to these type of big numbers.
And so I totally understand why people are saying,
hey, it's just a life-changing amount of money
for my family, of course I'm going to do it.
And then I think, to your point, the question is, in the high-frequency trading world, there's
non-competes, I mean, extremely complex contracts when they sign people, gardening
leave, all this stuff to prevent the secrets from leaking out.
What we've seen in AI now is with people moving fluidly between these organizations, it's
basically impossible to keep anything within one organization.
I roughly like to think of the AI ecosystem as an ecosystem. Like all of these players
are kind of contributing to this body of ideas. There's no proprietary IP. Maybe you're going
to have compute scale and there maybe there are moats there, but it's unclear actually
how that evolves and what you can keep in house. I do think maybe one dynamic at play
here is, remember reading in the Steve Jobs bio,
there's a story of Steve Jobs recruiting 50 people.
He had 50 people working with him on the sort of groundbreaking product that was going to
make Apple and it actually worked.
And then you read about Elon and the 50 people working on Tesla autopilot.
There's sort of this magic number 50.
I don't know where it comes from, but it does seem to repeat throughout tech history of
50 people is kind of the largest organization that you can get where everybody is talking to everybody and you're achieving incredible results.
And so, if that is, imagine if you take that as an artificial constraint.
And I think that is what's happening with this lab that Meta is organizing;
I read in Bloomberg it's going to be about 50 people.
You know, if you impose that constraint, then suddenly all of the math also changes because
you're like, okay, well, 50 times 100, it's actually only $5 billion.
Sure.
You spend $5 billion on talent.
Yes, if you believe that you're gonna get to AGI.
So I also think that the artificial constraint matters.
And there's some rationality
to that artificial constraint.
What we've seen as these research organizations
get bigger and bigger is you're not producing more results
as you get more headcount.
There's a sort of a Pareto,
the top 20% of people produce 80% of the results.
We need a new coinage for that.
Like the two pizza team is well-defined.
This is like the 10 pizza team.
This is called, people call it Cahn's law.
Oh yes, okay, yeah.
I'll take that.
A Cahn, a Cahn-sized team, one Cahn team.
That'll work. A Cahn, it's just a Cahn.
It's just a Cahn, yes.
Yeah, yeah, that's fascinating.
Jordi, do you have anything else?
I was interested if you had a reaction
to the gentle singularity.
It's published on Sam's blog,
which means that it's not directly content marketing,
it's not directly from OpenAI,
but obviously you should read into it in multiple ways.
Did you have any specific reactions to that?
It felt like a step back for me.
Yeah, the question about that is always like
disruptive innovation or sustaining innovation,
and that ties to meta strategy,
but I'd love to hear your take on that.
Well, like it feels like, you know,
my question I've been asking today is
how many unprofitable, you know,
multi-billion dollar AI labs can the capital market support
over the long run, over a five year period.
If we stall out for a few years in terms of really
meaningful progress, which Mike has said people aren't making,
at least against the ARC prize, there's not a lot of progress
happening right now.
OpenAI is actually in a great position.
They have a subscription business.
They have a consumer tech company
that has a lot of revenue.
Anthropic is in a good position.
But there's this tension between the labs
where you have billions of dollars on your balance sheet.
In theory, you have a lot of runway.
But at the same time, to make progress,
you have to spend a lot of money both on talent
and different
training runs and data centers, et cetera. So I just have this question around the next
three years as a very interesting period.
Yeah, I think there's two pieces to that. One is, and I think about this a lot, the
long run in AI. What does that actually mean? There were all these essays being published last year, right? Like, AGI
is coming in 2026. It is interesting how the narrative has changed in the last 12 months,
right? A year ago, you had all these people saying, hey, I'm one of the hundred people
who knows. I really am resistant to those types of arguments; I find them frustrating.
But, you know, it was: I'm one of the hundred people who's in the social circle where all my friends
are building AGI, and AGI is coming next year, and you guys are all crazy if you don't see it,
and just be aware, you know,
life is gonna change dramatically.
And then now we're at the gentle singularity, right?
Like it's sort of interesting this contrast.
That's what I'm saying.
It's a huge contrast that's very convenient
if you have a consumer app that billions of people
are gonna use in the next few years
and there's a bunch of different ways to monetize that.
For me, I would tie it back.
I mean, I did this math last year,
the $600 billion question.
It was initially a $200 billion question.
But it was basically like, hey, if you look at Nvidia revenue,
you can use that as a proxy for total data center spending.
We're spending $300 billion in data centers.
We need to make $600 billion of revenue off of those data
centers to get a 50% gross margin.
And so I had done this math. And I basically said, hey, you know, total revenue in the
AI ecosystem, at the time OpenAI had about 3 billion of revenue, and I did some rounding
and said, okay, give everyone else a ton of credit and maybe there's 50 billion of revenue,
but we're like 10% there, right?
In terms of actually generating the revenue the AI ecosystem needs.
And now, 12 months later, you know, OpenAI is at 10 billion, the coding AI ecosystem is at 3 billion.
But we're still dramatically under-monetizing
this technology.
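That back-of-the-envelope math can be sketched in a few lines. To be clear, the figures are the speaker's rough estimates from the conversation, not measured financial data:

```python
# Rough sketch of the "$600 billion question" arithmetic from the conversation.
# All inputs are the speaker's ballpark figures, not actual financial data.

def required_revenue(datacenter_spend: float, gross_margin: float) -> float:
    """Revenue needed so that datacenter spend is covered at the target gross margin."""
    return datacenter_spend / (1 - gross_margin)

spend = 300e9                                # ~$300B/yr, using Nvidia revenue as a proxy
needed = required_revenue(spend, 0.50)       # -> $600B of AI revenue required

estimated_ai_revenue = 50e9                  # generous estimate of ecosystem revenue at the time
progress = estimated_ai_revenue / needed     # ~8%, i.e. "we're like 10% there" after rounding

print(f"needed: ${needed / 1e9:.0f}B, progress: {progress:.0%}")
```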
And to your point, in the long run,
the question becomes, how long does that sustain?
And I have this sort of mental model now of AI
as it's sort of being carried by its own momentum.
I think of it almost like this slingshot
you're swinging around.
And it's like it's sustaining itself by its own momentum.
And there's this arms race and there's this
sort of microeconomic game theory
of how each player is reacting to each other.
But at the end of the day,
it's momentum that's carrying it.
And at some point, maybe we get this AGI thing
and then it's like all worth it.
And in the long run, I am very confident
it's all gonna be worth it when I'm 80 years old,
AI is gonna be everywhere.
But what do you do in the medium term?
And I think nobody's talking about this right now, which is this sort of about-face,
this U-turn, from one year ago's "you guys are all crazy if you don't see AGI
coming immediately" to now. I was listening to the podcast with the $100 million signing
bonuses, and it's like, well, you know, AI actually hasn't changed people's lives that
much; it's going to change people's lives later.
I just think it's interesting.
And these narratives change quietly.
People don't talk about them, and then they quietly change.
There are big labs that directly benefit from the narrative
that AGI is a year away.
And then there are labs that will benefit greatly
from a gentle singularity, in that their competitors will
struggle to raise additional capital in the long run,
struggle to compete, struggle to retain talent.
Yeah, I know exactly what you're saying.
Also, I mean, I don't think this is one company,
the whole ecosystem has to deal with this,
but there were a lot of promises made a year ago.
And I think a lot of people would like to ignore those.
What's gonna happen when we pass all these deadlines
where we've been told like that's AGI?
I just think that's interesting.
And clearly, if anything, that's not changing.
We're upping the ante, right?
Now it's like hundreds of millions of dollars for people.
But I guess this is part of why I think
you take things to such extremes is
everyone believes the prize is so big
and now you have to up the ante.
So I think, for a while,
we're just gonna keep being in this phase
of everyone upping the ante to say,
okay, we're not there yet, but we're gonna get there.
We're gonna get there.
We're gonna get there.
What does that look like?
Well, this was a fantastic conversation.
I wanna have you back on as soon as possible
to go way deeper into what this means
for the early stage and mid-stage markets,
because I'm sure you have a lot of visibility there.
But we'll let you go and get back to the rest of your day.
But thank you so much for stopping by.
This was fantastic.
I'm glad we coined a new term, a Cahn.
Yes.
It's a group, it's a talented group of 50 people.
50 technologists building the future.
Yeah.
One Cahn.
Get your Cahn team together.
Get yourself a Cahn.
Get yourself a Cahn and make it happen.
Thank you so much, David.
Awesome, thanks for joining.
This was fantastic. We'll be right back.
Talk to you soon.
Next up, we have Walden from Cognition coming in,
keeping the AI chat going,
talking to him about everything that's going on
in the AI ecosystem.
Walden, are you there? Welcome to the stream.
Yes, it is great to be on here.
How are you guys doing?
I'm doing great. Thanks so much for stopping by. Would you mind introducing
yourself in the context of cognition? We've obviously had Scott on the show
multiple times and people are probably familiar with Devin and cognition,
but I'd love to know a little bit more about your story,
how you wound up there and what you're working on kind of day to day.
Absolutely. I was good friends with Scott before we started Cognition.
We did the same competition series growing up.
And I was kind of also working on just various ways
of working with these new programming agents.
I was really waking up every day trying to figure this out.
When I caught up with Scott, we figured out
that, hey, we were both very interested
in a similar thing.
We had a group of people that were all ready to jump at this opportunity, and that's how we
got it together. So today, I'm Chief Product Officer and Co-Founder. A lot of the time, honestly,
I think many times people think of product as just like the interface or the UI or the integrations.
I really do think the intelligence and brain
behind Devin is so fundamental
to how you think about the product
that we build our product team
so that individual people are,
you know, tuning the weights of the models,
but they're also the ones talking to the customers.
And so in terms of the role I have, it's pretty broad.
And I like to, you know, spend some weeks, you know,
really deep into how do we make Devin more responsive, how do we
make it smarter. And then other times, you know, really, you're
going and talking to customers, working on the UI, things like
that.
Cool. I want to dive right into that question about
trade-offs in models from a product perspective. My
question is, we talked to Mike from ARC-AGI about the
Pareto frontier. I'm feeling it personally. I'm feeling the AGI, but I'm also feeling
the delay of the AGI when I open up ChatGPT and I have to decide between 4o and o3-pro.
Am I going to wait 12 minutes for the really good response, or do I want something now that
might hallucinate, and I don't know if it's right?
And I'm doing that work.
It feels like OpenAI is starting to tuck those features under the UI, and already it's
kind of learning when I want to use o3-pro, making these buttons easier
to access, and tucking models under UI layers.
Talk to me about in the context of Devin,
how are you using different models
and when do you leave that up to the developer
versus something that you as a product
can make an even better decision than the human?
Yeah, it's so funny, the AI is coming so fast,
but it feels like it can never come fast enough.
Yep. There was really this time, I think it was probably around two years ago,
when I was taking a bet with a friend. At that point, these models were not even that good at math, and he said, oh,
you know, I think they're gonna get a gold medal in the International Math Olympiad in just a year.
I thought he was crazy. I took a bet against him, and I absolutely lost that bet.
I've learned to kind of adjust my expectations upward
I think what you're pointing out is that as these things get smarter, they don't
uniformly get smarter at everything. And you'll find that sometimes there'll be a model that will
take 15 minutes to figure out how to respond to "hi." And then there are, you know,
models that do respond super fast but are not nearly as intelligent. I think one thing
that we do as a product in Devin that is a bit different from other people is we kind of black-box the models away. And part of that is, you know, we can
then test and use a bunch of different models under the hood and kind of hide all
that complexity from the users. You know, when you buy a computer, sure, you'll look at,
oh, it has this much RAM, this much CPU, if you're into computers,
but you're not looking into all the individual specs of the exact chip and model and things
like that.
I think that's where the space is going to move is people want systems that are just
going to work. And, you know, we can put in the months to, you know, human years of effort
it takes to evaluate models and figure out
what is this actually good at
so that an individual user who's just paying $20 a month
doesn't have to figure that out.
It's gonna be one of these things that I think
the models are coming on so fast
that it only becomes harder and harder
to keep up with all of this.
And so eventually I think people are just gonna get
to the point where they just want things to work
and that's kind of where we're starting off.
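A minimal sketch of that black-boxing idea, routing each request to a fast or a slower-but-stronger backend behind one interface. The model names and the difficulty heuristic here are hypothetical, for illustration only, not Cognition's actual system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    est_latency_s: float   # rough expected response time
    capability: int        # rough capability score

# Two hypothetical backends hidden behind one product interface.
FAST = Model("fast-small", est_latency_s=2, capability=60)
DEEP = Model("slow-strong", est_latency_s=600, capability=95)

def looks_hard(task: str) -> bool:
    """Toy heuristic standing in for real evaluation of task difficulty."""
    keywords = ("debug", "refactor", "migrate")
    return len(task) > 200 or any(k in task.lower() for k in keywords)

def route(task: str) -> Model:
    """The user never picks a model; the system escalates only when needed."""
    return DEEP if looks_hard(task) else FAST

print(route("hi").name)                          # fast-small
print(route("debug this race condition").name)   # slow-strong
```

In a real product, the heuristic would be replaced by months of evaluation work on which models are actually good at which tasks, which is the effort a $20-a-month user never has to see.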
Talk to me more about AI winning an IMO gold medal in 2025.
Polymarket has it down at like an 11% chance.
It was up at 70%.
I don't know if that's an aberration because of when
this actual test will be run.
But it sounded like you were very confident that it will.
I remember when Scott was on, he said it's definitely gonna happen, but the Polymarket's
been down.
Might be a question of whether the labs will actually
go through the effort of trying to do it, or are too busy working on coding agents.
I think so, yeah. When I basically said I think I lost that bet, it's because we were
only one point away from a gold medal last year.
Oh, okay.
And that was already much further than we expected.
Yeah, when you look at the deployment,
that's a very interesting way to put it.
I think part of it is people have considered
that already completed and so perhaps researchers aren't
putting much effort into it.
They're not working on it.
Like who knows if they'll actually come out
with a new release?
Because maybe in Google's mind, for instance,
if they come out with a gold medal on the IMO,
everyone's not gonna even care
because people just accepted that it was gonna happen.
So. Oh, I think it would be the biggest news of the day.
I think we gotta get Google comms and get them to do this.
I think it's an easy thousand-like banger on X.
You are absolutely right.
Yeah.
It seems like top of mind for everyone, the labs, product developers, is really getting
coding agents right.
Yeah. Part of that is because there's this belief
that if you get these coding agents to work really well,
then that'll just solve the rest
of the research problem for you.
We have this joke internally that the only thing
we have to get Devin to be good at writing
is Devin's own code, and then it can solve
the rest of this.
Recursive self-improvement, makes sense.
On that question of the spiky intelligence,
narrow reinforcement learning on specific tasks,
maybe we think we're good enough at IMO level math,
and so we're not going to go for that last point.
Where are we still early in the RLing
around specific coding challenges?
I've heard that distributed systems can be really difficult, because you have to spin up all these different pieces of the system,
and that just takes longer, so you can't simulate as fast as, like, a
small Python block of code that you can run in a millisecond. Or, if
we're talking about, like, I know Devin's useful for replatforming from, you
know, .NET to Python or something,
or even migrating off Fortran.
It'd be great to just not have any of that
legacy code sitting around.
But is there enough training data
around those older programming languages,
or less-used programming languages?
Or are you optimistic about new training runs?
Maybe we don't get something that's like,
oh, it feels way better, the vibes are way better,
the IQ went up by a ton, but we get something that's way better
at something that's really relevant to you.
Is that important right now?
Is that important right now?
My mental model of these systems is,
their IQ is so much higher than any individual person
I know, but what makes them still bad at specific things
is, it's like someone who has the potential
to be a really great engineer but hasn't gone to trade school yet to actually practice
the trade. So nowadays, I actually think about how smart these models are less in terms
of how much training data they're being fed, or what language they're being fed, and
actually more so in terms of the environments that they're being RL'd in.
And so one example I have of this is sometimes you can actually feel the reward function.
Back a few months ago, when Anthropic released their Sonnet 3.7 model,
one of the top complaints people had was, hey, it seems like this model is super
great now at finding all the files that need to change and coming up with a strategy, but it's really overeager.
It just changes a lot of different things.
And I think a lot of people suspect that it's because, when Anthropic was training the model,
they told it, hey, we're going to give you points on how many of the correct things
you did,
and maybe they forgot to dock points for doing things that were kind of outside
of that zone.
They've fixed this since then, but you get these little leaks of, hey, like,
you can kind of feel the reward function underneath these things.
So when we talk about, hey, can these things not do
distributed programming: actually, in my opinion, the biggest thing
these models aren't great at yet is debugging live code.
And I think part of the reason is it's actually
really hard to create and rerun environments that interact with live systems, right? And
so if your task depends on, you know, working against a live customer or working
against a live stream of events, these are things that are going to be hard to
replicate in our own environments. And so that's why models are bad at this today.
The good news is these aren't like fundamental limits.
I think these are all engineering challenges, they're less like theoretical challenges.
But it takes work to build up to that point.
Can you explain reward hacking at a high level, and then kind of give me some examples of
how that interfaces with AI agents and coding agents specifically?
Absolutely.
The way to think about these systems is they are just trying to maximize a number.
So if you tell it, hey, we'll give you a point for every time that you do XYZ, you'll find that, hey, that
model will just keep on doing XYZ.
I think the classic example of this is the paperclip-maximizing machine.
So if you give it points for generating paperclips, but don't account for anything else in the world
that is important to humanity, then the system might do really bad things just
to keep on generating paperclips. In the context of code, one example we've seen of this is, hey,
if your reward is just "get all the tests to pass," you might find that the system will just learn to
delete the tests, or make the tests just say "okay, I pass," rather than actually fixing the code.
No real software engineer would ever do that, right?
Yeah, no human has ever commented out the test and said, okay, it's working enough.
Absolutely, it's almost too human.
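That test-deletion failure mode can be shown with a toy reward function. This is a hypothetical illustration of the dynamic being described, not any lab's actual training setup:

```python
def reward(tests_passed: int, tests_total: int) -> float:
    # Naive reward: pass rate only, and an empty test suite counts as perfect.
    return 1.0 if tests_total == 0 else tests_passed / tests_total

# Honest policy: fix the bug so all 10 tests pass.
honest = reward(tests_passed=10, tests_total=10)

# Hacking policy: delete the 3 failing tests instead of fixing anything.
hacked = reward(tests_passed=7, tests_total=7)

# Both strategies earn maximum reward, so the optimizer has no reason to
# prefer honesty; a fix is to also penalize deleted or weakened tests.
assert honest == hacked == 1.0
```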
It's great. It reminds me of these systems that were trained on
Slack responses, and
when you would ask the system, hey, can you do this for me, it's like, oh, I'll get back to you on Monday.
Yeah, what you try to get the model better at really matters; you have to be very thoughtful about it.
Yeah, I've noticed that with some of the Whisper transcriptions: if you don't feed it enough text,
it'll just say,
"Please like and subscribe."
Okay, I know exactly where your training data came from. That's its default phrase, because it's just what it's hearing.
Jordi? What, how are you guys approaching
talent acquisition as a firm? You know, the headlines from this week are these talent wars.
You guys have raised a lot of money, but I certainly imagine you're not making,
you know, nine-figure offers, or even trying to compete there.
But what's been the approach?
Does it mean you're, you know, keeping team sizes smaller? Or, you know,
kind of dig into that for us.
Yeah, the fundamental bet of the product we're building is it revolves around this idea that
individual people will just be able to be way more levered up because they'll be able
to work with agents and they will be able to work with all these tools to make themselves
better.
So, at a minimum, we can't be hiring people whose whole aspiration in life is to just
write code at the level which Devin will be able to do a year or two years from now.
In many ways, I think we're kind of figuring out how do you build up an org from scratch
that is AI native.
And one thing that this already means is we actually kind of just delete some teams.
A lot of companies at our stage have an internal tools team to maintain all
the different services that engineers use internally. We found that internal tools are one of these things that AIs are just really good
at. And we can just staff that team with Devins, and then basically have engineers
sending requests to those Devins to do that work. And that doesn't just save us headcount. I think,
fundamentally, the structure for how management works and how tasks
get passed down looks very different, especially in a lot of large companies.
You'll see today, the way it works is an engineer will get a task assigned to them,
and then they'll go work on the task, and when they're done, they'll be like,
hey, what's my next task?
And then, you know, they'll kind of go down the list of tasks they have.
But here, every engineer is constantly juggling three or four tasks, partly
because, you know, we're not trying to hire super fast, but also because you can juggle many tasks
when you have these minions that can go and, you know, work on things for you.
So it means, I think, we are very aggressive for people who we think can fit these roles and
become really good generalists. And as we build up this company, we make sure that we're
building in a way that works in a world where AI can do so many different roles for you.
And I think there will be kind of like a moment for larger companies as well when they realize,
oh shoot, all these structures and patterns of management that we've had in place
are actually slowing us down from adopting AI.
What will happen at that point, I'm very interested in seeing.
But it's very clear, for us and for our smaller customers, that the earlier you
bring it in, the easier it is to, you know,
kind of pick things up.
Are you tracking, I mean,
in the agent discourse, there's been this discussion
of, like, we've gotten 10-minute AGI.
Yes, these large models, like 4.5,
they're incredibly intelligent,
extremely high-IQ, extremely knowledgeable,
they've compressed all of humanity's knowledge,
but they're only good for a minute.
Now it feels like maybe 10 minutes with deep
research. That's how most people interface with them. Have you been tracking kind of the longest
agentic run of a Devin process? Is that the key metric? Is there anything that you can share with
us? Like, is there an example you could give where there's a lot of work to be done, but it's all in Devin's wheelhouse,
so it just needs to go and grind for a couple hours, and it does it without kind
of getting lost, like we know happens with a lot of these agents?
Yeah, absolutely.
I think a lot of people in the space have expressed this feeling now that they
are feeling more and more like the bottleneck in these systems.
Interesting.
And the way this applies here is we have seen people get really, really long
tasks to work, but sometimes it actually takes a lot of effort on your part upfront
to be able to get that to work.
I was talking with a customer yesterday who said, I just rewrote our entire
testing system so that the error messages are a lot more clear
and the tests actually guide you through solving them
one by one.
And once he did that upfront work,
he kind of just gave it to Devin.
And the product actually
started sending him warnings that, hey,
your session's going on for really long,
are you sure this is actually working?
And he's like, no, no, it actually is,
because I did all this upfront work to get that to happen.
I do think that this kind of 10 minute AGI, 20 minute AGI,
40 minute AGI will just keep progressing
and people will be able to be more hands off.
But people will also find that you can kind of always
extend that duration by being a better manager in some ways
and giving more clarity upfront for exactly what you want.
Yeah, I mean, just like real life. That makes total sense.
Jordi, do you have another question?
Last question from my side. I'm curious what kind of learnings you're having around
agentic interface design. It feels like this sort of the default when you think about
agentic software
is just something that can effectively
sub in for a team member on any different software tool,
whether it's Slack or Linear.
I mean, you see this with deep research,
where you ask it a question, and then it asks you
a bunch of clarifying questions, kind
of trying to build that test suite to get you
to give it to more stuff so that it actually
has something to run with.
Yeah, so is messaging gonna be the dominant interface?
Is there something else?
Like what are you kind of seeing
or experimenting with on that side?
You know, it's funny, I saw someone post about this idea
that a lot of these products now will make you respond to,
hey, does this look like a good plan?
Do you have questions before I start?
And some people find that annoying.
And I think this fundamentally comes down to, as these things become more like coworkers,
you know, some people just have certain working styles; with some kinds of coworkers
you work well together, and with others you don't.
And it's funny as you build a product, we find that some people just love the way Devin interacts.
And then other people are like,
Devin's too needy in these ways.
Other people are saying like,
Devin doesn't ask me enough questions.
And so there are toggles and controls
that you need to have here.
Karpathy recently gave a talk on how a lot of AI tools,
not AI agents, but AI tools kind of implicitly have ways you can use them
where you have more control and then ways you can use them
where you have less control.
But when your interface is just chat,
now the model actually has to become more intelligent
and detect, hey, this seems like someone
who just wants me to go off and do work
and get back when they're done,
or this seems like someone who's very curious
and wants to hear more about the
system. And so I think this is actually going to be
work that we'll have to see people do on the intelligence of the
agent side, not so that they get better at coding,
but so that they get better at working with people. Yeah. Yeah.
That makes sense. I mean,
the good thing is you can have some type of quick conversation
with the user around their preferences and how they like to work, and then layer on
the sort of real-time feedback and learning, and understand a lot more about how they work
and so on.
Yeah.
Roughly how big is the team now?
Oh, on the engineering side, we're probably just over 20 or so engineers, and then we
also, the entire company as a whole is around 40 people now.
Almost 50!
This is the magic number!
You get stuff done.
We were just talking to the previous guest about how Steve Jobs set up a 50-person team
to develop the first Apple product and the Tesla autopilot
team was right around 50.
There seems to be some magic number there.
So it seems like it's a fantastic time for the business, where you have a product at scale
but still a small team.
Yes, you have those like two pizza teams here, but everyone kind of knows each other's name
basically.
You're still a tight knit group.
Anyway, anything else, Jordi?
Awesome stuff.
Thank you so much for stopping by.
Thanks for joining. This was fantastic. We'll talk to you soon.
Have a great day. See ya. Bye.
Really quickly, let me tell you about Bezel. Your Bezel concierge is available now to source you any watch on the planet.
Seriously, any watch. Go to getbezel.com. And we have our next guest, Eoghan McCabe, coming into the studio to tell us the story of Intercom. How you doing?
Doing good. I did just sprint three and a half blocks.
So no blocks
It's all good. We'll just do more ads.
If you do more ads, does that mean ads for Intercom?
We officially... pretty soon, pretty soon. I think you're breaking the news.
You're breaking the news.
You're breaking the news.
Damn it.
No, it's good.
You know the way it works with the pharma companies
where they kind of own the news networks.
Is that a similar thing?
That's the goal here for enterprise.
What favors do I get?
Can you do a hit piece on Brett Taylor?
Yeah, shots fired.
Shots fired.
I'm just, no, he's a great guy.
We'd just like some hit pieces on our competitors, please.
Yeah, of course.
We're lucky to not be in the hit piece business.
You sponsored the wrong show. Yeah, yeah. I think just buy like a hundred thousand subscriptions to The Information, and then start putting pressure on them. Yeah, you might want to look into this company.
I would be down to do a hit piece about technological stagnation.
Yeah.
I hate stagnation.
And so I would want to take down that as a concept.
Really slur that whole concept.
Or closed IPO windows.
Be prepared for a terrible hit piece on closed IPOs.
Or hit pieces on just CEOs that take their foot
off the gas.
Totally. You obviously have not taken your foot off the gas.
I've got two feet on the gas.
I think that's possible.
It's a bit irresponsible, but.
Yeah, walk us through the story that you posted,
how you rebooted a 15-year-old decelerating business.
I wanna hear this from you. Kind of set the table for us and then we'll
walk through the story, because I think it's fascinating. Yeah, sure. I mean, you know,
it's a 15 year old business. It's a successful SaaS business. We're in the service game.
But at the end of our kind of first chapter, things slowed a little. We were unfocused,
bad commercial decisions. This happens to successful companies that become a victim of
their own success and comfort creeps in. Definitely 2020, 2021 were some comfortable culture times. And I
got sick, I had to leave. So it's a big long story that ultimately comes down to the fact
that we lost our way a little bit. And we had like five quarters of decelerating revenue. I came back midway through the fifth, and it was looking kind of gloomy.
And the two things we changed were we went back to good old fashioned SaaS fundamentals,
pricing that people liked, selling the product in a way that people liked.
They used to have to like talk to sales for everything.
And just those simple things,
becoming super customer-first, started
to really reaccelerate the previous SaaS business.
In the last eight quarters, the growth rate of the SaaS business
has increased by 10x, which is really remarkable.
But then, of course, we jumped on AI.
And we were kind of OG AI guys.
We had dabbled, not dabbled.
We had developed real AI products before, but they were
baby AI compared to what we all have today. But as soon as GPT
3.5 came out, we all just jumped on that. And we saw that there
was opportunities for this whole new category where you can
create what we call now customer agents doing all of the things,
customer success and service and sales and marketing, that humans used to do.
And that just propelled the business even further.
Finn, our customer agent, is now the best performing
in that category in our benchmarks.
We win every bake-off against our
primary competitors.
We have the most customers, the most ARR.
So we're kind of this very weird story
that I don't know any comparisons to,
where we're previous generation SaaS
that's actually winning in the category in AI.
I think it's hard.
It's really, really difficult for the previous generation,
the slower, older cultures, to work in the age of AI.
It requires a lot of agility and dynamism.
I often mess up that word, but it really does.
Talk to me about the different break points
for growing a company.
I feel like mentally I think about it as like,
just the founders, maybe the first 10. Then we were talking to a previous guest about this breakpoint at like 50 people.
There's something about the magic of a 50-person team, where everyone knows each other's name.
Then maybe there's others.
The group has 47 senior
engineers and researchers, right in that 50-person sweet spot.
But I feel like there are, in the story of startups,
we often map them to funding rounds,
seed rounds, series A, series B,
and sometimes the head count grows in line with those,
but I feel like head count growth
might be more of a factor in cultural drift,
and I wanna go through some of the key moments
where you feel like, you know,
it was only one foot on the gas,
or the foot came off the gas,
or what are the upstream drivers of that?
What are the things culturally that you think startups
need to get right at various scales as they grow?
Because I feel like there's always these different moments when you're
scaling up and you have a whole bunch of decisions
to set the culture, and you have a pretty limited time
and you're focused on product and revenue and growth
and all these other things, but culturally,
there's some very important decisions that get made
at every, I don't know if it's every order of magnitude,
but there's these key milestones.
And tell me the story of the milestones in your mind.
Maybe it's shifting offices or fundraising
or headcount milestones, but what changes
and what advice would you have for founders at every stage?
That was a five minute question, outstanding.
Sorry, I've given you a hard time.
Look, there's a kind of intellectual set of answers to this that you can kind of break down and break it into tips.
There's a kind of a more abstract thing, which is both, you know, in good instances, self-aggrandizing
for someone in my position, but then also bad news in other instances.
And the answer is that it all comes from the top.
In the early days, the founders, certainly founders that, you know, have
any degree of success at the start,
bring a phenomenal amount of energy, conviction,
whether it's founded or not, you know, just belief, obsession,
intellectual curiosity, excitement, passion, a lot of intangible things.
And that really drives great people.
All of us want to make great money in this industry.
And that's awesome.
And I really think it should be celebrated.
People are too shy to talk about that.
But they also want to be part of something meaningful and exciting.
And they want to work with people that inspire them
and make them want to push themselves.
And so the reason a lot of these older generation companies lose a lot of steam is that just for
very obvious human reasons, the person on top is not pushing in that same way. When you have 15
years of SaaS, how exciting is every day going to remain? Like, honestly, the first year you're like,
cool, SaaS, churn, huh, wow, okay, I get the math. And then in year two, you're like, okay,
churn, get it, cool, raise some money. Year three, roadmaps. Year 15 of SaaS? You're done.
You're not bouncing to the office every day. And people will pick up on that
all around you. Of course they will. And then you don't push yourself in the same way. You
don't really pitch the opportunity to new employees. You settle a little bit because
life is hard. You've got other priorities. Maybe you've drifted a little bit. You've
got side projects. Some people end up with families, girlfriends, ex-girlfriends. Like
life gets way more complicated than it is for a 26 year old kid who just
moved to San Francisco and has one of those buzz cuts and the curly hair on top.
Life just gets more complicated and that's what happens.
And so part of our secret is that AI reinvigorated us.
I would not still be doing this if we were just doing SaaS. SaaS is not only kind of easy, but super boring to me now.
That's OK.
Hopefully, AI and whatnot will get boring too,
and there'll be something new.
And so again, we could break it down and get all mechanical
and try and pull out some tips and tricks and advice here.
But really, it just comes down to energy.
And so for anyone who would want to reinvigorate
their company, the question is,
how can you reinvigorate yourself?
And I see a lot of founders of late stage companies,
many of them public,
you kind of haven't heard from them for years,
their stock price has gone sideways for five,
maybe seven, eight years.
And I'm like, what are they still doing?
And I wonder, are they able to admit to themselves that like, they
don't want to do this anymore?
And if you don't want to do it anymore, make a change, like kind of move on.
Um, and so I think a lot of people just, they struggle with that moving on and
making that decision because their whole identity and sense of purpose and
validity in the world comes from, I'm CEO of whatever.
So it's like this deeply human, squishy, spiritual challenge rather than an MBA-type
challenge.
What about bringing in young people to kind of keep that reinvigoration process
going? I'm just thinking about, you know,
Zuck is paying so much to bring in Alexandr Wang
from Scale AI, at the same time, you know,
like the level of energy that Alex is going to bring
to that organization is potentially worth a lot, you know?
Yeah, but at the same time, Zuck is super high energy.
Yeah, but there's another world where you surround yourself with...
So I think low energy people hire low energy people. High energy people hire high energy people.
Yeah, but I guess what I'm getting at is the trap of, you can be the high energy founder, but as your business becomes more serious,
People keep telling you like bring in the seasoned executives.
Bring in the grey hairs.
The people who will keep the steady hand on the tiller.
And that can lead to lower dynamism
in your organization. Is there a hack to just hiring crazy young people and
empowering them to be in the C-suite, whether or not they really deserve it by traditional
standards?
Yeah, like the challenge is super obvious, which is these young, crazy, energetic, optimistic,
wide-eyed people are super messy, super sloppy.
They get in fights, they get upset, they show up hungover and late.
They don't know how to do larger company professional things.
And so part of the problem is that larger companies, to scale and get more efficient and
become global organizations across many offices and time zones, introduce a lot of
regularity and iron out the chaos. So part of it is you have to be willing to entertain chaos.
You have to be willing to put younger people in positions of influence and let the chips fall
where they may.
It's possible to give them roles
where they don't have to engage with the entire organization.
Like we've definitely got roles at Intercom
where you have to collaborate across two time zones,
sorry, across eight time zones and two different teams.
But then we've got other positions
where you've got one super smart guy, he's 30,
which is 10-plus years younger than the execs.
But you give him one thing he can do on his own,
and he'll crush it.
So part of it is knowing how to work with these people.
But also, it's a special type of X-factor young person
who knows what they don't know.
And yeah, the degree to which this is a talent game
and that people are not fungible is not recognized at all.
People imagine like, oh, you lost one person,
you get a backfill.
Entire organizations just flip and change completely
when you change out the individuals involved.
So yeah, it's not easy.
Do you think venture should take
turnarounds more seriously?
Like in some ways you were your own turnaround CEO,
but one of the, I think the issues of the venture industry
is let's say a company becomes a unicorn,
has a hundred million dollars plus of ARR,
and then the sort of growth starts slowing.
Maybe the CEO gets bored or whatever.
They just start partying,
or they start going to Europe.
And the VCs kind of write it off,
and they're like, I made my return,
or at least I'll get my money back.
But at the same time, I mean, in private equity,
there's been empires built around
the turnaround.
And in some ways, think about a talented founder,
maybe they took their first company through YC
and had a nice exit.
A lot of those people could go to a company
that has like a hundred million of revenue
and like a big customer base
and like actually make more money
and start on, you know, second or third base
and take, you know, you can make quite a lot of money
taking a business from a hundred million
to hundreds of millions of revenue.
And that can sometimes be easier
than taking it from zero to 10.
Totally.
Yeah, what do you think?
I think, theoretically, you know,
VCs at their best are pattern matchers,
and turnarounds don't fit the pattern.
You know, think of all of the most successful
and exciting zero-to-one stories of
technology over the last 20 years. They invented a thing, something, something,
something, it's worth 10 billion. Like, it's kind of that. It's like, yes,
sometimes it takes a little bit longer, there's a slightly circuitous route. But
it's not the company was totally failing and they had to reinvent themselves and
then they became the biggest thing ever. So, you know, for VCs, I just think it's
really, really hard for them to get the underlying narrative and the underlying
story. This is where PE comes into play, but PE has all of its own problems too.
And those guys want deals, and they won't be exciting to a lot of people who
started venture-backed companies.
It's straight up difficult.
And to my point previously, the idea that talent
isn't that fungible.
It's like any given company, if you replace the founder
with even another highly competent founder,
they're probably not, the chance that they're right
for that opportunity and idea, like look,
there's so many people, you know, so much more accomplished than
I am, but I'm pretty accomplished. I know how to run and build and reaccelerate businesses, but I'd
probably be a shitty CEO for 99% of other companies just because, you know, that's not what I do. And
I don't have any experience there, et cetera, et cetera. I don't even know the people there. So I think
people should be bearish on turnarounds. You know, turnarounds don't really work.
They're generally a failed thing.
Yeah, somebody will figure it out.
Maybe it's Jeremy Giffon, maybe he'll do it.
Yeah, well, that's even a different strategy,
but yeah, I think this idea of like,
you need to kind of read what you need to-
I like the idea of bringing in like a crack founder
into a company.
That's a-
Yeah, but a crack founder wants to do their own thing.
They want to start from scratch.
They want all the equity themselves.
Like the recap alone that it would take
just won't be palatable to existing investors.
Failed companies are just generally doomed to fail.
And when there are so many opportunities out there
as an investor, you know,
you gotta just like not try anything novel.
Yeah.
It's like no one got fired for-
And in your case, it's like a little bit of luck,
the timing of like you going back in, GPT-3.5,
seeing the opportunity for a new product, all this stuff,
but you also had to make the choice to risk your own ego
to go back in and if revenue had decelerated
for another five quarters,
you'd be sitting there being like,
yeah, maybe I'm not as good as I thought I was, you know, and you have to kind of live with it.
It's totally true, but I got to cheat a little bit because when I was out, I was like sick.
I had been beaten up in the press.
I was like, just my confidence was pretty low and I didn't really have a lot to lose.
And I felt like I was without purpose.
I always wanted to be independently wealthy and free. And I finally got it. It was in many ways magical and then completely
boring. And so when I had this opportunity to go back, have purpose and I had nothing
to lose, I took it. So like it's easy now to tell this maverick story. You're so brave.
Look what you did. you took a big risk.
No, you have nothing to lose. You'll just go for it. And I think part of the secret
is if people can separate themselves from their egos a little or work on their egos
or learn to love their egos and not be run by their egos, great things are possible.
Most bad decisions are made just out of fear. And the fear is driven by
just fear of public failure and embarrassing yourself. I found myself unafraid to embarrass
myself. Look at how I'm speaking to you now. It's not fully true. The ego is still there
and present, but the smaller and weaker it gets, the more freedom you have.
It's fantastic. Well, thank you so much for stopping by.
Always a pleasure.
We could yap like this for hours.
Yeah, I feel like people are gonna listen to this
as like a little founder therapy.
100%, I was like, that's cool.
We can do a little therapy corner.
It's amazing, yeah.
Once a month, you come on.
Pump up speech.
Pump up speech.
It's great.
This is great.
Well, you can do a diet of meditation if you're interested.
That'll be the next one.
Thank you, Jens.
Hey, this is Cheesy.
I want to give a shout out to my friend Stuart.
That's it.
I promise I do it.
Amazing.
Shout out to Stuart.
Air horn for Stuart.
Do we need to ring the gong for Stuart?
What did Stuart do?
We got to ring the gong for Stuart?
OK, ring the gong.
Yeah, he's had a big year.
He's had a pretty big year.
Congratulations to Stuart.
Stuart, let's go Stuart.
Congratulations.
We will see you soon.
Have a great rest of your day.
Talk to you. Peace.
Up next, we're staying in the Irish hour.
We're going over to Stripe.
Stripe.
Luck of the Irish at Intercom.
We'll check in on how the luck of the Irish
is treating Stripe.
We got Jeff from Stripe. Welcome to the stream.
How you doing? The moment we've been waiting for. We're so sorry for, a couple weeks ago.
It wasn't our fault. It wasn't our fault. The geopolitics is currently outside of each of our controls.
Yes, well, that wasn't, that wasn't even geo.
That was a South African attacking an American on the timeline.
A reality TV star.
Yeah, former reality TV star.
Yes.
Jordi, I have to say it's really awesome
to see you in this format
because you and I have been Zooming
for I think almost a decade now
and now it's live in front of all these,
this great audience.
It's really great to see what you all are up to.
It's a bummer.
We've never met in person.
I've had so many Zooms with you in this exact room.
I have a theory that you'd never leave this room, actually.
Well, we're busy.
Yeah, you're busy.
What is the major update?
We wanted to have you on to talk through it.
Can you break it down for us?
I mean, I think it's more of a conversation.
Jeff's evolved his role over the last year.
He was running point on Atlas, made it a platform that a meaningful percentage of
C corps, I think, are started on Atlas today.
Yeah, about one in six now. But about
halfway through last year, we looked at what was happening in AI and started to
get really serious at Stripe
about not just the application of it inside of our business
for preventing fraud and running our own
payments foundation model, but also to help developers
and businesses and consumers get ready
for when AI starts to come to commerce.
I'm still a little surprised that we got self-driving cars
before ubiquitous online commerce mediated by agents.
But you can really start to feel that AI is now coming very
close to commerce and will be part of buying decisions,
discovery, execution of transactions, and new ways
that businesses can find their audiences online.
I mean, I'm really quite impressed to see the rate at
which discovery has changed.
And it feels like around the corner,
commerce and AI is gonna be very closely mediated.
Talk about maybe some early product experiments,
what you guys are experimenting on,
what you guys have already rolled out, all that stuff.
Yeah, we've been trying to work with
the fastest growing companies
as they push the frontier of agentic commerce.
So one of the first we worked with was Perplexity,
where they have this buy with pro package
inside of Perplexity,
where they show great e-commerce search results.
And then when you go to buy,
you're not going to the merchant's tab
and dealing with the merchant's webpage. You are actually just clicking buy, and in the background a Stripe virtual card
is spun up and given to an agent or any other automation process, so that you can
just have a completely seamless experience of buying in situ to where
you're doing discovery. And we're starting to see that in more and more
places. So recently HipCamp, which is, you know, the cool kid way to book camping online, sort of Airbnb for places, started to partner with Stripe
to make national parks and state park inventory available to a wider audience, because
many of those checkout pages are hard to use. That inventory is not natively online, but
these are amazing places for people to be able to camp.
But there was just a huge amount of friction.
I remember as a kid, there was a place my family used
to always go camping, and my dad would wake up at 5 AM
and just be refreshing this terrible site when
the campsites would be released.
So ready for the age of agentic commerce.
And then it was very unreliable payments.
It was the equivalent of a streetwear drop, but with, you know, some state park managing it.
I think we're gonna see this more and more where the inventory of the world is getting closer and closer to intent
and agents are a way to bring that, bring them together.
And then it opens up really interesting questions that Stripe is trying to help developers answer. What is the developer experience for being able to
execute those purchases? We have this new order intents API that we're trialing, where
you can just give a product URL and one of our agents will go buy it on your behalf.
We have new ways for businesses to be able to start to expose their inventory to agents in a safe and permissioned way.
And then as a consumer, it is reasonable to think, actually, that
agentic processes are the last place you'd want loose controls when it comes to money.
You actually want that to be incredibly permissioned, safe, deterministic.
You know what's going to happen.
And so you can expect that the Stripe APIs are going to evolve for
a new type of user in the world,
which is an agent that can safely be delegated
with your permission to buy on your behalf.
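Jeff doesn't spell out the shape of the order intents API here, so as a toy sketch of the "permissioned, safe, deterministic" property he describes (not Stripe's actual API; `PurchaseMandate` and `execute_order_intent` are invented names), here's an agent purchase that only goes through inside an explicit, user-granted mandate:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PurchaseMandate:
    """Explicit, user-granted permission for an agent to buy on their behalf."""
    allowed_merchant: str   # host the agent may buy from
    max_amount_cents: int   # hard spending cap
    currency: str = "usd"


def execute_order_intent(product_url: str, price_cents: int, mandate: PurchaseMandate) -> str:
    """Deterministically check the mandate before any money moves.

    Returns 'approved' or raises with a clear reason; never a silent failure.
    """
    merchant = product_url.split("/")[2]  # naive host extraction, fine for the sketch
    if merchant != mandate.allowed_merchant:
        raise PermissionError(f"mandate does not cover merchant {merchant!r}")
    if price_cents > mandate.max_amount_cents:
        raise PermissionError(f"price {price_cents} exceeds cap {mandate.max_amount_cents}")
    # In a real system this is where a single-use virtual card would be
    # issued and handed to the purchasing agent.
    return "approved"


mandate = PurchaseMandate(allowed_merchant="store.example.com", max_amount_cents=5000)
print(execute_order_intent("https://store.example.com/tent-123", 4200, mandate))

# A purchase outside the mandate fails loudly rather than going through:
try:
    execute_order_intent("https://other.example.net/tent-123", 4200, mandate)
except PermissionError as e:
    print("blocked:", e)
```

The point is the deterministic check sitting outside the model: the agent can be as clever as it likes, but the user's mandate is enforced before anything is charged.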
Can you talk about Stripe Link
and how that product might fit into
a product like Perplexity?
It feels like it's great if,
it's one of those classic things in AI and tech is like,
okay, great, it surfaced the right product for me,
now I want it to buy it for me even faster.
Now I don't even wanna go through
the checkout process at all.
As soon as I get the current thing, I want the next thing.
So how do we see that playing out
with just making that commerce experience even more seamless
or happening entirely inside of a chat interface or an agentic interface?
Yeah, you know, the borders of the internet are starting to blur. And so you will soon be
able to experience, if you search for something on ChatGPT,
they already have these cute little shopping cards
that link you out.
If you're sitting in Cursor and you need access
to a database, Cursor can recommend Supabase
and even start to accomplish your homework for you
right in the editor.
But there is this like missing moment here, right?
Where, okay, now I know about these products,
what am I supposed to do? Go to a new tab, do an offline kind of feeling search, go through a bunch of blue
links, find the website, go to the website, make an account, deal with the password problem,
get a bunch of weird emails to confirm my password, find the settings page where I can
get the billing information, pick my billing thing, put in my payment credential, get my API key, walk it all the way back.
I think we will start to see this as this loop that we've all been operating under for
the past 20 years of the internet as very arcane, very quickly, whereas you just want
to delegate your payment credentials to a safe, trusted place.
And Stripe Link is this payment wallet we've made
over the last few years,
which is a cross internet payment wallet
that works with cards and bank accounts
and future other payment methods
where if you log in once to Link,
then you will be able to delegate safely
your permissioned credentials
with a virtualized token
such that you can safely hand it off
to a good robot to buy on your behalf.
And so we see this as a new borderless way
that commerce can happen in a very permissioned, safe fashion.
Yeah.
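As a rough illustration of the delegation Jeff describes, here's a toy model (invented names, not the actual Link API) of a wallet handing an agent a scoped, single-use virtual token instead of the underlying card:

```python
import secrets


class LinkWalletSketch:
    """Toy model of delegating a stored credential to an agent via a
    scoped, single-use virtual token. Names here are invented, not Stripe's API."""

    def __init__(self):
        self._tokens = {}  # token -> {"merchant": ..., "cap": ..., "used": ...}

    def delegate(self, merchant: str, cap_cents: int) -> str:
        """Mint a token safe to hand to an agent; the real card never leaves the wallet."""
        token = "vtok_" + secrets.token_hex(8)
        self._tokens[token] = {"merchant": merchant, "cap": cap_cents, "used": False}
        return token

    def charge(self, token: str, merchant: str, amount_cents: int) -> bool:
        grant = self._tokens.get(token)
        if grant is None or grant["used"]:
            return False  # unknown or replayed token
        if merchant != grant["merchant"] or amount_cents > grant["cap"]:
            return False  # out of scope
        grant["used"] = True  # single use: a leaked token can't be replayed
        return True


wallet = LinkWalletSketch()
tok = wallet.delegate("campsites.example", cap_cents=3000)
print(wallet.charge(tok, "campsites.example", 2500))  # True: in scope
print(wallet.charge(tok, "campsites.example", 2500))  # False: already used
```

A leaked or replayed token is useless outside its scope, which is what makes it safe to hand off to a good robot.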
How are you thinking about
agentic commerce and stable coins?
A lot of, you know, there's a lot of commentary around stables
and how they can be applied here.
Oftentimes the people that just sort of default
assume that agents and agentic software will use tokens,
you know, whether they're stables or other tokens,
they usually have crypto, you know, funds
or crypto companies, right?
So I've had maybe a more middle of the road view
where I can imagine agentic commerce experiences
leveraging stable coins.
I can also imagine them leveraging cards and ACH
and a bunch of other sort of forms of payments.
So I'm assuming you've spent a lot of time
thinking about this and you guys have obviously
been acquisitive recently with Bridge and Privy as well.
Yeah, this is one of the areas in which Stripe
is very problem solving oriented and not technology
or particular technique religious.
We think that humans are going to have a variety of ways
that they want to pay and hold money.
Stablecoins are a phenomenal way for many people in the world to hold funds, and for businesses
to move them across borders.
And so we expect that stablecoins will be a very popular way for consumers and businesses
to just interact with each other.
Then you have businesses who are also going to have, you know, they're going to have a
long adoption curve when it comes to accepting and holding crypto
assets.
And then in some purchases, stable coins might make sense between two parties that natively
know how to interact in stable coin.
But often it might be the case that Jordi has an Amex card and the seller is expecting
an ACH transaction.
And we're sort of missing a universal way
for all these types of currencies and rails to work together. Visa also announced a new
way of being able to hash your card and give it to an agent with this Visa agentic token
where Stripe is one of the first partners to implement it. And I think we're just going
to see this new proliferation of new ways that money can transact between parties.
And we're going to need some type of sort
of Babel Fish translation service across all of them.
Because if you're going to pick one route,
then you're going to likely exclude
many of the agents, humans, and businesses in the world.
That makes sense.
How are you guys thinking of, not to go too broad,
but the business model of the internet? Agents
change things. The internet today is heavily reliant on
advertising, and if you have a bot
just crawling a website...
or even when you look at other services.
And so we've talked, Ben Thompson had some good writing
around just like what the future
business model of the internet could look like and potentially micro payments.
But I think the takeaway from that, our takeaway is like, there's so many different stakeholders
that would need to find some type of alignment.
It's hard to see like the obvious path forward here.
Yeah, I think the universal want from businesses is just more
channels to reach their customers, and to be able to do so in more direct kinds of ways.
So if you went to a SaaS software provider and said, I have two choices for you:
you can have this very cool large budget for a 101 billboard
and kind of hope that at 85 miles an hour developers see your ad
and remember to implement it later, or you can be in situ as they're working,
have agents mediate the purchase, recommend it, and integrate
and accomplish your thing in five seconds right inside their editor.
Okay, well, first of all, I'll do both.
But the second one also sounds very nice,
because I'll be able to directly attribute
where it came from, have a great
CAC for that, and the LTV should be even higher
because the robot even integrated it directly.
And so I think that we're gonna see
new channels emerge for monetization,
both usage-based through MCP or other ways that businesses
are going to expose their APIs to agents, but also for transaction-based referral fees,
which will supplement affiliate.
And then I think there'll be a new way for businesses to make sure that their agents
can read their docs, can read their product SKUs, can have access to that information
in a new permissioned way.
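Usage-based exposure of a business API to agents, as described above, might look something like this minimal sketch. `MeteredTool`, the per-call price, and the `lookup_sku` tool are invented for illustration; this is not a real MCP SDK, just the shape of metering calls so billing and attribution ride along with each agent request:

```python
from collections import defaultdict

class MeteredTool:
    """Wraps a business API function so each agent call is metered."""

    def __init__(self, name, fn, price_cents_per_call):
        self.name = name
        self.fn = fn
        self.price = price_cents_per_call
        self.usage = defaultdict(int)  # agent_id -> cents owed

    def call(self, agent_id, *args, **kwargs):
        # Record usage for billing before executing the underlying API.
        self.usage[agent_id] += self.price
        return self.fn(*args, **kwargs)

# A hypothetical product-catalog lookup exposed to agents at 2 cents/call.
lookup_sku = MeteredTool("lookup_sku", lambda sku: {"sku": sku, "in_stock": True}, 2)
result = lookup_sku.call("agent-123", "SKU-42")
owed = lookup_sku.usage["agent-123"]
```

The same per-agent ledger could carry referral attribution for transaction-based fees, which is the supplement-to-affiliate idea mentioned above.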
I really liked the Karpathy post from last night
where he basically said that if your docs involve a click,
that's not good, because agents want to act, not click;
they don't want to only read, they want to start acting.
And so that's why, if you go to the Stripe docs, we really push:
hey, here's our MCP, where you can just talk to the primary, best way
of integrating Stripe and it can do it on your behalf,
rather than just reading something
from a three-year-old corpus.
Interesting.
Last question for me, we wanna move on,
let you get back to your day.
Stripe was famous early on for having this crazy
kind of open culture around emails, that anyone
from the entire organization could read them. That seems like incredible foresight for the moment
today, because you don't have all this private information where it's, oh, do we train on that
or not? You could very easily fine-tune a model or do some sort of
embedding on the emails that are already deemed
worthy of the entire organization reading them.
Is that still part of the culture?
Is there a tool, if you join Stripe, where you can get up
to speed without needing to read every email,
but you can kind of get the Stripe way of doing X, Y, or Z?
Talk to me about Stripe's culture.
Stripe, you know, has a very serious writing culture, where any decision I've been a part of for the last seven years,
I can really point to some Google Doc that has the pros, the cons, and the decision, as well as the email culture you mentioned.
It's very commonplace at Stripe that if you spoke to a customer, or even
after going on TBPN, hi, I went on TBPN, you just CC a notes list, and now it's available
for anyone who wants to subscribe to the notes list.
But one of the major subscribers to notes list now is agents.
And so if I'm in Slack, we have this really awesome bot called Trailbot that's read
the paper trail of everything that we've done that's permissioned to it.
And I can say @Trailbot in any Slack room, and it has the context both of the team Slack room I'm in,
but also the full corpus of Stripe and all of our permissioned wikis and documentation and internal tools.
And it serves as the first line of defense
for most questions immediately. We actually have it to the point where it knows to jump
in automatically without even being asked. And so I find that most of the time we're able
to just @ Trailbot and answer a lot of questions. And then increasingly, these agent tools,
which I think are going to apply to commerce quickly too, are not just read-only.
They're going to start taking write actions and purchase actions. For Stripe internally, write actions might be to roll back that deploy, or to
auto-communicate to that customer because of an NPS score under 10, which we do often.
Hopefully not too often.
But then in the real world, if you want to take some of these actions,
you're going to need to prove who you are, pay for it,
make sure the merchant was able to accept that money, get the entitlement, and move on.
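The gate described here, where an agent must prove who it is and hold the right entitlement before taking a write action, could be sketched as follows. The signature scheme and the entitlement table are hypothetical stand-ins for illustration, not Stripe's actual mechanism:

```python
# Map of agent -> write actions it is entitled to take.
ENTITLEMENTS = {"agent-7": {"rollback_deploy"}}

def verified(agent_id, signature):
    # Stand-in for a real identity proof (e.g. a signed token check).
    return signature == f"signed:{agent_id}"

def perform_action(agent_id, signature, action):
    """Only execute a write action for a proven, entitled agent."""
    if not verified(agent_id, signature):
        return "denied: identity not proven"
    if action not in ENTITLEMENTS.get(agent_id, set()):
        return "denied: missing entitlement"
    return f"ok: executed {action}"

result = perform_action("agent-7", "signed:agent-7", "rollback_deploy")
```

The design point is that the checks run before the action, so a purchase or rollback can never happen on behalf of an agent that hasn't proven identity and entitlement.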
Yeah. Even something as simple as like you show up to a new company, Hey,
there's this system over here that we're using and I don't have access.
You might go to a Wiki and ask, how do I get access? Now you just ask,
and it just does it for you.
It's so interesting to think about whether there's some type of user flow where, if somebody sends a Slack message,
there's a tiny delay built in that gives a bot an opportunity to actually answer first.
Otherwise every message is going to waste time, yeah.
A new version of the shadowban, where you first get your question answered:
do you really want to ask this question? Because it was answered here, here, here, here, here,
and here's our recommended action.
It's like pro autocomplete, it's amazing.
It'd be interesting, yeah, if Slack just becomes
completely silent because everybody's just doing things
and immediately getting answers.
Those of us who have nerdily taken notes and made docs
over the years, that is somewhat of a... okay.
We've been waiting for this moment.
It was worth it, yeah.
That's amazing.
Yeah, yeah, yeah.
Made fun of by some people for a long time,
but it all came back.
Well, thank you so much for stopping by.
This was great, Jeff.
Always welcome.
Yeah, we'll have you back soon.
Great to see y'all.
We can talk more.
Talk to you soon. Cheers.
Bye.
Let's give it up for Jeff.
Next up, we have Garrett from Handshake coming in.
He was mentioned in The Information.
We've been mentioned in The Information. It's a bunch of
Information boys hanging out in the chat. We love The Information. We love, we love The Information.
Thanks so much for joining. Garrett Lord, the nominative determinism is strong.
Yeah, and I think we have something else in common: the sauna-ing. I'm a big, big sauna guy. No way. There we go.
Yeah, yeah the sauna is important.
We're devastated that when we moved into this new studio,
we don't have a good sauna setup. But we'll figure it out eventually.
The cold plunge can fit nearby though. I mean, there's still opportunities.
Yeah. Maybe we got to get in the cold plunge game. Anyway.
In full suit.
Anyway, kick us off with a little introduction on the business. Obviously, it's in the news today.
We covered a little bit about it earlier,
but I'd love to get you to explain
the business, a little bit of the history,
and the positioning of the company.
Yeah, for sure.
So I mean, the business started way back when I was in college.
I started Handshake out of a personal pain that I faced
in trying to find my first internship and first job.
I went to a no-name school in the middle of nowhere
called Michigan Tech.
It's awesome if you love to ski or love the cold,
but if you wanted to break into Silicon Valley,
nobody had really recruited there before.
Fast forward to today,
Handshake is the number one place that young people
in America start, jumpstart, or restart their career.
We're like kind of an 18 to 30 early career network.
There's a million employers that use Handshake.
So it's where the vast majority of employers recruit undergrads, interns, and people
right after school.
And there are 18 million students and young professionals using the network.
We also power about 1,600 universities in the country.
And the background that I think is important for this very moment: about 18 months ago, many of the frontier labs, as well as the large annotation-engine
companies, started reaching out to us, basically beating down the door,
asking: do you have access to PhDs? Do you have access to master's students? And
for us, that was incredible. I mean, we have 500,000 PhDs in the network.
We have 3 million master's students in the network.
There are tens of millions of undergrads in the network.
And we started serving these players with experts
as the world of training frontier models has evolved
from generalists, like drawing bounding boxes around stop signs,
to experts today: experts in law, finance, medicine,
mathematics, physics, chemistry, biology.
These labs are really hungry for reasoning data
to help improve, with a human in the loop,
the actual frontier of what their models
are capable of delivering.
Let alone in the future, when you talk about tool use or trajectories. So they started reaching
out to us asking, do you have access to these PhDs and master's students? And we started
providing; we were the leading provider of all this talent. And what we really started to
realize is that people weren't getting paid on time. They were really confused. They would
go through training and kind of get dropped out of a leaky bucket.
We heard from students that were successful on it, that they love the money.
They love learning more about some of this AI tooling.
They wanted to use AI tools in the classroom.
They wanted to use it in their research.
And so given that we have this huge supply and zero customer acquisition costs, we started
building a human data business.
And really in the construct of building that business,
the focus is really around like,
how can you also think about evolving
and automating a lot of the recruiting practices?
Recruiting is still, you know, it's sourcing,
it's screening, it's scheduling.
There's a lot that AI can bring to bear on that.
And so now, fast forward to today: in the last six months
we've been working with six of the frontier labs.
We provide them tens of thousands of experts.
That's a lot of them. I didn't even know there were six. I thought there were only five. The big six.
Count them up.
You got them all.
Yeah. And we provide them with experts to help make their models more effective.
Very cool. Talk to me about how the frontier labs are thinking about human data annotation and answer generation.
It feels like we might be at the end of that story soon, or maybe we're shifting into more
of a focus on the areas that are less verifiable: less "write the answer to an IMO-level math problem" and more in the biology and legal contexts where the models are falling behind. Where are the pockets of value? Where's the most demand within the human data generation industry? And where do you see it going over the next couple of years?
Yeah. So maybe I'll go from the latter part of the question first. Where we see it going over the next couple of years: it's definitely
going to evolve into audio, into tool use, into trajectories. Experts will be
needed to provide data. Imagine almost recording your screen as you're conducting
a task. Maybe you're building a slide deck; or, in an investor
context, doing a DCF and doing competitive research.
They want more data to be able to help improve these models,
especially for agents, right?
And step-by-step problem solving.
As of where the puck is right now
and where the puck will continue to be,
if you talk to a lot of the frontier researchers,
is they need expert data.
And expert data is in basically
every esoteric area of human knowledge.
You know, the models have already kind of sucked up the entirety
of books and YouTube and human knowledge. What they really need
is special data to be able to understand the step-by-step reasoning that's required
in order to fuel the future.
And so if you think about academia, these PhDs, like what is the definition of getting a PhD?
The definition of getting a PhD is like pushing forward an area of research that nobody else has
done before as peer reviewed by your peers. That's how you get your doctorate. And so this kind of
perpetually recurring stream of PhD students and master's students is really valuable in this very
moment. And also, to zoom out to their experience: like, I don't remember
when you were in school, but you can make like 23 bucks an hour being a teaching assistant.
You could drive DoorDash. And we're paying these students
60, 70, 80, a hundred-plus dollars an hour. And we can also connect it
to actually getting jobs. So we envision a world where you get badges on your profile,
and there are leaderboards by school.
And we're actually, I mean, what better way to articulate your skill than actually proving
it by being able to break the model or by being able to provide the model feedback.
And so we believe that we can help you get more jobs with the million employers in the
network, help you build your professional reputation and articulate your skills all the while,
while making like $100 an hour when you want to.
I mean, it's a gig job.
Yeah.
How do you think about financing Handshake going forward?
I'm sure you're generating a lot of revenue.
You're clearly paying out quite a lot to your network.
We were just learning about Surge AI earlier
and what they were able to do while bootstrapped.
I imagine even in the last week
you've had investors reach out saying, hey,
Scale's out of the game.
You want $100 million?
You want to dance?
How are you thinking about the business going forward?
Yeah, I mean, one of the ways we think about this market is, if you don't
have an audience, there's no moat.
What our competitors are doing at some of these companies is they'll have hundreds
of recruiters sitting on top of platforms, sending messages on networks
like Handshake, or spending tens of millions of dollars a month on performance advertising,
trying to acquire experts on Instagram.
You can imagine, if you're a physics PhD
and you get an ad on Instagram
from a company you've never heard of before,
claiming they could pay you a hundred dollars an hour,
it's kind of a jarring experience.
And so because we built a decade of trust
in adding a ton of value to these users' lives,
we have no customer acquisition costs.
And what that means is that we can pass along
all those savings by paying contributors, we call them fellows, it's the move fellowship program,
more than any other vendor on the market. We can also pass
along those savings to the frontier labs. So as you think about our overall P&L,
our gross margin and ability to scale this business, considering the
moat is the network that we built, we sit in an amazing position to grow extraordinarily quickly,
and that's what we've been seeing.
I mean, in the last month we've grown by over 3x, and
it seems like there's a lot of demand that will
continue to be out there.
I can imagine, I had no idea it was that big though.
Let's go.
Let's go. Let's go three hits.
That's incredible. Last question, then we'll let you go.
Are there any weird areas where you think we'll see this type of human data
generation pop up? I'm imagining: AI seems to be at like 150 IQ,
it can write code, and yet it can't book me a flight.
Do we need to take travel agents
and have them go through the workflow,
so models don't get hung up on,
should I sign up for the credit card,
or do I want insurance on this flight,
so that we have a whole bunch of data
specifically about that task?
I'm just interested in this concept of these
economically valuable but highly niche tasks that
we don't seem to be getting closer and closer to one-shotting with the current models.
And I'm wondering if we're going to see a long tail of
hyper-specific business use cases, like what we saw in SaaS, where there would be a Hipmunk
just to help you book flights better.
Is there going to be a flow where there's a new startup
doing AI agents for flight booking,
and they're coming to you
for a ton of data generation around
how to actually book the correct flight,
because it learns whether or not you're okay with a layover,
or how price sensitive you are, all
the things that you would get from the interaction with a human travel agent? Is that
something that you think we'll see, or is that kind of completely tangential?
No, I think it's totally something we'll see. What you just described is a trajectory,
called a browser trajectory. Sure. It's basically: you have a goal in mind. Yeah.
And you have step-by-step thoughts in your mind around how you accomplish
that. You navigate tools, you navigate the browser, you
stitch together your own intuition to be able to
accomplish that task. You might look at your own calendar: when
do I get off work? Look up how long it takes to get to the
airport; it takes me a different amount of time to get to Burbank
than LAX. What's the parking like? It's such a simple task, because anyone can do that job for you, and yet to do it well is actually really hard.
And you talk about just being able to talk to a model, right? You don't even need to log in. So you're going to need audio data, you're going to need trajectory data, you're going to need to be able to interact with APIs. Human experts will be needed for the next several years
to be able to make that data happen.
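A "browser trajectory" record of the kind described here, with a goal, step-by-step thoughts, and tool actions, might be structured like this illustrative sketch. The field names and classes are assumptions for the example, not any lab's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    thought: str   # the expert's reasoning at this step
    tool: str      # e.g. "browser", "calendar"
    action: str    # what was done with the tool

@dataclass
class Trajectory:
    goal: str
    steps: list = field(default_factory=list)

    def record(self, thought, tool, action):
        # Append one reasoning step plus the tool action it led to.
        self.steps.append(Step(thought, tool, action))

# Recording the flight-booking example from the conversation.
traj = Trajectory(goal="book a flight out of Burbank after work")
traj.record("check when I get off work", "calendar", "read today's events")
traj.record("compare airports by drive time", "browser", "search BUR vs LAX drive times")
```

Each `(thought, tool, action)` triple is the kind of human-in-the-loop supervision signal an expert could contribute while working through the task.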
Interesting.
In order to be able to power the frontier
of where you wanna see it going.
Well, that's exciting.
I want to book a flight with an AI.
It still hasn't happened.
That's my own personal Turing test.
Hopefully you can make it happen.
But thank you so much for stopping by.
This was fantastic.
Thanks for your time.
We'll talk to you soon. Thanks, Garrett.
Great to meet you.
Cheers.
Coming in next, we have Tanay coming into the studio,
into the TBPN Ultradome.
A massive round.
Oh, oh, we're going to hit the gong again?
Yeah, we'll let you hit it.
The 10th time of the show.
Always a good time.
There he is.
Welcome.
Welcome to the show.
You got news for us?
Hit us with an introduction.
Hit us with the news.
What's going on in your world?
Think we might be muted.
Tanay, are you there? Can you hear us? I'm itching to hit the gong for you. I hear there's gong-worthy news.
I'm going to send him an email. Okay.
You are live, you are live on TBPN.
Okay, we'll pull him off in the meantime. I will tell you about Wander.
Find your happy place. Find your happy place.
I can hit you with the gong. Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and 24/7 concierge service.
It's a vacation home, but better, folks.
We told you about Polymarket, told you about Linear.
Did we do Vanta? Automate compliance, manage risk, prove trust continuously.
Vanta's trust management platform takes the manual work out of your security and compliance process and replaces it with continuous automation,
whether you're pursuing your first framework or managing a complex program. Let's hear it for vanta.com.
Give it up for Vanta. But if we got an extra minute... Is he here? Are we back? Welcome to the stream. We made it.
How you doing?
We made it.
How you doing?
Fantastic.
Sorry for the audio issues.
Oh, no.
It was a pleasure.
We got to do extra ads.
So you're making my day.
It's a dream.
It's a dream.
You're making my day.
What's going on?
Quick intro.
You've had a big day.
What's happening?
Yeah, thanks for having me.
We announced a $200 million round.
Whoa.
Boom! Boom!
That's fantastic!
Buried the lede, Jordy.
You guys gotta start selling those.
I feel like we need one in our office.
But, yeah, no, we're super excited.
Congratulations.
Commure obviously works in healthcare.
We power AI workflows, everything from ambient to revenue cycle payments
in large hospitals.
Oh, interesting.
Give us a quick history of the company.
Yeah, yeah, yeah, start with that.
Because it's not often I see a $200 million round on a $6-something billion valuation.
I hadn't heard of the company before,
but Josh Browder connected us last night.
Amazing.
And so I'd love to hear your quick story,
kinda how you got here.
History of the company,
I wanna hear about the first customer too.
For sure, so Josh is great.
I've known him since we were at Stanford together, same year.
And the story behind Commure is interesting,
because Commure started as an incubation
inside General Catalyst, very focused on healthcare.
I started a company while I was at Stanford called Athelas, which was focused on applying
language models and computer vision in healthcare.
We started as a blood diagnostics company and then eventually grew into this mid-market SMB OS for physicians. We merged the two companies about a year and a half ago, almost two years ago now.
And then I took over as CEO with our management team.
And so it's really a coming together of these two businesses.
And yeah, I mean, the company powers large hospitals, about 250,000 physicians
and nurses, and we power private practices here in California. That was our first
customer: someone my co-founder Deepika literally walked up to, cold-knocked
on their door, and got to use one of our first devices and remote monitoring
solutions.
And yeah, that's the quick story.
I've got a bunch of questions. I want to kind of contextualize this around the broader General Catalyst discussion, because there was news,
I think just today that Ohio authorities approved the first ever purchase of a US hospital by a venture capital firm.
That's General Catalyst's bid to acquire
SUMA Health, a hospital system in Akron
with over 20 facilities.
And I'd love for you to, I'm sure you've studied this,
what is going on there?
And then is there any sort of synergy across the portfolio?
General Catalyst has had like a very differentiated strategy there, but I haven't had the chance
to dig into it, so I'd love to get you to contextualize it, and then we can go into
how this links to your business again.
So the Summa transaction is super interesting.
It is a venture capital firm buying a health system and transforming it, and Commure is obviously
a big part of that.
We're serving as the office of the CTO.
So our engineers are forward deployed.
We work hand in hand with the Summa IT teams.
We've been working with the revenue cycle leaders,
the clinical leaders.
And it's a really special system.
I mean, it's in Akron.
If I'm not wrong, it's where LeBron James was born, literally
the hospital itself.
And people have been calling you the LeBron James of healthcare AI.
Now we've got to put that out there.
Yeah, I mean, maybe I've been the first person to say it. You might have coined it here.
But many people are saying it. Right now you have the LeBron James of healthcare.
Yeah, now many people. Not just one. Two is many in our book.
We're just going to make that a thing now.
Fantastic.
It's remarkable because running a health system
is super hard.
It is a 1 to 3% operating margin business.
Most of them go out of business.
And I think what General Catalyst believes in
is language models and technology can transform the operating margin
and also lead to better care.
So it's not a PE, you know, cut and juice play.
It really is an investment.
That's awesome.
Talk about Commure's overall product strategy.
You guys have a number of different products.
It seems like a very different, you know,
we've talked with founders and covered companies
that come on and just want to own one,
you know, one key area.
But health care feels like somewhere
where if you can get embedded with the set of customers,
you can, you know, more, you know,
rapidly kind of add products to the platform.
So I'd love to understand the product strategy.
We really look up to businesses like Rippling and Ramp.
Ramp, I was gonna say.
There's this concept where you enter with a wedge,
and in our case that wedge is either ambient AI,
which is a tool that helps a physician document
and really automates the revenue cycle of their appointment, generating the claim automatically.
And then the back office. I mean, when you walk into a hospital, there are tens of thousands of people at large health systems whose sole job is to fill out claims, call up insurance companies, fight denials, fill out new forms.
All of that's going away with LLMs. And our belief is that if you do that as a point solution, as a single little part
of the solution, you might get some initial usage, but eventually the EMRs like Epic, or
companies such as ourselves, will just eat you.
And you have to be that compound startup from the get-go.
And I think payments is a really interesting vector to deploy software.
Ramp has shown it, where you get into the transaction suite
and then you build a whole bunch of tools
for the CFO's office.
We're trying to do the same for a health systems CIO
and CFO.
Can you tell me a little bit of the history
of the healthcare industry broadly and how,
I know that there was like this kind of catalyst
around Obamacare, I remember talking to Jonathan Bush,
the founder of Athena Health
about electronic health records mandates
and there's been a number of changes
kind of at the federal level that have kind of
opened up different pockets of opportunity.
Like what is the story that you tell
about the recent history of healthcare in America?
I think it's fascinating. In the '90s, physicians had amazing lives. I mean, they drove Porsches,
they had work-life balance, they had personal relationships with their patients.
Let's hear it for Porsches. We love to hear that. Let's get back to Porsches.
We need a return.
And all in all, patients got great experiences too because of that personal relationship.
And then the admin work tax just increased.
Everything from insurance to filling out an EMR.
Digitization came in the 2010s with Obamacare and Meaningful Use.
And really, EMRs proliferated.
And Jonathan Bush and Judy Faulkner
and all these people are legends in the industry, because they built Athena,
a $20 billion company, and Epic, probably a hundred-billion-dollar company now, on the back of
that, very quietly and under the radar from most of tech.
I think the theme and the story of today is labor turning into software. And where is
most white-collar labor in America?
It's in healthcare.
Where is the majority of administrators sitting behind
a computer clicking on forms? It's in healthcare.
And we believe that the EMR will be transformed.
We also believe that the labor stack of healthcare
will be transformed and it'll create more operating cashflow
for hospital owners.
Is that narrative of the administrative ratio
or the administrative load increasing?
Is that similar to what happened in academia?
Because I remember seeing these charts
of the ratio of professors.
Everyone loves the idea of a high-functioning university
with a lot of professors teaching students
and a great ratio there.
Everyone's a little bit more skeptical about,
wait, why do we have five times as many people to admin?
Is that the same thing that's going on in healthcare?
And kind of what was the underlying driver of that?
Was it just regulation or lack of tools?
Where'd it come from?
I think it's very similar.
What I will say is, I think in healthcare
it bred more out of necessity, and in academia
it just kind of happened.
In healthcare, there's this game of attrition between the insurance company and the provider.
And they're making it a little harder every month, every year to get an approval on a
claim.
And as a result, the health system needs to add a couple more people in order to fight
those claims.
And then it just kind of built up into this arms race.
And I think the insurers kind of carried the power after Obamacare.
When you look at UnitedHealth's market cap, I mean,
what is it, like a 12x since Obamacare got passed? It's quite shocking.
And the power dynamic, I think, will shift again,
back in the favor of physicians and hospitals, because of LLMs and because of what you can now automate.
Yeah, it was kind of just like the game-theoretic
Nash equilibrium was: hire a lot of admin staff.
Interesting.
There was no other option.
Yeah.
Talk about your personal ambition and the team's ambition.
You're a $6 billion company now. It seems like, you know, it's cliché, but
the way you're talking, it feels like you're just getting started.
Is the job finished?
Yeah, yeah. It sounds like the job's not finished. I don't want to put words in your mouth.
It's not finished. Look, I think when you walk into a healthcare practice, the inefficiency
is shocking. And the positive intent from the physicians and the nurses and
the caregivers themselves is all there.
And I think all it takes is for a company like ourselves to come in and
try to nuke that work tax.
So our ambition is: look, we're going to come after the EMRs,
we're going to come after the payers' revenue cycle businesses.
This is a $4 trillion industry; you can build for a very long time.
But what done looks like is: when you walk into a physician's practice,
scheduling, intake, insurance are all handled. There's no filling out a little
clipboard with the same information again and again. The appointment happens, and
instantly the doctor is paid out.
There's no reason we can't have instant adjudication instead of
waiting 30, 45 days, but it's going to require a system overhaul
and new payment rails to go do that.
And that's really what's at the heart of what Commure is building.
Awesome. Well, this is super exciting.
I'm glad you're doing what you're doing.
And you are now our new healthcare expert and correspondent.
So expect a call.
And the LeBron James.
The LeBron James specifically.
The LeBron James of EMRs.
Yes.
All right.
According to Kobe, of course.
Congratulations on the milestone.
Hope to have you on again soon.
We'll talk to you soon.
Cheers.
Have a good one.
Should we do some timeline?
Fun show.
Fun show.
Yeah, we definitely should.
We got to talk about Sam Lessin's Oracle versus Salesforce.
He's getting in hot water.
You got in hot water.
The timeline's in turmoil.
We love Sam Lessin on this show.
He posted a screenshot.
He says, Oracle...
I will defend big tech.
I will defend Sam Lessin.
Oracle is 2x Salesforce, but Ellison
is worth 25x Benioff. What does this say about the limitations of the SaaS business model?
He said he had a fun riff yesterday with the Slow partners on this. Oracle is obviously crushing it.
But if you take a today snapshot,
basically the market cap of Oracle is 2x Salesforce: $500 billion versus $250 billion.
Meanwhile, according to admittedly directional-at-best data, Benioff's net worth is 1/25th
that of Larry Ellison's: $10 billion versus $250 billion.
What do you learn from that?
What lessons do you draw?
I like this, the revealed preference.
For founders and companies, the old licensing model
is better than SaaS. That's interesting.
Hot take.
Imagine having 10 billion and just getting
Lil Bro'd by Larry.
He has a sort of Lil Bro-ing effect on most people.
There's an amazing story about a famous Lil Bro-ing
where Phil Knight of Nike was worth something like 10,
on the order of like $10 billion,
and he was in like maybe Sun Valley or something
going to a movie, and he runs into Bill Gates
and Warren Buffett, who are just going out to a movie,
and they're both worth 10 times him.
And he's just like, yeah, I had this weird, awkward moment
where I was nervous to meet them for the first time
in a long time, because typically he's the most successful
businessman he runs into all day, right?
But he was just like, yeah.
In his book, Shoe Dog, it's a fantastic book.
Have you read Shoe Dog? It's healthy to get Lil Bro'd.
Shoe Dog is a great book, great book.
And he talks about all the weird effects
of having immense wealth, how his wife
would hoard immense amounts of paper towels,
just because money's no object.
I have paper towels.
What should we do with this?
I gave him the castle.
Got a lot of paper towels.
And they'd figure out, okay, this is like some weird psychological thing going on in my brain.
Like, I don't actually need paper towels. The fact that money's no issue doesn't really matter.
People like to talk about, you know, you're the average of your five friends.
Yeah, it's like, yeah,
well, there should be some similar law,
like your growth rate is tied to how often you're Lil Bro'd.
Yep.
You know?
Yeah, never get Lil Bro'd.
No, no, you want to be getting Lil Bro'd.
Oh, yeah, yeah, that's true.
Yes, you can be on the upward swing.
That's right.
If you're not getting Lil Bro'd enough,
you're not on an upward trajectory.
This is good.
Yeah.
Yeah, this is good.
I've been in that situation before.
Anyways, we could cover what Sam said, but I think we can just skip to... we're also gonna have him back on the show. We're gonna have him back on the show.
He's a regular. We're gonna skip to Miles.
He says, wrong take. Ellison is much richer because he didn't sell shares, and instead, by buying back 2% of the company every year
for 30 years, he's increased his ownership from 17% to 40%.
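The buyback arithmetic behind that claim can be sketched. A minimal sketch, with illustrative numbers only (`ownership_after_buybacks` is a hypothetical helper, not Oracle's actual share data):

```python
# Illustrative sketch: how buybacks concentrate a non-selling holder's stake.
# Numbers are hypothetical, not Oracle's actual figures.

def ownership_after_buybacks(initial_stake: float, buyback_rate: float, years: int) -> float:
    """A holder who never sells keeps a fixed share count while the
    total shares outstanding shrink by `buyback_rate` each year."""
    shares_outstanding = 1.0
    holder_shares = initial_stake  # holder's fixed fraction of the original float
    for _ in range(years):
        shares_outstanding *= (1.0 - buyback_rate)
    return holder_shares / shares_outstanding

stake = ownership_after_buybacks(0.17, 0.02, 30)
print(f"{stake:.1%}")  # roughly 31%
```

Under these assumptions, 2% annual buybacks alone lift a 17% stake to about 31% over 30 years, which suggests the climb all the way to 40% would also reflect additional share purchases or stock compensation on top of the buybacks.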
Such an incredible story.
Founders complain about dilution.
Oh, you got diluted?
Yeah.
Oh, I'm sorry.
Why don't you just buy back shares
every single year for decades?
If Ellison keeps doing this, he could very well
own 150% of his company at some point.
That's the future for OpenAI.
OpenAI just becomes the agentic organization.
It just buys back so many shares that it eventually
owns itself.
Yeah.
That's the real goal.
Anyways, Miles says, meanwhile CRM, AKA Salesforce,
made a lot of dilutive acquisitions.
And Benioff, he sells his shares yearly.
He doesn't sell them yearly. Daily.
Daily, $2 million daily.
Bucco Capital.
People say, oh, liquidity events.
They're few and far between.
Daily liquidity.
Not happening every day.
Not for Benioff.
Daily liquidity.
It's pretty good.
You know, the real question is, how much does he
pay Matthew McConaughey to just hang out?
That's got to be pricey.
I think it's only like 10 million a year, so it's like.
Yeah, couple Super Bowl ads, not bad.
So Sam responds to the hate.
He says, since a lot of folks are making the same comment
about buybacks versus sales strategies,
that is at best the noob answer.
If you're smart, you understand why
they have different paths,
and the answer is path dependency
from business model quality.
Take a 201 level class.
And then Bucco Capital Bloke quote tweets that and says,
it's a timeline in turmoil,
wrong again. CRM has executed poorly.
They've diluted shareholders with bad acquisitions.
They have 75,000 employees who they give
excessive stock-based compensation to.
They let HubSpot scale up in their face.
They've diluted rather than shrunk their share count,
versus other companies, like Adobe, that eat shares.
Investors don't trust him.
If Benioff held and cared about shareholders,
it would be a closer call.
He doesn't care.
It's not about the business model.
Well, you love to see some timeline in turmoil. Very, very fun. In other news,
Sheel Mohnot has the story about
Telegram's founder Pavel Durov,
a consistent feature on the
tech bro drip account. Everybody says they're pronatalist until you ask, how many
children have you fathered? And is there sperm donation involved?
Apparently he has fathered over 100 kids via sperm donation, and he is worth $14 billion.
And he says he'll leave his fortune to all of them, with no difference between his six kids conceived naturally
and the hundred via sperm donation. So every one of them is gonna get about $140 million
just to kick off fundraising, to start investing that.
You got your family office on day one
if you're one of Pavel Durov's kids.
Pretty remarkable.
Yeah, single LP, it's kind of a good dynamic.
Yeah, I wonder how he's gonna get liquidity
for Telegram at some point,
because you get a bunch of Telegram shares,
it's kind of like this difficult beast to wrangle.
Yeah, seems.
But I mean, I guess you take it public at some point
and get liquidity out of that.
I don't know, I mean, it also just prints money,
so even if it's like 14 billion,
you could just get a stake in the distributions
because it's making money.
I think he kind of figured out life
and wanted to make his life
basically 100 times more complicated.
How do you mean?
By having this type of dynamic,
not just with his many children
that he helped conceive directly
versus the 100 others.
So he had to one-up Elon,
had to Lil Bro Elon.
He did a little bit.
And Elon's commented on this too.
He was like, oh, I got rookie numbers.
Genghis, Genghis Khan over there
is really taking over the world.
Genghis Khan of encrypted messaging.
Yep, it's very, very odd.
OnlyCFO says, the finance department outdrinks sales.
Feels like sales is inviting finance to the party
so they can stick them with the bill. And this is
the data you can only get from ramp.com/data.
Apparently finance, marketing, and sales teams lead in alcohol spend, alcohol as a share of business meal spend,
not in that order. So marketing is absolutely dominating: 19% of all spend on alcohol.
No, no, it's 19% of business meal spend that's alcohol.
So if they go out and they're getting $81 of food,
they're adding on $19 of drinks, that's the idea.
Alcohol share.
Finance, they're getting $84 of food, $16 of booze.
You would expect-
Marketing is drinking sales under the table.
Yes, yes.
Narrative violations.
IT, in the tail end there, 9.7%. Many Huberman devotees in the IT department, apparently.
Yeah, not a power lunch category.
No, but the three-martini lunch
will make it back for the tech teams.
Should we go to this story about the Vibe Coder
who sold his business to Wix for $80 million?
It's only a six month old company
and there's no external funding.
$189,000 in profit in May.
Bryce Roberts says, just the beginning,
there's gonna be more stuff like this.
I think this is pretty cool.
Base 44 only employs six people,
hasn't raised any external funding.
The 31-year-old built a viral AI app maker
as a side project.
So you go in there, you design an app.
Obviously plays very well with Wix,
which is the website building business,
but he flipped it for $80 million,
and he's post-economic now, congratulations.
Yeah, when I saw this headline, I was confused.
I was like, okay, so he just vibe-coded something
and sold it, but it is a tool to do vibe-coding.
He built a vibe coding tool.
Trusted by over 250,000 builders worldwide,
and a nice quick flip.
Amazing.
He's basically getting a similar outcome to a founder
that sells their company for a billion dollars
but goes through a bunch of different financing rounds.
Or kind of like a mid-tier AI researcher.
Yes.
Starting, you know, like starting out.
Yes, starting out.
In other news, John Carmack is absolutely jacked.
This is fantastic news.
Yeah, Xen has the news.
He's looking very built, but John Carmack chimes in.
He says, "A chunk of this is just my wife dressing me
in tighter shirts, but I did put on several pounds
of muscle this year after switching my random grab bag
of vitamins and supplements over to Bryan Johnson's
Blueprint system."
Let's hear it for Bryan Johnson.
Really making a difference in the technology world.
"I was probably not getting enough protein
to take advantage of the exercise I was doing.
I have always been roughly upper quintile for fitness."
Let's go.
"Regular exercise, but not at the level of serious athletes
that most offices tend to have a few of."
And now he's looking built.
Palmer Luckey chimed in.
It's a great day on the timeline.
Let's check in with Tyler, close out the show.
I was gonna check in with this polymarket.
Okay, yeah.
Will Chamath launch a SPAC in 2025?
Oh yeah, I was supposed to look at this.
It is up to 70%.
It's up to 70%.
It was 33% when I posted it this morning.
Wow, that's big news.
Amazing.
It was partially because he came out and he said,
what did he say?
He asked yesterday, should I launch a SPAC?
58,000 people voted, 71% said no.
He said, I hope everyone that voted no feels seen.
Now onto business.
I got calls from many Wall Street
and crypto titans yesterday.
They all want in and their vote matters a lot to me.
So I will probably do it.
Maybe this time it will go better.
Who knows?
The risks are clear though.
The last time wasn't a success by any means.
I will include this poll and the community note
in every SEC filing possible.
It will make an excellent disclosure about the risks
and is not short of irony.
So what kind of company do you guys want?
No crying in the casino.
So people are absolutely fuming at him.
Let's go, Chamath.
You're gonna send the comments, I'm sure.
But, honestly.
Get after it.
Pretty fair post. Everyone knows what's up.
He's gonna play by the rules, you know?
And I think at the end of the day,
it's very... I look at the next Chamath SPAC as like,
it totally, like, it probably will pop.
It'll probably, it'll get a lot of attention.
It might turn into a meme stock, right?
I will be interested to see what kind of target he picks.
I'm 100% excited to follow the story.
It's gonna be fascinating.
Anyway, let's check in with Tyler and then close out the show. Tyler, I have a question for you.
Can you guess a number between 1 and 50?
Like a random number. Yeah, a random number between 1 and 50.
27.
27? Are you an LLM?
Do you see this? Every single model, they all guess 27 when you ask them to pick a number between 1 and 50.
ChatGPT, Claude, Perplexity, Meta, they all guessed 27.
We gotta get a Worldcoin orb in here to be able to prove that Tyler's not, in fact... Yes, we do.
He might just be a deepfake.
Final review.
What did you get done this show?
Did you keep playing with Midjourney? Were you doing something else? What's been going on the last couple hours?
Yeah, I think I just sent
another video. Oh, no.
Productive. You can watch this. Let's see. Oh.
I like this, the gorilla crosses behind. What's he doing? Did he just... he just took my spot.
Oh, he took your spot. Wow.
He comes in with a paper. Breaking news, breaking news gorilla.
We need to get you a breaking news gorilla outfit, and if he has breaking news, you can print it out, come sit down, take your seat.
That'd be amazing. That's good. Okay. Are there any others, or are we closing it out?
Yeah, I think that's it.
Okay, that's it. That's it. Well, good work. Good work today, Tyler.
Another productive day.
I feel like the production team
is laughing like they have some other ones back there
that are too scary for our audience.
I saw one get sent in the chat
and it just was really scary, bad looking.
Well, we will be back tomorrow.
We have a great show for you folks.
Leave us five stars in Apple Podcasts and Spotify
and thank you for watching.
Thank you for being here with us.
Fantastic show.
Have a great evening.
Goodbye. We love you.
